
Tuesday, 11 June 2013

Installing Maltego on Mac OS

In case this might help someone else: if, when installing Maltego (or, I guess, any other application that requires Java) on Mac OS, you see the error below

"LSOpenURLsWithRole() failed with error -10810 for the file"

this is because you don't have Java installed. The "user docs" actually indicate that Java is required; however, as a male, I didn't read them prior to installing because the site says

"install as normal"

:)

Anyway, run something like

java -version

from the CLI and, auto-magically, Software Update will subsequently be launched. The latest version of Java will then be installed like magic :)

You can then launch Maltego, woot :D


Tuesday, 14 May 2013

LinkedIn Emails

Receiving mails via LinkedIn is an interesting experience. For example, how many folk actually personalise "contact requests"? From what I see, less than 1%. I typically try to because I think it shows some thought has gone into the request and it's friendly, but then "manners" on the Internet are a very different thing to the real world, right ;-)

Anyway, to the point of the blog post. In early November (2012), whilst I was preparing my Security Onion presentation for IrissCon (why did I bother when my MBP died on-stage), I received a very interesting and personal email via LinkedIn. The email came from a "Senior International Belief Instigator" (let's call him the SIBI, to save me typing) at Riot Games and the email was literally awesome: it hit many of the key points that you'd hope for in a recruiter email but it also had a wonderful tone. In my ignorance, I knew of League of Legends but not Riot (yes, I am embarrassed by that). I receive a lot of mails (some from so-called "top" tech companies) through LinkedIn that resemble mass mailings or show that the recruiter has done no research; however, this mail showed that he had
  • researched my background
  • seen what I had or hadn't done in the past
  • developed a feeling for what I might like to do and I guess, developed an inkling that I may not be doing what I truly "loved"
To cut a long story short, I said (twice)

"No, thanks, sounds great but I'm not interested. I've a lot to do here where I am now."

but was eventually given the line that gets most people,

"Sure, why don't we just meet for a pint in Dublin!"

I then started doing my research, checking out the game and learning about eSports (I knew it existed but I'd no idea of the sheer size and the incredible growth) - I gradually became very excited. So on the night of the 2012 Riot Games Christmas party, I had a pint and learned a lot about the technical and engineering challenges that Riot were and are facing. I left that pub feeling like a kid in a candy store or something like that :p

Over the course of the next 6 months, I chatted to many folk within Riot (primarily on the Operations and Security teams and the SIBI, of course) and this included a very quick trip to LA for a round of interviews and dinner. That candy store soon grew to something like this - 


So despite being pretty happy in 10gen, witnessing remarkable growth, learning MongoDB, developing a security awareness/educational program and helping drive new security features, something was missing and my heart was pointing towards the challenges at Riot. 

To be honest, the overall interview/headhunting process was pretty unreal and I feel incredibly lucky and humbled that Riot chose to talk with me and see if I was a fit. The one thing that struck me throughout the process (and this is very abnormal for many "awesome" techies) is how un-arrogant and humble everyone I talked to was. Don't get me wrong, I was drilled and I had some tough questions, but the atmosphere was welcoming and relaxed (the 48-hour round-trip to LA was memorable and without jetlag).

Now I keep thinking,

"I get to play computer games at work"

Tomorrow (15th May, 2013) is day "numero uno" of many, I hope, and yep, I'm a little nervous but man, so excited! In my opinion, there are a few morals to the story - 
  • Answer those LinkedIn mails, you never know what might happen.
  • If you're a recruiter, do your research and personalise your communication. You never know what might happen, plus a pint is always a good start (well, in Ireland anyway).
  • There are some cool technology jobs at the minute and luckily, for me, some are in Dublin :)
  • I'm incredibly lucky (currently waiting for someone to wake me up with a kick to the balls) and I'd two great companies who wanted me to work for them. Thanks to both 10gen (Meghan - I'm still going to do that Kerberos blog :) ) and Riot!
  • I can be bribed with a 


Friday, 3 May 2013

MongoDB Authori(s|z)ation

Introduction

Having answered numerous questions on the new and old authori(s|z)ation within MongoDB, I thought I'd write a short blog post explaining how things work as there seems to be some confusion.

What's New

Prior to version 2.4, there was a very basic sense of "Role Based Access Controls" (RBAC) within MongoDB as there were only two roles -
  • read
  • readWrite
which is quite limited. For example, if a user has "readWrite", that user is essentially "root": the user can add/remove users as well as insert data into the database, i.e. there is no role segregation.

Version 2.4 added in the following 3 core roles -
  • userAdmin
  • dbAdmin
  • clusterAdmin
with a notable extension such that there are now 4 roles that apply across all databases -
  • readAnyDatabase
  • readWriteAnyDatabase
  • userAdminAnyDatabase
  • dbAdminAnyDatabase
This increased RBAC is a significant improvement from a security perspective in MongoDB. It is important to note that these 4 roles above (along with clusterAdmin) can only be defined in the admin database (yes, in 2.4, the admin database is still "special"). Therefore, there are a total of 9 possible roles within MongoDB (in version 2.4).

In summary,
  • userAdmin - is used to add/remove users and grant permissions. This role won't be used that often but is clearly very powerful. It removes the ability of a "readWrite" user to view and modify the system.users collection.
  • dbAdmin - this user can perform administrative operations across a single database, such as compacting a collection, creating/removing indexes, creating/removing collections, dropping the database, etc. You will probably use this role quite a bit.
  • clusterAdmin - this user performs various administrative operations across a whole system rather than just a single database, for example, various replica set modification commands, listing of databases, enabling sharding, etc.
  • I'm assuming that read and readWrite are self-explanatory.
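As a purely illustrative sketch (my own, not MongoDB code), the 2.4 role model can be thought of as a simple lookup table: four roles that can be granted per database, and five that only exist in the admin database -

```python
# Hypothetical model of the MongoDB 2.4 role set (the role names are real,
# but this helper is mine and purely for illustration).
PER_DB_ROLES = {"read", "readWrite", "userAdmin", "dbAdmin"}
ADMIN_ONLY_ROLES = {
    "clusterAdmin",
    "readAnyDatabase",
    "readWriteAnyDatabase",
    "userAdminAnyDatabase",
    "dbAdminAnyDatabase",
}

def allowed_roles(db_name):
    """Return the roles that may be granted to a user in a given database."""
    if db_name == "admin":
        return PER_DB_ROLES | ADMIN_ONLY_ROLES
    return PER_DB_ROLES

print(len(PER_DB_ROLES | ADMIN_ONLY_ROLES))  # 9 - the total mentioned above
```

So trying to grant clusterAdmin in, say, the foo database is not allowed; it has to be granted in admin.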

Validation

As of version 2.4.3, it is still possible to add a user with an incorrectly spelled role. MongoDB does not verify that the role is valid, so granting the misspelled role is effectively a "no-op". For example -

> use admin

> db.system.users.find()
{ "_id" : ObjectId("48311be63b9ed10d6c6761b1"), "user" : "userA", "pwd" : "1df873723e72b2323af3a52f40833cdc", "roles" : [ "userAdminAnyDatabase", "clusterAdmin", "reader" ] }
{ "_id" : ObjectId("48311be63b9ed10d6c6761b2"), "user" : "userB", "pwd" : "7pqr2f0bc30c4c78f4f5ff23986129ac", "roles" : [ "readWriteAnyDatabase", "clusteradmin", "reader" ] }

where "userA" has an incorrect role of reader and "userB" has two incorrect roles of clusteradmin and reader.

Validation and checking of "real" user roles will come later in 2.4 as per SERVER-9446.
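Until that server-side validation lands, you could guard against typos yourself before calling db.addUser(). A hypothetical client-side check (the role names are real; the helper is mine) -

```python
# All role names defined in MongoDB 2.4; anything else is silently ignored
# by the server, so flag it before adding the user.
VALID_ROLES = {
    "read", "readWrite", "userAdmin", "dbAdmin", "clusterAdmin",
    "readAnyDatabase", "readWriteAnyDatabase",
    "userAdminAnyDatabase", "dbAdminAnyDatabase",
}

def unknown_roles(roles):
    """Return the role names that MongoDB 2.4 does not define."""
    return [r for r in roles if r not in VALID_ROLES]

# userB's document from above: "clusteradmin" and "reader" are typos
print(unknown_roles(["readWriteAnyDatabase", "clusteradmin", "reader"]))
```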

How Does the New Authori(s|z)ation Work?

To explain this, I'm actually going to move from using "local" authentication to "external" authentication; however, the actual authentication mechanism is irrelevant. At present, MongoDB only supports Kerberos as an external authentication method and it's only available in the Enterprise edition.

Therefore, ignoring the majority of the Kerberos workings (I will discuss those in a later blog post), below the user has already connected to the server with the mongo shell and first switches to "$external" (not a database but more of an artifact), before authenticating using the Kerberos name (i.e. username plus REALM) -

> use $external
switched to db $external
> db.auth( { mechanism : "GSSAPI" , user: "mongouser@REALM.10GEN.ME" } )
1

Let's try to add a user with the db.addUser() function without switching to the correct database (admin) -

> db.addUser( { "roles" : [ "readWriteAnyDatabase", "userAdminAnyDatabase", "dbAdminAnyDatabase", "clusterAdmin" ], "user" : "userA@REALM.10GEN.ME", "userSource" : "$external" } )
{
 "roles" : [
  "readWriteAnyDatabase",
  "dbAdminAnyDatabase",
  "clusterAdmin"
 ],
 "user" : "userA@REALM.10GEN.ME",
 "userSource" : "$external",
 "_id" : ObjectId("518386c0ae6f8072686272bb")
}
Wed Feb 27 12:17:37.912 couldn't add user: cannot insert into reserved $ collection src/mongo/shell/db.js:128

As expected (even with userAdminAnyDatabase permissions), it is not possible to add users to the $external artifact. Therefore, switching to admin and adding user "userA" -

realm:PRIMARY> use admin
switched to db admin
realm:PRIMARY> db.addUser( { "roles" : [ "readWriteAnyDatabase", "dbAdminAnyDatabase", "clusterAdmin" ], "user" : "userA@REALM.10GEN.ME", "userSource" : "$external" } )
{
 "roles" : [
  "readWriteAnyDatabase",
  "dbAdminAnyDatabase",
  "clusterAdmin"
 ],
 "user" : "userA@REALM.10GEN.ME",
 "userSource" : "$external",
 "_id" : ObjectId("518386ccae6f8072686272bc")
}

Still within the admin database, we can see that the user "userA" exists and authenticates from an external source.

realm:PRIMARY> db.system.users.find()
{ "_id" : ObjectId("512e3fc63a749d1baf9cd1f7"), "roles" : [     "readWriteAnyDatabase",   "dbAdminAnyDatabase",     "clusterAdmin" ], "user" : "userA@REALM.10GEN.ME", "userSource" : "$external" }

Authori(s|z)ation in a Sharded Cluster

Unsurprisingly, authori(s|z)ation in a sharded cluster causes confusion among MongoDB users and, actually, even within 10gen staff (you can't know every single element of a database, right :p ).

When you create users through mongos (as recommended), the users for the admin and config databases are stored on the config servers (in their respective system.users collections). The users for other databases are stored in the system.users collections for each database and these credentials are stored on the primary shard (for that sharded database). Every replica set should have a user in the replica set's admin database for performing administrative actions and this admin.system.users collection is obviously unsharded.

So here's a quick example (plagiarised heavily from Spencer) -

Consider a sharded cluster with 3 shards:
  • shardA
  • shardB
  • shardC
You have 3 databases:
  • dbA
  • dbB
  • dbC
where
  • shardA is the primary shard for dbA
  • shardB is dbB's primary shard
  • and shardC is dbC's primary shard
When adding users in a sharded infrastructure, it is highly recommended to do this through a mongos (remember that connecting to a shard directly can cause many issues, such as inconsistent configurations and data). Therefore, consider the scenario where you connect to a mongos and create 4 users -
  • 1 ("clusterAdminUser") in the admin database
  • and 1 in each of dbA, dbB and dbC (userA, userB and userC respectively).
If you connect to mongos, you will be able to authenticate as any of the 4 aforementioned users.

Now, let's say you make a direct connection to shardA.  You will see and be able to authenticate as userA on database dbA.  None of the other users will be visible, so you will not be able to authenticate as the clusterAdminUser, userB, or userC.
  • If you make a connection to shardA from a different machine, then you will not be authori(s|z)ed to perform any actions at all, unless you authenticate as userA, in which case you will be able to access dbA, but no other databases.
  • If you connect to shardA from a localhost connection, however, since that shard doesn't have any users in its admin database, localhost connections will be given full access - see the localhost auth exception for more information. As a result, any connection from localhost to any of the shards will be able to access any information on that shard, in any database. To prevent this, you can do one of two things:
  • In 2.4, you can set the enableLocalhostAuthBypass parameter to false.
  • In both 2.4 and pre-2.4 versions, you can add an admin user directly to the shard. It is important to note that once you add a user to the admin database, every connection must authenticate (in order to have access), even those from localhost. An administrator making a direct connection to the shard can then authenticate as that user to gain authori(s|z)ation on that shard (remember that access to the shard can obviously also be controlled by a firewall, such as iptables). Remember also that each shard's admin database users will be completely distinct from each other, and from the cluster's admin database users.
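The visibility rules above can be modelled as a toy lookup; the shard/database/user names follow the example, and the function is purely illustrative -

```python
# Toy model of which users a direct connection can see in the example
# cluster (illustrative only - not how MongoDB stores anything).
PRIMARY_SHARD = {"dbA": "shardA", "dbB": "shardB", "dbC": "shardC"}

def users_visible_on(target):
    """Which of the four example users a connection to `target` can see."""
    if target == "mongos":
        # mongos consults the config servers, so all users are visible.
        return {"clusterAdminUser", "userA", "userB", "userC"}
    visible = set()
    for db, shard in PRIMARY_SHARD.items():
        if shard == target:
            visible.add("user" + db[-1])  # e.g. userA lives on dbA's shard
    return visible

print(users_visible_on("shardA"))  # only userA - no clusterAdminUser here
```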

Some Things to Note

Multiple Sessions


To check who you are logged in as run

> db.runCommand({connectionStatus: 1})
which should return something like
{
 "authInfo" : {
  "authenticatedUsers" : [
   {
    "user" : "mongouser@REALM.10GEN.ME",
    "userSource" : "$external"
   }
  ]
 },
 "ok" : 1
}

It is worth emphasising that in MongoDB, authenticating as another user does not close the original user session but simply appends the new authentication. Therefore, consider two users "read" and "readWrite". User "read" is logged in under the foo database, with "read" permissions and user "readWrite" is logged in under the admin database with "readWriteAnyDatabase" permissions.

If you run the `connectionStatus` command, you can see under the field authenticatedUsers that there are two users listed -

> db.runCommand({connectionStatus: 1})
{
   "authInfo": {
     "authenticatedUsers": [{
       "user": "readWrite",
       "userSource": "admin"
     }, {
       "user": "read",
       "userSource": "foo"
     }]
   },
   "ok": 1
}

The MongoDB server does not replace the current user session with a new one; instead, the new session is appended. However, if you have two users with different permissions for the same database, then the user that authenticates last overwrites the authenticated session of the first user - only one authenticated user per database is kept. Say -

> use test
switched to db test 
> db.auth('readWrite', 'a')
1
> db.auth('read', 'a')
1
> db.runCommand({ connectionStatus: 1 })
{
      "authInfo": {
        "authenticatedUsers": [{
          "user": "readWrite",
          "userSource": "admin"
        }, {
          "user": "read",
          "userSource": "foo"
        }]
      },
      "ok": 1
}
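A toy model of that session behaviour (my own sketch, not server code): new authentications are appended, but a second authentication from the same source database replaces the first -

```python
def authenticate(session, user, user_source):
    """Append a principal to the session, replacing any existing one from
    the same source database (mirrors the 2.4 behaviour described above)."""
    session = [u for u in session if u["userSource"] != user_source]
    session.append({"user": user, "userSource": user_source})
    return session

session = []
session = authenticate(session, "readWrite", "admin")
session = authenticate(session, "read", "foo")
print(len(session))  # 2 - both users listed, as in connectionStatus above
session = authenticate(session, "otherUser", "foo")
print(len(session))  # still 2 - "read" on foo was overwritten
```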

Run db.logout() to log out, ensuring you are in the correct database.

To verify a successful logout, run the connectionStatus command again:

> db.runCommand({ connectionStatus: 1 })

{ "authInfo" : { "authenticatedUsers" : [ ] }, "ok" : 1 } 

rs.conf()


To run this command, you actually only need read permissions on the local database (the replica set configuration is stored in local.system.replset).

Stopping the Balancer


Unfortunately, stopping/starting the balancer is not done by a command but by updating the config.settings collection in the config database, i.e.

> sh.stopBalancer()

results in the following update
mongos> db.settings.update({"_id" : "balancer"}, {"$set" : {"stopped" : true }}, true)

As a result, the user must have readWrite permissions on that database. Therefore, for an administrator to control the various aspects of replication and sharding with a MongoDB cluster, the administrator user will require permissions of readWrite on the config database and clusterAdmin in the admin database.
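Put another way, a minimal grant set for such an administrator might look like this (a hypothetical helper of mine, based on the behaviour just described) -

```python
# Minimal per-database grants for a "cluster operator", per the
# sh.stopBalancer() behaviour above (illustrative sketch only).
OPERATOR_GRANTS = {
    "admin": ["clusterAdmin"],
    "config": ["readWrite"],  # needed to update config.settings
}

def can_stop_balancer(grants):
    """Stopping the balancer is an update on config.settings, so the
    user needs readWrite on the config database."""
    return "readWrite" in grants.get("config", [])

print(can_stop_balancer(OPERATOR_GRANTS))  # True
print(can_stop_balancer({"admin": ["clusterAdmin"]}))  # False - no config grant
```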

listDatabases


In order to run "show dbs", i.e. to list the databases, you need to have clusterAdmin permissions. The example below shows this, as well as a few other things.

The first step after connecting is to authenticate as user readWrite. That user cannot view or modify admin.system.users, as userAdmin permissions are required.
> use admin
switched to db admin
> db.auth( "readWrite" , "a" )
1
> db.system.users.find()
error: { "$err" : "not authorized for query on admin.system.users", "code" : 16549 }
> db.addUser( { user :"userAdmin", pwd : "a", "roles" : [ "userAdminAnyDatabase" ] } )
{
 "user" : "userAdmin",
 "pwd" : "abe283ad980a8b483d3cc9925fe0b20f",
 "roles" : [
  "userAdminAnyDatabase"
 ],
 "_id" : ObjectId("51839706ac2cdde5df6ba08d")
}
Fri May  3 06:52:54.265 JavaScript execution failed: couldn't add user: not authorized for insert on admin.system.users at src/mongo/shell/db.js:L128
The user only has readWrite permissions and so the user is unable to list the databases on the MongoDB instance.
> db.runCommand( { "connectionStatus" : 1 } )
{
 "authInfo" : {
  "authenticatedUsers" : [
   {
    "user" : "readWrite",
    "userSource" : "admin"
   }
  ]
 },
 "ok" : 1
}
> show dbs
Fri May  3 06:39:10.174 JavaScript execution failed: listDatabases failed:{
 "note" : "not authorized for command: listDatabases on database admin",
 "ok" : 0,
 "errmsg" : "unauthorized"
} at src/mongo/shell/mongo.js:L46
Then authenticate as the super-user mongouser, who has clusterAdmin privileges (as shown a few steps before). As a result, the databases can now be listed.

> use $external
switched to db $external
> db.auth({ mechanism: "GSSAPI", user: "mongouser@REALM.10GEN.ME" })
1
mongos> show dbs
$SERVER (empty)
$external (empty)
accounts 0.125GB
admin 0.046875GB
config 0.046875GB
db1 0.0625GB
foo 0.0625GB
fred 0.0625GB
mark 0.0625GB
test 0.0625GB
test1 0.0625GB
test2 0.0625GB
twitter 0.25GB

FYI

Please note that these new role-based access controls are available in all editions of MongoDB 2.4 and are not restricted to the Enterprise edition.

Further Reading

  • My "Securing MongoDB" presentation from #MongoDBdays can be found here.
  • The official MongoDB documentation for "User privileges" can be found here. This documentation goes through each of the possible roles within MongoDB and the commands that those roles can execute.
  • The official MongoDB documentation for "Privilege Documents", which store user credentials and role information, can be found here.
  • Some useful tutorials on administering users etc. can be found here.

This post went on longer than it should have, doh :( Hopefully, however, it explains how authori(s|z)ation now works and how it is much more extensive than in previous versions.

Please let me know if there are any errors (it was a little rushed) and I hope to do a post explaining Kerberos authentication within MongoDB before I leave 10gen.

Wednesday, 10 April 2013

MongoDB London 2013

I'm just back from a superb (or in "American" : super exciting) MongoDB London conference. I've been told there were over 500 attendees and as you can see below, the venue was sweet :)


I have uploaded my slides to SpeakerDeck; I'm no longer using Slideshare as a repository for my presentations (I just don't think it's as good visually or in terms of usage experience).
Thankfully the two presentations went well and I received great feedback (hey @rozza, no yellow discs buddy - yellow discs indicated a sub-par presentation, blue = good and green = excellent :) ). More importantly, there was no "Black Screen of Death" on the MBP unlike at IrissCon!

Please have a look at the slides and if there are any questions or mistakes, please let me know.

Thanks to the whole 10gen team (especially the community folk) for organising such an awesome event and more importantly, thanks to all those who took the time to attend (especially those who flew in)!!!

Personally, I'm a huge Star Wars fan and I used some of the Lego Star Wars pictures from the Mike Stimpson website here. I previously used these images as part of a presentation on another open-source product, Security Onion, and emailed to inform them of their use. As a token of gratitude, I purchased a book of Mike's images and I'm sure when I receive it, I'll want to purchase more (much to my wife's chagrin). If you're a Star Wars or Lego fan, I'd strongly encourage you to check out Mike Stimpson's work, it's awesome!!!!!

Sunday, 13 January 2013

Github Repo of Pcaps

I became a little tired of manually downloading pcaps from the various freely available resources on the net, so I added a pcap repo to my very basic GitHub account so that I could simply `git clone` from wherever I am - assuming decent bandwidth and that GitHub doesn't get too pissed at me storing pcaps; as it's only 1.1GB at present, I doubt they will (/me hoping :) ).

I'm not really sure what I plan to do with this repo; I guess I'll extend it and add more as I get around to it. My sole goal at the minute is to use the awesome functionality of GitHub to manage and modify the content, as well as making it easier for me to remember where the pcaps are ;-)

Tuesday, 8 January 2013

Separate MongoDB Syslog by Facility

In my last post, I showed how you can set up MongoDB v2.2 to syslog its logs off to a remote syslog server. As my `tcpdump` snippets show, the syslog messages hit the syslog server tagged as "user.info", which means that they're assigned to the "user" facility with a severity level of "info".

I've received a few questions regarding the possibility of splitting out syslog messages by facility; however, as everything is currently sent to a "user.info" bucket, so-to-speak, this is not possible. There is a current feature request for this capability and work will be done on it, but if this is important to you, I'd strongly encourage you to vote for the feature.

In the meantime, however, (whilst not ideal) you can still do some host filtering with rsyslog as outlined here.
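For that host-based filtering, a minimal rsyslog sketch might look like the following (the IP, file path and exact property-filter syntax are assumptions - check the rsyslog documentation for your version):

```
# /etc/rsyslog.conf (sketch): write messages from the mongod host to their
# own file, then discard them so they don't also land in /var/log/syslog.
:fromhost-ip, isequal, "10.7.100.6"    /var/log/mongod.log
& ~
```

(On newer rsyslog versions, `& stop` replaces the deprecated `& ~`.)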

Tuesday, 1 January 2013

MongoDB Logging to Remote Syslog Server

As per the MongoDB 2.2 release notes, log output for MongoDB can now be redirected to a remote syslog server.

Here is an example configuration.

MongoDB Instance

MongoDB is started as follows (note the extra `--syslog` switch):

 $ mongod --dbpath=/data/db/syslog --fork --syslog  

The local "/etc/syslog.conf" file (i.e. on the `mongod` instance) is configured to send everything to the syslog server (10.7.100.20):

*.*    @10.7.100.20:514

Syslog Server

I ran my Syslog server on Ubuntu 12.04. There are a tonne of links out there describing how to install syslog on Ubuntu - see here. The syslog "facilities" are configured in the server's `/etc/syslog.conf` file (I left this as default):

#################################################################################
#
# First some standard logfiles.  Log by facility.
#
auth,authpriv.*            /var/log/auth.log
*.*;auth,authpriv.none        -/var/log/syslog

#cron.*                /var/log/cron.log
daemon.*            -/var/log/daemon.log
kern.*                -/var/log/kern.log
lpr.*                -/var/log/lpr.log
mail.*                -/var/log/mail.log
user.*                -/var/log/user.log
.....
.....
*.=info;*.=notice;*.=warning;\
        auth,authpriv.none;\
        cron,daemon.none;\
        mail,news.none          -/var/log/messages
#################################################################################

We then need to enable the syslog server to accept remote syslog messages as follows:

more /etc/default/syslogd 
#################################################################################
#
# Top configuration file for syslogd
#
# Full documentation of possible arguments are found in the manpage
# syslogd(8).
#
# For remote UDP logging use SYSLOGD="-r"
#
SYSLOGD="-r"
#################################################################################

Using `tcpdump`, we can see the syslog messages arriving at the syslog server from the `mongod` instance:

#################################################################################
01:27:40.675624 IP 10.7.100.6.55318 > 10.7.100.20.514: SYSLOG user.info, length: 111

    0x0000:  4500 008b 335e 0000 4011 6adc 0a07 6406  E...3^..@.j...d.

    0x0010:  0a07 6414 d816 0202 0077 0a1e 3c31 343e  ..d......w..<14>
    0x0020:  4465 6320 3134 2031 313a 3537 3a31 3720  Dec.14.11:57:17.
    0x0030:  6d61 726b 2d6d 6270 2e6c 6f63 616c 206d  mark-mbp.local.m
    0x0040:  6f6e 676f 642e 3135 3030 315b 3433 3032  ongod.15001[4302
    0x0050:  5d3a 2046 7269 2044 6563 2031 3420 3131  ]:.Fri.Dec.14.11
    0x0060:  3a35 373a 3137 205b 696e 6974 616e 646c  :57:17.[initandl
    0x0070:  6973 7465 6e5d 2072 6563 6f76 6572 2063  isten].recover.c
    0x0080:  6c65 616e 696e 6720 7570 0a              leaning.up.
01:27:40.675703 IP 10.7.100.6.55318 > 10.7.100.20.514: SYSLOG user.info, length: 110
    0x0000:  4500 008a 0d49 0000 4011 90f2 0a07 6406  E....I..@.....d.
    0x0010:  0a07 6414 d816 0202 0076 df4a 3c31 343e  ..d......v.J<14>
    0x0020:  4465 6320 3134 2031 313a 3537 3a31 3720  Dec.14.11:57:17.
    0x0030:  6d61 726b 2d6d 6270 2e6c 6f63 616c 206d  mark-mbp.local.m
    0x0040:  6f6e 676f 642e 3135 3030 315b 3433 3032  ongod.15001[4302
    0x0050:  5d3a 2046 7269 2044 6563 2031 3420 3131  ]:.Fri.Dec.14.11
    0x0060:  3a35 373a 3137 205b 696e 6974 616e 646c  :57:17.[initandl
    0x0070:  6973 7465 6e5d 2072 656d 6f76 654a 6f75  isten].removeJou
    0x0080:  726e 616c 4669 6c65 730a                 rnalFiles.
#################################################################################

The logs from the `mongod` instance will typically be located in `/var/log/messages` on the syslog server:

#################################################################################
Nov 16 01:27:40 10.7.100.6 mark-mbp.local mongod.15001[4302]: Fri Dec 14 11:57:17 [initandlisten] recover create file /data/db/syslog/syslog.ns 16MB 
Nov 16 01:27:40 10.7.100.6 mark-mbp.local mongod.15001[4302]: Fri Dec 14 11:57:17 [initandlisten] recover create file /data/db/syslog/syslog.0 64MB 
Nov 16 01:27:40 10.7.100.6 mark-mbp.local mongod.15001[4302]: Fri Dec 14 11:57:17 [initandlisten] recover cleaning up 
Nov 16 01:27:40 10.7.100.6 mark-mbp.local mongod.15001[4302]: Fri Dec 14 11:57:17 [initandlisten] removeJournalFiles 
Nov 16 01:27:40 10.7.100.6 mark-mbp.local mongod.15001[4302]: Fri Dec 14 11:57:17 [initandlisten] recover done 
Nov 16 01:27:40 10.7.100.6 mark-mbp.local mongod.15001[4302]: Fri Dec 14 11:57:17 [websvr] admin web console waiting for connections on port 16001 
Nov 16 01:27:40 10.7.100.6 mark-mbp.local mongod.15001[4302]: Fri Dec 14 11:57:17 [initandlisten] waiting for connections on port 15001 
#################################################################################
So as you can see, it's quite simple to ship your MongoDB logs off to a centralised syslog server. If you want to keep an eye on "logging"-related MongoDB feature requests and bugs, check out this JIRA link.