The Recycle Bin

A repository of comments, code, and opinions.

Posts Tagged ‘Security’

Public Wi-Fi Dangers with Android Phones

leave a comment »

Turns out there is a nasty vulnerability affecting pretty much every Android phone out there.  Since it involves connecting to public Wi-Fi networks, it seems like a good follow-up to one of my previous posts.

Here’s how it goes: when you connect to a site like Facebook for the first time, you exchange your credentials with the site and in return the site generates a unique session ID.  After the credential exchange, all you need is your session ‘token’ to authenticate with the site.  This token is only valid for a period of time configured by the site (some last 30 minutes, some two weeks, some never expire).  This allows you to revisit the site and remain logged in without entering your credentials every time.  Android phones are set by default to automatically reconnect to Wi-Fi networks that they have already connected to.  Once connected, the apps on the phone automatically connect to their corresponding web services, either exchanging session tokens or real usernames and passwords.
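To make the token flow concrete, here’s a rough sketch in Python using the requests library (the URLs, form fields, and cookie name are made up for illustration):

```python
import requests

# Log in once; this is the only time the real username/password cross the wire.
session = requests.Session()
session.post("https://example.com/login",
             data={"user": "alice", "password": "hunter2"})

# The server's Set-Cookie response left a session token behind.
print(session.cookies.get_dict())        # e.g. {'sessionid': 'a1b2c3...'}

# From here on, the cookie alone authenticates every request -- no password needed.
resp = session.get("https://example.com/messages")
print(resp.status_code)
```

An app on your phone is doing essentially the same thing every time it wakes up and syncs.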

Here is what an attacker can do: if a person has connected their phone to a common public Wi-Fi hotspot, say Starbucks’ Wi-Fi, whose SSID just happens to be “Starbucks”, then the next time their phone sees a network named “Starbucks” it will automatically connect to it.  All an attacker has to do is set up a malicious hotspot near a real one, give it the same name, and wait for a phone to come into range and automatically connect.  Once a phone connects, the attacker can grab its session tokens and operate on those sites as that user, as well as eavesdrop on everything being sent to the services.

Since most sites only use SSL for logging in, your username and password for the service are protected.  However, there is a pretty good chance that the site does not use SSL for the rest of the session and simply sends the session token in plain text.  The problem is that Android will reuse the session tokens generated while the phone was connected to the previously trusted Wi-Fi network, meaning that your session ID is fair game for anyone to use.
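If you want to see for yourself what an eavesdropper on the same open network sees, a passive sniffer is enough. This is only a rough sketch using scapy (it needs root privileges and the right interface, and the parsing is deliberately naive):

```python
from scapy.all import Raw, TCP, sniff

def show_cookies(pkt):
    # Session cookies travelling over plain HTTP (port 80) are visible as-is.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw) and b"Cookie:" in pkt[Raw].load:
        cookie_line = pkt[Raw].load.split(b"Cookie:")[1].split(b"\r\n")[0]
        print(cookie_line.strip().decode(errors="replace"))

# Capture HTTP traffic on the hotspot; anything not wrapped in SSL shows up here.
sniff(filter="tcp port 80", prn=show_cookies, store=False)
```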

The best way to mitigate this is to turn off the option to automatically reconnect to networks that the phone has already connected to.

Much more detail here: http://www.uni-ulm.de/en/in/mi/staff/koenings/catching-authtokens.html

Written by Nathan

May 19, 2011 at 10:39 am

Posted in Security


Secure Mobile Browsing

leave a comment »

I’ve written and rewritten this post about three times already, and I’m tired of thinking about it. I’m just going to write out whatever comes to mind on the subject and post it as is.

This started as a conversation with my wife about the safety of mobile banking. We were driving away from our credit union and I wanted to check whether a transaction had posted yet, so I pulled out my smartphone, browsed to the credit union’s site, and checked. She told me that she has reservations about browsing sensitive sites (banking, email, etc.) on a smartphone.

With this post I intend to explain what happens when you browse the mobile web and hopefully put to rest some fears.

Websites can be connected to in two different ways: unencrypted (HTTP) or encrypted (HTTPS). Most sites use a hybrid approach: HTTPS for the login or checkout page, and HTTP for the rest of the site. Banks, however, should use HTTPS 100% of the time. I blogged a while back about HTTPS, so I’m just going to paste what I wrote then here:

[With HTTPS] the goal is to establish a unique encryption session between your computer and the server, so that eavesdroppers aren’t able to steal your valuable information as it gets sent along the line. This is accomplished by using the Secure Socket Layer (SSL) protocol. SSL uses public-key cryptography to securely establish a session (symmetric) key that is used to protect the subsequent data. This is how it works:

  • Client (you and your browser) connects to a server over https:// (port 443)
  • Server sends you its public certificate – this certificate contains the server’s public key
  • Client generates a random number, encrypts it with the server’s certificate and sends it – this number is the premaster key
  • Server takes the premaster key along with some other random numbers that were exchanged and generates the session key

Now that you and the server have agreed on the same key, all the data sent from this point forward will be encrypted.

So we can now settle on the fact that if the site you are browsing to is using HTTPS, then no one can eavesdrop and steal any of your info. This is true whether you are browsing from your home computer, work computer, mobile phone connected to 3G, or mobile phone connected to an open Wi-Fi network.
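If you’re curious, you can watch the negotiation happen. Here’s a small sketch using Python’s ssl module (the hostname is just an example); the same handshake happens whether the client is a desktop browser or a phone:

```python
import socket
import ssl

hostname = "www.example.com"                 # placeholder for your bank, webmail, etc.
context = ssl.create_default_context()       # uses the platform's trusted CA store

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                 # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.cipher())                  # (cipher suite, protocol, key bits)
```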

So, why do some people say it is unsafe? There is an added risk of phishing attacks when you are browsing on a network you don’t trust (unsecured open Wi-Fi). Since all the traffic from your phone to the website has to travel through this router that you don’t trust, the attacker could send you to a page that looks just like your bank’s page, at which point you will log in, get some sort of error, and then be redirected to the real page. You will probably think that you typed your password wrong, try again, get through, and never think twice about it.

So that is the only reason why you shouldn’t do any sensitive browsing on an open Wi-Fi network. All other networks are safe, provided the site uses HTTPS. If it doesn’t, it doesn’t matter whether you trust the network or not; anyone in the path can see everything you do.

I would love some questions if anything is unclear, and I will do a follow-up.

Written by Nathan

May 11, 2011 at 10:50 am

Posted in Security


Holy anti-feature, Batman

leave a comment »

I had the opportunity to attend a talk by Mark Russinovich, of Sysinternals fame, during last week’s Trustworthy Computing Conference.  The topic of the talk was security boundaries in Windows, and more specifically, what is not a security boundary.  The talk was very interesting, and I don’t want to reveal too much here, but there was one part of it that stuck with me and has been bothering me for a little while now.  One of the technologies he addressed was Patchguard, or Kernel Patch Protection (KPP), which was introduced in 64-bit Vista and Server 2008.  Patchguard is intended to keep programs from patching, hooking, or otherwise tampering with the internals of the NT kernel.  It does this by periodically taking a checksum of some important structures in the kernel (SSDT, interrupt table, HAL tables, etc.) and comparing the current value with the previous one.  Any discrepancy indicates that the kernel has been subverted.  If it notices any changes to these structures, it raises an exception that results in a blue screen.  Sounds good, right?  Sounds like a great new security feature, no more rootkits!  Well, not really.  The truth is, KPP does almost nothing to stop malicious code.  Mark revealed in his talk that that was never the intention of KPP; rather, it was conceived as a way to force legitimate developers to stop using these techniques in their own programs.  See, most anti-virus and security products use some level of system hooking in order to get a good view of activity.  In fact, one of Mark’s very own tools, RegMon, hooks the SSDT to watch registry activity.  He even wrote a publication about the technique!  The problem with kernel hooking is that it is entirely unsupported and significantly reduces stability.
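Patchguard itself is undocumented kernel code, so don’t take this as its implementation, but the checksum-and-compare idea looks roughly like this user-mode Python sketch (the table and addresses are invented):

```python
import hashlib

def table_checksum(table):
    # Hash the contents of the (simulated) dispatch table.
    return hashlib.sha256(repr(sorted(table.items())).encode()).hexdigest()

# Pretend SSDT: system call names mapped to handler addresses.
ssdt = {"NtOpenFile": 0xFFFF0001, "NtSetValueKey": 0xFFFF0002}
baseline = table_checksum(ssdt)

# A driver "hooks" NtOpenFile by pointing the entry at its own routine...
ssdt["NtOpenFile"] = 0xDEADBEEF

# ...and the next periodic check notices the mismatch and bug-checks the machine.
if table_checksum(ssdt) != baseline:
    raise SystemError("kernel structure modified: CRITICAL_STRUCTURE_CORRUPTION")
```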

So here’s what I don’t understand.  Microsoft has recognized that system hooking leads to instability.  They’ve decided that programmers aren’t good enough to extend a kernel function safely without throwing a blue screen exception, so now they’re not going to allow us to hook certain system structures (pfft, allow is a funny thought).  But instead of actually fixing the gaping holes in their system, they’re going to simply watch for system hooking and then guarantee that the system will crash, by causing the crash themselves.  Oh yeah, and they are going to blame it on the developer.  It’s like a car company deciding that talking on your cell phone while driving is dangerous, so they create a system that detects if you’re on the phone and then drives the car off the road for you.  That will show you!

I just don’t understand this anti-feature.  There are plenty of legitimate reasons for hooking these system functions, and it can be done safely.  I know it can because Mark has done it, and I’ve done it.  If you don’t want developers to subvert the kernel, then provide a complete API that we can use to extend and monitor the system, and fix the problems with your system that allow someone to take write-protected virtual memory, map it to physical memory, strip all the restrictions off of it, and send it back patched.  Don’t just come along behind perfectly valid code, throw an exception, and blame it on us.

Written by Nathan

June 13, 2008 at 2:35 am

Safari Carpet Bomb (Update)

leave a comment »

I love being right.  Remember the Safari carpet bomb I posted about back in April?  Remember how Apple said it wasn’t a “security concern” and I scolded them for it?  Well, now it’s gotten interesting.  Apparently there is a known flaw in Internet Explorer that allows a website to execute any program on the user’s desktop without their consent.  Normally, this flaw isn’t as much of a concern because all new executables downloaded (by anything but Safari) get marked with an alternate data stream tag that indicates the file came from the Internet Zone.  Any time an application with this tag is opened, the user is prompted and the action must be explicitly allowed.  Once we include Safari’s carpet-bombing technique, which downloads an exe without notification or ADS marking, this IE flaw becomes a critical security concern.  This is a great example of what is called a blended threat: two seemingly innocuous bugs combine to create a gaping security hole.  The IE team was not concerned with their bug because there was no way to get an unmarked exe onto the desktop without the user knowing, and the Safari team wasn’t concerned with theirs because you couldn’t automatically execute the exes that it downloaded.
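For the curious, the mark in question is an alternate data stream named Zone.Identifier attached to the downloaded file. Here’s a quick way to peek at it (Windows/NTFS only; the path is hypothetical):

```python
# Files saved by browsers that honor the convention (IE and most others) carry
# this stream; Safari's silently dropped downloads don't.
path = r"C:\Users\nathan\Desktop\setup.exe"

try:
    with open(path + ":Zone.Identifier") as ads:
        print(ads.read())      # typically "[ZoneTransfer]" / "ZoneId=3" (Internet zone)
except FileNotFoundError:
    # No zone marking: Windows will not warn before this file is executed,
    # which is exactly the gap the carpet bomb creates.
    print("No Zone.Identifier stream on this file.")
```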

So yeah, here’s the MS Security Advisory.

Written by Nathan

June 4, 2008 at 2:49 am

Posted in Apple, microsoft, Security


Vista’s Despised UAC Nails Rootkits

leave a comment »

PC World – Business Center: Vista’s Despised UAC Nails Rootkits, Tests Find

PCWorld has a story about a test conducted by AV-Test.org that was supposed to rate the most popular anti-virus products’ ability to detect rootkits.  For people that don’t know, a rootkit is a program that takes complete control of a system and tries to hide itself deep within the operating system.  Rootkits are notoriously difficult to detect once they are installed.  The most interesting result from this test wasn’t necessarily which product detected what, but the revelation that Vista’s security framework, specifically User Account Control (UAC), was really effective at preventing rootkit infection.  The test took 30 rootkits written for Windows XP and ran them against various anti-malware and anti-rootkit suites.  Some of them scored fairly well, but none were perfect.  Of the 30 XP rootkits, only 6 would actually run on Vista, and in order to get them to run UAC had to be disabled.  This means that UAC has significantly raised the bar of entry for rootkits on Windows.  This shouldn’t really come as a surprise to anyone familiar with this area, but there seem to be a lot of loud mouths shouting that UAC is worthless and should be disabled.  I have an anecdote that tells a different story.

The last product that I worked on was essentially a rootkit.  It was a component of a broader intrusion detection system which needed real-time information about what was going on in the system.  We wrote a simple device driver that intercepted all events within the kernel and logged them out to a database.  That means that every file, registry key, key pressed, port opened, etc., was visible to this program and logged.  We originally wrote it to work on XP, along with an application to install it as a service, which involved a couple of calls to the Service Controller.  If the user was running with an Administrator account (which everyone on XP does) then the driver would be loaded completely invisibly.  That means that any program you have ever installed could very easily be spying on everything you, or any other user on your machine, does.  I say it could be doing this “very easily” not because the code is particularly easy to write, but because the Internet is absolutely littered with rootkit code, especially the .cn domain.

A little while ago we decided to update our driver to work under Vista.  Since rootkits are essentially an extension of the operating system, they become very dependent on certain structures and features of an OS and tend to only work under that version.  So we had to change the code a little bit to get it to run, but for the most part, it was the same program.  The only real difference between the two versions was that on Vista, even if the user is logged in as an Administrator, the installation of the service would fail if it wasn’t elevated with a UAC prompt.  Privileges in Windows work with tokens: each user and group has a token, there is a system-level administrator token, and so on.  When a program starts, it is given the token of the user and runs with whatever permissions that user has.  So, users in the Administrators group on XP would pass along Administrator, or system-level, permissions to any application.  The difference between XP and Vista is that in Vista, when a user is in the Administrators group, their token is not a complete system-access token.  For an application to receive system-level access, it must be spawned by a system-level account (SYSTEM, LOCAL SERVICE, etc.) or be elevated by an administrator with a UAC prompt.  This prompt assures that the user behind the keyboard is aware that they are giving the application complete access to the system.  Sure, it can get a little annoying from time to time, but I’d rather have a prompt alerting me every so often than have a rootkit silently installed.
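Here’s a rough sketch of the install step (the driver name and path are made up, and I’m using the sc command-line tool as a stand-in for the Service Controller calls our installer actually made):

```python
import ctypes
import subprocess

# On Vista this fails unless the process was elevated through a UAC prompt,
# even when the logged-in user is a member of the Administrators group.
if not ctypes.windll.shell32.IsUserAnAdmin():
    raise SystemExit("Not elevated: the Service Control Manager will refuse the install.")

# Register the kernel driver with the Service Control Manager.
subprocess.run(
    ["sc", "create", "EventMonitor", "type=", "kernel",
     "binPath=", r"C:\drivers\eventmon.sys"],
    check=True,
)
```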

Written by Nathan

May 26, 2008 at 6:42 pm

Posted in Security, Vista


Safari Carpet Bomb

with one comment

When you’re writing a web browser, every bug should be considered a security issue.  Even if the bug seems simple and inconsequential, chances are someone will try to exploit it to harm users.  Nitesh Dhanjani over at ONLamp has a post about three different bugs he has found in Apple’s Safari web browser.  Now, to be clear, I’m not deriding Apple for having bugs in Safari.  These types of programs are very complicated and never bug-free.  What I find troubling is their response to the submission.  Nitesh says that he submitted all three bugs to Apple, and they responded by saying that they don’t consider two of them a security-related issue at this time.  I must object loudly to this.  Here is the bug:

It is possible for a rogue website to litter the user’s Desktop (Windows) or Downloads directory (~/Downloads/ in OSX). This can happen because the Safari browser cannot be configured to obtain the user’s permission before it downloads a resource. Safari downloads the resource without the user’s consent and places it in a default location (unless changed).

That means that any website can download anything, and the user isn’t even notified or asked.  How is this not a security issue?  A large amount of malware relies on getting an executable onto a machine and then convincing the user to run it.  How about dropping a worm named Safari.exe or Word.exe onto someone’s desktop, so the next time they go to open it they infect their machine?  Nitesh demonstrates this bug by littering the user’s desktop with tons of unwanted files.  While this is annoying, it’s fairly pointless and obvious.  If you think like an attacker for a minute you can come up with sneakier and more nefarious ways to use this hole.  I can’t understand why Apple’s security team doesn’t recognize this as a security concern.  I mean, it’s sort of their job to look at every bug and see how it can be exploited to cause harm.  Nitesh also wanted to congratulate the team on their communication:

Before I get to the details, I want to make it extremely clear that the Apple security team has been a pleasure to communicate with. I sent them a couple of emails asking for clarifications, and they responded quickly and courteously every time

It’s wonderful that they’re talkative, but shouldn’t it bother you that they are dangerously wrong?

Safari Carpet Bomb – O’Reilly ONLamp Blog

Written by Nathan

May 15, 2008 at 6:39 pm

Posted in Apple, Security


Unaccountable Authority

leave a comment »

I have a problem with certificate authorities.  I hate that most people have no idea what they are even though they deal with them every time they browse the web.  Show of hands, does anyone understand what these dialogs are talking about?

[Screenshot: certificate error dialog (MSDN)]

[Screenshot: accepted certificate dialog (Gmail)]

I’m going to venture a guess that not many people raised their hands.  You’re all told to look for certain visual cues when browsing sensitive sites (banking, etc.), but I’m sure no one ever told you what they mean or why they’re necessary.  I’m about to tell you why it is all utterly stupid.

SSL

This all pertains to sites which deal with sensitive information, like your bank’s website or any login screen.  The goal is to establish a unique encryption session between your computer and the server, so that eavesdroppers aren’t able to steal your valuable information as it gets sent along the line.  This is accomplished by using the Secure Socket Layer (SSL) protocol.  SSL uses public-key cryptography to securely establish a session (symmetric) key that is used to protect the subsequent data.  This is how it works:

  • Client (you and your browser) connects to a server over https:// (port 443)
  • Server sends you its public certificate – this certificate contains the server’s public key
  • Client generates a random number, encrypts it with the server’s certificate and sends it – this number is the premaster key
  • Server takes the premaster key along with some other random numbers that were exchanged and generates the session key

Now that you and the server have agreed on the same key, all the data sent from this point forward will be encrypted.
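To make that last step concrete, here’s a toy sketch of the idea (this is not the real SSL/TLS key derivation, just an illustration of why the eavesdropper loses):

```python
import hashlib
import os

client_random = os.urandom(32)   # sent in the clear
server_random = os.urandom(32)   # sent in the clear
premaster     = os.urandom(48)   # sent encrypted with the server's public key

# Both ends can compute this; a passive eavesdropper cannot,
# because they never learn the premaster key.
session_key = hashlib.sha256(client_random + server_random + premaster).digest()
print(session_key.hex())
```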

So, some questions should come to mind:

Can’t someone eavesdrop on the key creation and thus obtain the session key?
    No.  The session key is made up of three random numbers hashed together, two of which will be available to an eavesdropper, and the third (the premaster key) will be encrypted with the server’s public key, so that only you and the server know what it is.

How can I trust the server’s certificate?
   Well, each certificate is signed by a certificate authority.

What’s a certificate authority?
   It’s a company that signs certificates.  You see, a website will generate a public/private key pair and then send a Certificate Signing Request (CSR) out to a CA, who will take the public key, attach a digital signature to it, and return it to the site.  Now the website can distribute this signed certificate so it can’t be faked.  When a browser receives a certificate, it verifies that the certificate has been signed by one of its trusted CAs.

So, where do I get a trusted CA certificate?
   Chances are, you already have them.  Your computer, web browser, and Java VM all ship with trusted root authority certificates in their respective certificate stores (there’s a short sketch after these questions of how they get used).

Wait, who are these CAs again?
   Here is a list that I found googling: Catsdeep FreeSSL, Comodo, Digicert, Digi-Sign, Digital Signature Trust Co., Ebizid, Enterprise SSL, GeoTrust, GlobalSign, LiteSSL, Network Solutions, Pink Roccade PKI, ProntoSSL , QualitySSL, Rapid SSL, Real digital certificates, Secure SSL, SimpleAuthority, SSL Certificate Management Site, SSL.com, Thawte Digital Certificates, The USERTRUST Network, Verisign, XRamp Security

That’s a pretty big list full of companies I’ve never heard of.  Why should I trust them?
   Well, they’re big companies, with a lot of money invested in this.  Plus, how can you not trust them, with names like those, they must be secure!
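For the curious, here’s a short sketch of what your browser is doing with those bundled root certificates, using Python’s ssl module (the hostname is just an example):

```python
import socket
import ssl

context = ssl.create_default_context()    # loads the platform's trusted root CAs
print(context.cert_store_stats())         # how many CA certificates you already trust

hostname = "www.example.com"
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # The handshake only succeeds if the server's certificate chains up to
        # one of those trusted roots and matches the hostname.
        print(tls.getpeercert()["issuer"])
```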

In all seriousness, that last question (why should I trust them?) is exactly the problem I have with certificate authorities. We have absolutely no reason to trust them.  Worse than that, though, is that nobody understands just how much trust we are placing in these companies.  We are taught as users not to be bothered with all of the magic that goes on between the browser, the CA, and the server, and to just assume that if there is a lock in the corner of your screen then you are safe and everything is good.  This gives the CA a level of unaccountable authority, because not only are we incapable of noticing any wrongdoing on their part, we are completely ignorant of their existence!  It’s a wonder scenarios like this aren’t more prevalent:

http://www.microsoft.com/technet/security/bulletin/MS01-017.mspx

For those that don’t like to click on links, this is a security bulletin about erroneous VeriSign-issued digital certificates that an attacker could use to sign malicious code so that it appears to come from Microsoft.

The certificate authority is the main point of failure in the X.509 and SSL system.  I can’t for the life of me understand how anyone in the field of security could conclude that giving a single company that much authority over an entire protocol is a good idea.  They build these massively complicated, mathematically intense systems for protection, and then leave the trust in them up to a single entity.

I wrote this post under the assumption that most users don’t know what a certificate authority is, or even vaguely what is happening during a secure connection.  I feel like this illustrates a failure in the security community much more than in the individual user.  We walk a fine line in the computer security field, constantly afraid that if we require the slightest bit of effort from a user then they are not going to use the technology.  That’s all understandable, but if you go so far as to completely remove them from the process, you leave them incapable of protecting themselves and fill them with a false sense of security.  By not even being aware of the most essential component in SSL security, it is impossible for anyone to know what to do if there is a failure somewhere along the line.  If the connection gets attacked, the protocol will rightly fail and the user will be presented with a choice: proceed anyway, or stop.  How is the user supposed to make the correct decision here?

To illustrate this point, I want to see some comments.  Answer this question: what do you do when you encounter a website with an invalid certificate?  Do you just click OK and view the site anyway?

Written by Nathan

May 1, 2008 at 6:34 pm

Posted in Security
