The Recycle Bin

A repository of comments, code, and opinions.

Archive for the ‘Security’ Category

Google’s Malware Detection: Worthless at Best

with 2 comments

Dangerous at worst.

Google has rolled out a new feature that attempts to warn people when it thinks they are browsing from malware-infected PCs.  It's a noble idea, and I don't want to rag on them too much, but it is also a terrible idea.  Most of the time I would say something like 'at least someone is trying,' but I can't with this one.  There are so many problems with the idea in general, and their advice is so unbelievably wrong it borders on malicious.

How It Works

When you do a search on Google.com, Google will look at the IP address of the incoming web request and compare it to a list of IP addresses that known malicious proxies use.  If it's on that list, your results page will show a banner that says something like "Your computer appears to be infected" and then links you to their wonderfully ignorant help page: http://www.google.com/support/websearch/bin/answer.py?answer=1182191

When you do a search on Google, your web browser is sending a request to Google's servers.  When Google receives this request, one of the fields in it is your computer's IP address – think of it as a return address on an envelope.  Malware is able to modify your machine so that every web request you make is routed through one of their "proxy" servers.  Once the request hits their server, they can read it, modify it, or do whatever, and then forward it on to the site you were trying to reach.  Now, when that site (Google in this case) gets the request, the return address isn't your computer, but instead the malware's proxy.  Google will do the search, send the page to the proxy, and the proxy will send it back to you.  You likely have no idea that this proxy is siphoning your requests and watching everything you do.  I made a slideshow demonstrating the concept:

So what Google is attempting to do here is look at the IP address of the request and if it is from a known malicious proxy, include this malware warning banner at the top of the results.

What’s Wrong With it?

Look at the "Malware Response" part of the slides – see how the response also gets routed through the malware proxy?  That proxy is able to edit the page that Google sends back and is in complete control of the content.  They could replace the banner to say "Your computer is infected with malware, click here to download antivirus software" and then link users to a fake antivirus Trojan that could extort money.  Every link on this page could be modified to point to more Trojans: http://www.google.com/support/websearch/bin/answer.py?answer=8091.  How many users are going to think that a fake antivirus program is legitimate now that Google is pointing them there?  This just shows a complete lack of technical understanding by Google, and advice from a source like that is always the worst kind of advice.
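To make this concrete, here's a toy sketch (not real malware) of the kind of rewrite a proxy in that position could perform on the response before forwarding it to you. The HTML, banner text, and URL are hypothetical placeholders:

```python
# Illustrative sketch: a malicious proxy sits between the user and Google,
# so it can rewrite the genuine warning into a fake-antivirus lure before
# the page ever reaches the victim's browser.

def rewrite_response(html: str) -> str:
    """Replace the genuine warning banner with a fake-AV download link."""
    genuine = "Your computer appears to be infected"
    lure = ('Your computer is infected! '
            '<a href="http://evil.example/fake-av.exe">Download antivirus now</a>')
    return html.replace(genuine, lure)

page = "<html><body><p>Your computer appears to be infected</p></body></html>"
tampered = rewrite_response(page)
print(tampered)  # the proxy, not Google, now controls what the user sees
```

The point is that the warning channel itself passes through the attacker, so the attacker decides what the warning says.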

What Should They Have Done?

Nothing.  There is absolutely nothing a remote website can do if your computer has been modified by malware.  A remote site cannot even accurately warn you that you are infected, because the proxy can just remove the warning before the page reaches your machine.  All of the advice given on that help page for fixing your machine once it is infected is as worthless as snake oil.  The only sure way to remove malware from an already infected machine is to reformat it and start over.  If you are lucky and have a backup that you trust, you could try reverting to that.

Please, please, please do not click on any banner or link that tells you that your computer is infected.  This has been common advice for battling fake antivirus Trojans for a while now, but apparently Google wasn't listening.

Written by Nathan

July 24, 2011 at 11:52 pm

Posted in Security, Web


Public Wi-Fi Dangers with Android Phones

leave a comment »

Turns out there is a nasty vulnerability affecting pretty much every Android phone out there.  Since it involves connecting to public Wi-Fi networks, it seems like a good follow-up to one of my previous posts.

Here's how it goes: when you connect to a site like Facebook for the first time, you exchange your credentials with the site, and in return the site generates a unique session ID.  After the credential exchange, all you need is this session 'token' to authenticate with the site.  The token is only valid for a period of time configured by the site (some expire after 30 minutes, some after 2 weeks, some never).  This allows you to revisit the site and remain logged in without entering your credentials every time.  Android phones are set by default to automatically reconnect to Wi-Fi networks that they have already connected to.  Once connected, the apps on the phone automatically connect to their corresponding web services, exchanging either session tokens or real usernames and passwords.
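Here's a minimal sketch of that credential-for-token exchange as a site's server might implement it. The names (`SESSIONS`, `login`, `authenticate`) and the 30-minute lifetime are illustrative, not any real service's API:

```python
# Simplified sketch of session-token authentication: credentials are
# presented once, then a random bearer token stands in for them.
import secrets
import time

SESSIONS = {}  # token -> (username, expiry timestamp)
TOKEN_LIFETIME = 30 * 60  # e.g. 30 minutes; each site picks its own

def login(username, password):
    """Exchange credentials once; receive a session token in return."""
    # (a real site would verify the password here)
    token = secrets.token_hex(16)
    SESSIONS[token] = (username, time.time() + TOKEN_LIFETIME)
    return token

def authenticate(token):
    """Later requests present only the token, never the password."""
    entry = SESSIONS.get(token)
    if entry and entry[1] > time.time():
        return entry[0]  # the username this token belongs to
    return None

t = login("alice", "hunter2")
print(authenticate(t))  # "alice" -- no password needed while the token lives
```

Notice that whoever holds the token *is* the user as far as the site is concerned, which is exactly what makes token theft attractive.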

Here is what an attacker can do: if a person has connected their phone to a common public Wi-Fi hotspot – say Starbucks' Wi-Fi, whose SSID just happens to be "Starbucks" – then the next time their phone sees a network named "Starbucks" it will automatically connect to it.  All an attacker has to do is set up a malicious hotspot near a real one, name it the same thing, and wait for a phone to come into range and automatically connect.  Once it does, they can grab session tokens, operate on those sites as that user, and eavesdrop on everything being sent to the service.

Since most sites only use SSL for logging in, your username and password for the service are protected.  However, there is a pretty good chance that the site does not use SSL for the rest of the session and simply sends the session token in plain text.  The problem is that Android will reuse the session tokens generated while the phone was connected to the previously trusted Wi-Fi network, meaning that your session ID is fair game for anyone to use.
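For a sense of just how exposed the token is, here is what a plain-HTTP request from the phone looks like on the wire. The hostname and cookie name below are made up, but anyone running the rogue hotspot sees exactly these bytes:

```python
# Sketch: a plain-HTTP request as it crosses the attacker's hotspot.
# The Cookie header (hypothetical name "sessionid") is the only
# credential the site will ask for -- no password travels with it.
request = (
    "GET /home HTTP/1.1\r\n"
    "Host: social.example\r\n"
    "Cookie: sessionid=9f2a77c1d4e0b386\r\n"
    "\r\n"
)

# The attacker just copies the header into requests of their own:
stolen_cookie = next(
    line for line in request.split("\r\n") if line.startswith("Cookie:")
)
print(stolen_cookie)  # Cookie: sessionid=9f2a77c1d4e0b386
```

This is the same weakness the Firesheep tool made famous on laptops; Android's silent auto-reconnect just removes the need for any user interaction.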

The best way to mitigate this is to turn off the option to automatically reconnect to networks that the phone has already connected to.

Much more detail here: http://www.uni-ulm.de/en/in/mi/staff/koenings/catching-authtokens.html

Written by Nathan

May 19, 2011 at 10:39 am

Posted in Security


Secure Mobile Browsing

leave a comment »

I’ve written, and rewritten this post about three times already, and I’m tired of thinking about it. I’m just going to write out whatever comes to mind on the subject and just post it as is.

This started as a conversation with my wife about the safety of mobile banking. We were driving away from our credit union and I wanted to check whether a transaction had posted yet, so I pulled out my smartphone, browsed to the credit union's site, and checked. She told me that she has reservations about doing anything sensitive (banking, email, etc.) in a browser on a smartphone.

With this post I intend to explain what happens when you browse the mobile web and hopefully put to rest some fears.

Websites can be connected to in two different ways: unencrypted (HTTP) or encrypted (HTTPS). Most sites use a hybrid approach: HTTPS for the login or checkout page, and HTTP for the rest of the site. Banks, however, should use HTTPS 100% of the time. I blogged a while back about HTTPS, so I'm just going to paste what I wrote then:

[With HTTPS] the goal is to establish a unique encryption session between your computer and the server, so that eavesdroppers aren’t able to steal your valuable information as it gets sent along the line. This is accomplished by using the Secure Socket Layer (SSL) protocol. SSL uses public-key cryptography to securely establish a session (symmetric) key that is used to protect the subsequent data. This is how it works:

1. The client (you and your browser) connects to the server over https:// (port 443).

2. The server sends you its public certificate. This certificate contains the server's public key.

3. The client generates a random number, encrypts it with the server's public key, and sends it. This number is the premaster key.

4. The server takes the premaster key, along with some other random numbers that were exchanged, and generates the session key.

Now that you and the server have agreed on the same key, all the data sent from this point forward will be encrypted.
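Here's a toy illustration of that last step: both sides mix the premaster key with the randoms they exchanged and independently arrive at the same session key. Real SSL/TLS uses a specific pseudorandom function defined by the protocol; HMAC-SHA256 here is just a stand-in to show the idea:

```python
# Toy key derivation: client and server each hold the premaster secret
# (which only they know) plus two randoms that were sent in the clear.
# Mixing them the same way on both ends yields the same session key.
import hashlib
import hmac
import secrets

client_random = secrets.token_bytes(32)  # sent in the clear
server_random = secrets.token_bytes(32)  # sent in the clear
premaster = secrets.token_bytes(48)      # sent encrypted under the server's public key

def derive_session_key(premaster, c_rand, s_rand):
    """Stand-in PRF: HMAC the randoms under the premaster secret."""
    return hmac.new(premaster, c_rand + s_rand, hashlib.sha256).digest()

client_key = derive_session_key(premaster, client_random, server_random)
server_key = derive_session_key(premaster, client_random, server_random)
print(client_key == server_key)  # True: both ends now share a secret key
```

An eavesdropper sees both randoms but never the premaster key, so it cannot reproduce the session key.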

So we can now settle on the fact that if the site you are browsing to is using HTTPS, then no one can eavesdrop and steal any of your info. This is true whether you are browsing from your home computer, work computer, mobile phone connected to 3G, or mobile phone connected to an open Wi-Fi network.

So, why do some people say it is unsafe? There is an added risk of phishing attacks when you are browsing on a network you don't trust (unsecured open Wi-Fi). Since all the traffic from your phone to the website has to travel through this router that you don't trust, the attacker could send you to a page that looks just like your bank's page, where you will log in, get some sort of error, and then be redirected to the real page. You will probably think that you typed your password wrong, try again, get through, and never think twice about it again.

So that is the only reason why you shouldn’t do any sensitive browsing on an open Wi-Fi network. All other networks are safe, provided the site uses HTTPS. If it doesn’t, it doesn’t matter if you trust the network or not, everyone can see everything you do.

I'd love some questions if anything is unclear, and I will do a follow-up.

Written by Nathan

May 11, 2011 at 10:50 am

Posted in Security


Enhanced Mitigation Experience Toolkit

leave a comment »

EMET 2.0

I learned about this tool at Microsoft's BlueHat 2010 conference.  A blurb from their website explains it:

“EMET provides users with the ability to deploy security mitigation technologies to arbitrary applications. This helps prevent vulnerabilities in those applications (especially line of business and 3rd party apps) from successfully being exploited. By deploying these mitigation technologies on legacy products, the tool can also help customers manage risk while they are in the process of transitioning over to modern, more secure products. In addition, it makes it easy for customers to test mitigations against any software and provide feedback on their experience to the vendor.”

EMET lets you apply certain security mitigations to programs whose code was never written to use them.  You could have a legacy application that is no longer being developed, an executable that is currently being exploited and that you would like to harden, or just a particularly risky program that you want to shore up.

Here is a screenshot:

[Screenshot: the EMET main window]

My current configuration:

[Screenshot: my configured applications and mitigations]

I decided to pick all the internet-facing applications (Messenger, Mesh, IE) and a few of the most highly targeted programs (Adobe Reader, Office) and enable all of the security mitigations that EMET can provide.

It’s a neat little tool, and it works really well.  I have not noticed any performance impact.

Download: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=c6f0a6ee-05ac-4eb6-acd0-362559fd2f04&displayLang=en

Written by Nathan

November 13, 2010 at 1:30 am

Posted in Security

ActiveX Redux

with 3 comments

Google has decided to throw away years of progress in web security, JIT interpreters, and general common sense and implement a weak rehash of the ActiveX control.  Dubbed Native Client, this new plug-in architecture will allow websites to deliver raw x86 code to users in an attempt to "create richer and more dynamic browser-based applications."

First of all, there is nothing browser-based about running native code.  The browser is simply the distribution medium for your native application.  Is Google admitting that AJAX and browser applications aren’t all they’re cracked up to be?

In all the hilarious irony, however, we shouldn't lose sight of how awfully bad this idea really is.  I mean, it's just a terrible idea.  Pushing raw machine code down the pipes is not a reasonable solution to the problem.  We tried this with ActiveX – it's been a mess.  Sure, they've put some thought into security – a thinly veiled 'sandbox' that statically analyzes the bytes for any "dangerous commands" before the code executes.  Yeah, I'm sure no one is going to find a way around that…
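To see why byte-scanning is so fragile on x86, here's a toy validator for a two-opcode mini-ISA that rejects the `int 0x80` syscall instruction (bytes `cd 80`), plus the classic evasion: hide those bytes inside another instruction's operand and jump into the middle of it. This is a deliberately simplified illustration – real x86 has hundreds of opcodes, and Native Client's actual validator also enforces jump alignment for exactly this reason:

```python
# Toy static validator: decode instructions linearly from offset 0 and
# reject any decoded "int 0x80" (cd 80). The evasion works because x86
# instructions are variable-length, so the same bytes decode differently
# depending on where you start reading.

def decode_linear(code):
    """Toy decoder: b8 = 5-byte 'mov eax, imm32', cd = 2-byte 'int imm8',
    everything else treated as a 1-byte instruction."""
    i, instrs = 0, []
    while i < len(code):
        if code[i] == 0xB8:
            instrs.append(code[i:i + 5]); i += 5
        elif code[i] == 0xCD:
            instrs.append(code[i:i + 2]); i += 2
        else:
            instrs.append(code[i:i + 1]); i += 1
    return instrs

def naive_validator(code):
    """Approve code only if no decoded instruction is 'int 0x80'."""
    return b"\xcd\x80" not in decode_linear(code)

# "mov eax, 0x80cd": the forbidden cd 80 hides in the immediate operand.
payload = bytes([0xB8, 0xCD, 0x80, 0x00, 0x00])

print(naive_validator(payload))      # True: decodes as one harmless mov...
print(naive_validator(payload[1:]))  # False: jump one byte in -> int 0x80
```

The scanner approves the bytes, yet a jump to offset 1 executes the forbidden syscall anyway.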

Really, it’s ideas like this that guarantee that anti-virus vendors will always have a job…

Written by Nathan

December 9, 2008 at 1:47 pm

Posted in Security

This Week in Malware

leave a comment »

There have been several interesting malware stories in the news this week. 

Facebook

There's a new worm circulating the social network called Koobface.  Basically, you get a message from one of your friends that says something along the lines of, "I found this video with you in it, check it out."  You click on the link, and the website tells you to update your Flash player.  Of course, this isn't an update to Flash, but rather a worm that steals passwords and account information.  McAfee's Avert Labs has a good piece of advice that everyone should follow to stay safe on the web:

Do not follow any unexpected hyperlinks you receive over the Web, Email, or IM, even if they are received from someone you know.  It's best to ask for confirmation from the sender that they intentionally sent such a link.

On the other end of hyperlinks, it’s best to install software and updates from the source (such as adobe.com in this case) rather than trusting the content from a third-party website.

Firefox

There's also a new trojan specifically targeting Firefox's add-ons system, masquerading as a legitimate extension called Greasemonkey.  The trojan gets installed through traditional means (fake codecs, Flash updates, etc.) and drops itself into the add-ons and extensions folder.  Once installed, it runs as a normal add-on would and watches which websites get visited.  It looks for about 100 different sites, ranging from gaming to banking sites.  Once it finds a site that it knows, it records the username and password and reports back to the attacker.  This piece of malware takes advantage of what I believe is a major design flaw in Firefox: any extension has complete access to everything the browser gets.  This is a major reason why IT departments are reluctant to deploy Firefox on their networks.  You really need to be careful when you are installing extensions, and make sure that they are from a trusted source.  If you are browsing a local intranet page with confidential information, it is a best practice to use either a clean Firefox with no add-ons or Internet Explorer.

OSX

Thirdly, Apple found itself in the malware spotlight after someone found an old support article in which the company recommended running anti-virus software on OSX.  This sort of flies in the face of their advertising campaign, which claims that OSX is immune to viruses due to its superior architecture.  The controversy really started to mount when Apple quietly pulled that article from their support database and did a complete 180 on the issue.  I think Apple is doing its users a disservice by acting so defiant about this.  Suggesting that its users protect themselves with a modern anti-virus client – which does more than just protect against viruses; it helps mitigate damage from trojans, phishing scams, and general data loss – does not mean that their product is flawed or somehow inferior; it just means they care about their users.  Throughout all of this debate, though, people seemed to lose sight of the real reason why OSX users should have AV clients on their machines: they can still receive and forward Windows viruses!  So please, if you're on a shared open network, be courteous and run an AV client.

 

Here are some of the links:

http://www.bitdefender.co.uk/NW900-uk–BitDefender-detects-novel-approach-to-stealing-web-passwords.html

http://blog.trendmicro.com/cyber-crimainals-target-firefox-users/

Written by Nathan

December 9, 2008 at 2:24 am

Posted in Security

Full Disclosure

leave a comment »

It's been about a month since Microsoft released MS08-067 – which I posted about here.  Since the patch was released, malware writers have cobbled together a worm that is spreading through the internet, swelling the ranks of their already impressive botnet.  How does this happen?  Wasn't the bug fixed?  Well, let's take a look at what happened.  First, the hole in the system stayed hidden for years – no one knew it existed: not MS security, hackers, or the all-knowing Slashdotters.  This is not necessarily a bad thing, because a vulnerability that no one knows about isn't really a vulnerability at the time, right?  It wasn't until the bug was discovered and disclosed that we started having a problem.  In this case, the turnaround was fairly quick.  MS reacted appropriately, released an out-of-band update, and pushed out a lot of press about how imperative it is that people update their machines.  The problem is, not everybody is going to update their machine.  Those people are exceptionally vulnerable right now, because in sending out a patch, Microsoft not only told everyone about the bug, but practically sent exploit code to the bad guys.  You see, a patch is like the inverse of an exploit – attackers can take the patched files and analyze them to figure out exactly what component of the system is vulnerable.  There is a time span of mere hours between a patch release and the first sighting of in-the-wild exploitation.
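A crude illustration of patch diffing: compare the pre-patch and post-patch bytes, and the differences point straight at the code that was fixed. Real tools diff disassembly rather than raw bytes, and the "binaries" below are stand-in strings, but the principle is the same:

```python
# Sketch: diff two versions of a "binary" to locate the patched code.
# The changed regions immediately reveal which routine was vulnerable.
import difflib

before = b"push ebp; strcpy(dst, src); pop ebp"      # unbounded copy
after  = b"push ebp; strncpy(dst, src, n); pop ebp"  # bounds-checked

matcher = difflib.SequenceMatcher(a=before, b=after, autojunk=False)
changes = [(op, before[a0:a1], after[b0:b1])
           for op, a0, a1, b0, b1 in matcher.get_opcodes()
           if op != "equal"]

for op, old, new in changes:
    print(op, old, b"->", new)  # every change clusters around the strcpy fix
```

An attacker doesn't need the bug report at all; the binary delta shipped by the update mechanism is the bug report.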

So, my question is, what are we supposed to do?  These bugs are going to exist.  All operating systems will have a bug like this.  Don't believe the drivel that gets spouted about how Windows is architecturally inferior to all other systems and is therefore the only one to have these problems.  Programmers and testers are human; they make mistakes.  The question is how you deal with these bugs once they're found.  Obviously, you have to disclose them – people need to be made aware of the situation.  But once you tell people about the bug and patch it, some people are more at risk than they were before.  I guess you could take comfort in the fact that fewer people are at risk (those who patched), but maybe the overall risk has increased – because now there is a worm spreading.

So, do we need a less informative way to disclose the information – just tell people to update without saying what the problem really is?  That won't work either, because really, all the important information is found in the binary patch that gets sent out from Windows Update.  Bad guys have a Windows box too, and they get the update.  OK – encrypt the patch?  Won't work; they'll just diff their machines before and after.  It might slow them down a couple of hours, but that's about it.  So I'm at a loss.  I guess I have to accept that patches lead to full disclosure, which leads to exploitation, and hope that people update their systems when they're asked to.  But I hate this conclusion, mostly because of that 'hope' word… maybe we can make updates work the way they do in video-game land: force you to update before you're allowed to connect to the internet again.  Any ideas?

MS Security Bulletin

Written by Nathan

December 2, 2008 at 11:55 am

Posted in Security