Saturday 30 August 2008

Malware spreading via Facebook messages

This is a wrinkle that I have not seen before.

The message that I saw was in this form:

Title = Somebody upload a ivdeo with youo n utube. you should see.

"OMG!!! :
hXXp://images.google.com/url?q=http://tinyurl.com/55dk2y" (LINK INTENTIONALLY BROKEN BY Mark Long)

The host in this case is a hacked travel agent in Canada. It is likely, given the normal operating practices of botmasters, that there will be multiple websites hacked to host and redirect, typically via a shared vulnerability.

I would strongly advise great caution in following links of this form - they are using Google as a redirect.

If you follow the link, you will see what looks like (but is not) a YouTube page and an instruction to "click here to upgrade your flash player". Of course, it downloads a fairly generic bot at this point. I have not yet had the chance to reverse engineer it to see what it does.

Having spoken to the person who sent the link, it seems that they are using social engineering rather than automating Facebook to send the links.

Hope that this helps someone

Mark Long, Digital Looking Glass

Friday 29 August 2008

Spies like us

I am a member of LinkedIn, the business networking site. You can add me with my old hotmail account if you like and have an interest in coding, debugging or security - and I guess that you wouldn't read this if you didn't. The address is MarkALong64 at hotmail dot com; there is no sense in feeding the dumber harvesters of addresses.

Anyway, someone posted a wonderful question. What would you do if you were a spy with the tools and skills of a hacker? Now, there is an interesting question. Let me consider… I am e-007, an agent licensed to hack.

A lot would depend on what my mission goals were. There are a number of things that can be done.

Let us imagine for a moment that I am told that this is a request from the police and they are looking to get a botnet shut down. Well, you could start killing the bots. Indeed, that is what every antivirus solution and the Microsoft Malicious Software Removal Tool do on a regular basis. It works pretty well and largely killed the Storm Botnet. However, a lot of people do not get the updates (which are when the MSRT normally runs) because they don’t trust it and some people run with no AV solution or a broken one. You can’t kill a botnet that way. However, all the modern bots have a feature which allows them to update themselves – malware often adopts features from legitimate software. What would happen if you seized control of the botnet and told all the bots to replace themselves with an executable which was basically notepad.exe? Well, that would kill the botnet rather efficiently. Of course, there are legal issues with that. You are installing software on end user systems without their consent. How bad this is depends very much on what the system is doing. If it is a home user, you have broken the law but you have done so for a good and moral reason. However, what happens if the system that you have fixed was controlling a machine delivering radiotherapy to cancer patients? Well, you probably have made it work better. There is an outside chance that the update will break the system in such a way that the X-ray machine will cause the patient to glow in the dark from an excessive dosage. So, your benign but illegal act could kill due to the good old law of unintended consequences.

What if the orders from M were to find out more about them? Well, in that case, finding a way to insert a keylogger onto their systems would seem like a good option. It would allow you access to at least half of email conversations or instant messenger sessions. Put in a filter driver that sends disk access over the wire and you will get a whole bunch more and implicate more and more people. Of course, this is just fiction. The Bundestrojaner is not that clever. Well, perhaps not quite. The specifications are not exactly public.

How about if I were looking to serve my government’s political aims? Well, if that were the case, then I would look to use compromised systems to attack the infrastructure of the enemy. It seems that all the Russian controlled botnets are busily attacking systems owned by the Georgian government. Maybe they have a counterpart to e-007 somewhere in the Kremlin – or maybe the link is a little more direct than that. Again, the documentation is not a matter of public record.

However, what is the most common activity in every civil service in the world? Why, empire building, naturally. If I had a way of talking to a group of talented hackers, maybe I would be best off recruiting them. A one way ticket to a nice part of the country and some new papers showing them to be naturalised Poles or Hong Kong Chinese would be part of the package, I think.

Of course, a true cyber spy wouldn’t be e-007 but 0xE007. Somehow 57351 doesn’t have the same ring.

Are there such people? I have never seen anyone in a Tuxedo at SecWest or BlackHat but I doubt that everyone there is using their own names. If there are such people, they are probably playing a very subtle game indeed.

Oh, on an unrelated note, we have been giving the website a bit of a facelift. Feel free to let me know what you think at Mark.Long@DigitalLookingGlass.co.uk

Signing off

Mark Long, Digital Looking Glass Ltd

Wednesday 27 August 2008

News and views

Hello again

I had a phone call from my father yesterday who wanted me to write about a “computer from a bank that had been sold on eBay and was full of customer records”. Ok, that sounded interesting. Maybe someone used a sector editor or forensic tool to recover badly erased data. I could write about that. A little research led me to this story, which was a little different but was the origin of the (perhaps) less than accurate initial news reports. From the later reports, it appears that someone took and sold a bit of kit that had been sitting in a nominally secure facility and that had apparently been used in an environment where it could reasonably be expected to contain customer data. It was a network storage device, after all. So, what was the failure here? Well, it seems likely that a couple of things were wrong. Oh, I must stress that I don’t have any inside information here and I am just going by the statements from the companies involved.

If the data was not being retained for archival purposes then it should have been wiped before being allowed offsite. It doesn’t seem very likely that you would archive data that way, so that is probably the first failure.

The second failure was that the data was unencrypted. Encrypted data can be a bit slower to access but archived data normally doesn’t need that rapid access. It might have been seen as an unnecessary step. Well, since you are reading this, I think that we can be pretty sure that events proved otherwise.

The third failure would seem to be that the owner of the kit that was rotated out of the bank should have archived any data that needed to be retained and then wiped the kit securely with a process that overwrites the data multiple times with random junk. That is pretty standard procedure and there are tools like WipeDrive, Unishred or a few others. Of course, if you really want to be 100% sure, there is another way:



Radical? Perhaps. However, a cheap SATA drive from a major manufacturer will cost about 70 pence ($1.30) per Gigabyte of storage. When you compare that to the possible loss… well, it doesn’t seem that expensive to me.
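If you do go the software route rather than the skip, the core of what those wiping tools do is conceptually tiny. Here is a minimal sketch, working on a single file for simplicity - the name WipeFile is invented, and the real tools work on the raw device and cope with remapped sectors, caches and verification, which is exactly why you should use one of them in anger rather than this:

#include <cstdio>
#include <cstdlib>
#include <ctime>

// Conceptual sketch only: overwrite one file, several times, with random junk.
// Real wiping tools work on the whole raw device and verify their work.
bool WipeFile(const char* path, int passes)
{
    FILE* f = fopen(path, "r+b");
    if (!f) return false;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    srand((unsigned)time(NULL));

    for (int pass = 0; pass < passes; ++pass)
    {
        fseek(f, 0, SEEK_SET);
        for (long i = 0; i < size; ++i)
            fputc(rand() & 0xFF, f);    // random junk, byte by byte
        fflush(f);                      // push each pass out of the runtime's buffer
    }
    fclose(f);
    return true;
}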

It also appears that the kit was removed from the owner’s site (not the bank) without the permission of the company and so physical security was probably the fourth failure. Sometimes things will go wrong despite the best efforts of all those involved and sometimes… well, sometimes things just go wrong.

Another item much in the news has been an announcement from Microsoft that IE 8 will contain a feature that allows you to browse the web without the entries going into your history – they are calling it InPrivate Browsing. Much of the discussion of this feature has focussed on the negative - "ZOMG, Microsoft are helping teh Paedophiles!!1!"

Well, what does this change really mean? Not a lot, to be honest.

You have always (IE1 to IE6) been able to delete your history and cookies but in IE7 under Vista, the deletion was more complete and the file was multiply overwritten, making the forensics of limited use. However, downloaded images would still be there unless the cache was deleted and overwritten.

In IE8, you will have an option not to include this session in the history and not to accept cookies - which was always an option anyway but the two are linked here. This means that bad people like those who download indecent images or pirated mp3 files or whatever will have the option of setting a switch in settings rather than clicking a button after the end of the browser session. It doesn't make it easier to hide, it doesn't (and can't) erase server logs and doesn't remove forensic traces of downloaded content as far as I can see.

In other words, it does pretty much what the same feature in Safari does. Of course, Apple were held up as protecting the privacy of users rather than being in league with child abusers but one man's terrorist is another man's freedom fighter.

As for whether it is a good thing, that is for each user to decide... but once one browser did it, there was an option that allowed abuse. All enabling technologies seem open to such things. It seems most likely to be used to hide porn browsing habits from parents and spouses in my opinion.

Finally, I read an excellent writeup of the greyware XP Antivirus 2008 written by Jesper M Johansson for The Register. It neatly shows how professional and organised the malware gangs are these days. This fine analysis is well worth a read.

Signing off

Mark Long, Digital Looking Glass.co.uk

Tuesday 26 August 2008

Debug code

Ah, just a quick diversion before I talk about debug code and when you should have it and when you might not want it. Something interesting happened this past weekend that rather amused me and I don’t mean seeing WALL-E though that was a fine movie.

A student in Germany tried to hack Digital Looking Glass’s website using a rather uninspired directory traversal weakness. It might have worked if I had been using the sort of host that he thought I was and if it had been unpatched. I can’t claim to be surprised by the attack though. Hacking a security company’s website is a bit like smashing a sixer in conkers. Not today, my friend. Also, you might want to cover your tracks a bit better next time.

So, debug code. It used to be that this was one of the main debugging tools available and sometimes the only one. Trace statements displaying some information were peppered through the problematic area of the program and were included in the program’s output. We did this on mainframes, we did this on DOS and we didn’t do it quite as often in Windows. “printf” was replaced by “OutputDebugString” or “Debug.Print” and we still did business in much the same way though there were other options and it was a less popular choice. Often the information was limited to “Entering procX1” and “Leaving ProcX1”. The developer tools and some of the debuggers would display the debug text. There was a debug stream that was often piped to null – but easy enough for the tool to hook. Once programs were working, the debug statements were removed.

Well, mostly.

If you are debugging an application today, you will still see bits of debug spew in the debugger. If you are a fan of the SysInternals tools (uh, the Microsoft tools, I mean) then you may be familiar with DebugView which lets you capture the debug output. Try it on some published programs and there is a good chance that you will see debug information being pushed to an unregarded stream.
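If you want to see what your own code looks like in DebugView, the call really is as simple as this - a minimal sketch using the ProcX1 name from above:

#include <windows.h>

void ProcX1()
{
    OutputDebugStringA("Entering ProcX1\n");
    // ... the real work would go here ...
    OutputDebugStringA("Leaving ProcX1\n");
}

int main()
{
    ProcX1();   // run this under a debugger or with DebugView open
    return 0;
}

Run it normally and the strings go to that unregarded stream; run it with DebugView open and there they are.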

What are the good and bad points of leaving the debug code in place?

The good point is that you can get some idea of what is going wrong without reaching for the debugger and walking through some very complex structures in CDB or whatever your tool of choice happens to be.

The downsides are multiple. The first is that it bulks out the code and requires much more code to be in the working set. Given that processors are much faster than memory and loads faster than disk, this is not a good thing.

The second is that you are revealing information about the internals of your program that might be of use to someone reverse engineering the code. I have seen malware that still had trace statements.

Thirdly but not least, the trace statement may also display information that should be confidential. Imagine that you had an application that accessed medical records. The program knows that Patient John Q. Smith is HIV positive but no-one except his clinician should have access to this information – a subset is presented to other authorised users such as the person who books the patient appointments. If your debug statement shows the whole record then you have just revealed information to a user who had no right to that data. This is regarded as a very bad thing I am told.

All that said, debug code can be handy if you are sure that it doesn’t compromise you unacceptably, as I mentioned when I was talking about object brokers last week.

If you are going to have debug code, you have a few options.

Just put it in there with no means to disable it without an edit. This is simple but inefficient.

Use conditional compilation to create a version with and without the debugging code. This isn’t a bad option. The downside is that you need to swap over the binary when you want to debug and that means restarting the application/service. If you do go this route, you probably want to use a debug switch other than DEBUG to enable the tracing because you want to alter the behaviour of the application as little as possible.
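A minimal sketch of that approach - the switch name MYAPP_TRACE is made up for the example, precisely so that it is not the compiler's own DEBUG symbol:

#include <windows.h>

// Build the traced version with: cl /DMYAPP_TRACE myapp.cpp
#ifdef MYAPP_TRACE
#define TRACE(msg) OutputDebugStringA(msg)
#else
#define TRACE(msg) ((void)0)   // compiles away to nothing in the normal build
#endif

void DoWork()
{
    TRACE("Entering DoWork\n");
    // ... real work ...
    TRACE("Leaving DoWork\n");
}

int main()
{
    DoWork();
    return 0;
}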

Have the debug code in there but skipped via a flag. This can work very well. Ideally, you would have something like a registry key that is checked every few minutes and use this to turn on or off the logging. Of course, it is best to have a Boolean flag which you set and check rather than reading the key each time since registry reads are not cheap. A Boolean will pipeline very nicely indeed.
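Something along these lines would do it - the key path and the five minute refresh interval are just assumptions for the sketch:

#include <windows.h>

static bool  g_traceOn   = false;
static DWORD g_lastCheck = 0;

bool TraceOn()
{
    DWORD now = GetTickCount();
    if (g_lastCheck == 0 || now - g_lastCheck > 5 * 60 * 1000)   // re-read every five minutes
    {
        g_lastCheck = now;
        DWORD value = 0;
        DWORD size  = sizeof(value);
        if (RegGetValueA(HKEY_LOCAL_MACHINE,
                         "SOFTWARE\\MyCompany\\MyApp",   // hypothetical key
                         "TraceOn", RRF_RT_REG_DWORD,
                         NULL, &value, &size) == ERROR_SUCCESS)
        {
            g_traceOn = (value != 0);
        }
    }
    return g_traceOn;   // the cheap Boolean test that pipelines so nicely
}

The hot path only ever pays for the Boolean; the registry read happens a handful of times an hour.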

Any debug output will slow the application down, of course. Does this matter? Probably but maybe not as much as you would imagine. If the application spends most of its life waiting on a database, it doesn’t much matter if you give it a little more work to do that has nothing to do with accessing the database. Debug statements which access the database are another thing though – not least because they will alter the timing enough for it to be a problem in many cases.

All in all, a technique which has some uses but not a solution to all things. Of course, nothing ever is.

Signing off,

Mark Long, Digital Looking Glass.

Friday 22 August 2008

Coding practices

Hello all

Bit of a change of pace today. I would like to talk about some common coding issues that I see when reviewing code. A lot of them are very natural things to do because of the way that a person thinks through a problem. These won’t be security related, at least not in the main.

Oh, and these points will be largely language agnostic though there are a few specifics. I will call them out when we get to them.

The biggest mistake that I see is doing unnecessary work. People have come to rely on the optimiser to generate code that doesn’t do unneeded work. Well, the optimiser is a good thing and micro-optimisations are generally best done by it – let’s consider an easy example (pseudo-code):

“If (Object.Method(Param1.Field, Param2(a))) && (a=TRUE) then”

Ok, testing if a Boolean is true or not is very, very easy and quick. The processor optimisation means that both options can be pipelined at once. Cool. The other test, the method call, is much, much more expensive. A good compiler will want to avoid the call if possible so best to check the Boolean first. Oh, and is this always a safe optimisation? Well, no, not if there are useful side-effects of the called function but very few people write code that intentionally changes state in what is essentially supposed to be a check of state. When they do, a lot more debugging is generally needed. Compilers can do this sort of optimisation very well indeed. They will also do a lot of other cool things – if you have a local floating point variable that you only ever use in the first part of a function and another in the second part that you never use after the first one, well, no sense having 2 variables at all from the compiler’s viewpoint. In practice, hand optimising this would probably be harmful since that would reduce the maintainability of the code.
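Spelled out in code, the reordering looks like this - the types and names are stand-ins for the pseudo-code above, and it is only a safe change because Method has no side-effects that we rely on:

#include <string>

struct Checker
{
    bool Method(const std::string& field, int value)
    {
        // imagine something expensive here - a database round trip, say
        return field.size() > static_cast<size_t>(value);
    }
};

bool ShouldProcess(Checker& obj, const std::string& field, int value, bool a)
{
    // cheap Boolean first: when a is false, the expensive call never happens
    return a && obj.Method(field, value);
}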

However, the optimiser is not going to help if the algorithm is not right. Here is a simple example:

For (i=1;i<1000;i++)
{
o = new MyObj;
// some more code
if (o) then release o;
}

Now, the logic is clear to a person. I will need an object of type MyObj and I should clear it up as soon as I am done with it. That is just plain good programming practice. It is nice that we check that we actually have something to dispose of before we call release.

Do we need a new instance each time around the loop? In this case, almost certainly not since there is no constructor. We can probably just allocate an object once outside of the loop and dispose of it at the end so

o = new MyObj;
if (!o) then ThrowException(UnableTocreate);
For (i=1;i<1000;i++)
{
// some more code
}
release o;

Hmmm. More lines of code but much more efficient. What about if there were a complex constructor? In that case, we would need to create a new one each time, yes? Well, only if there were no other way of setting the fields. The less work that you make for the heap manager, the better. This is true for every language and every operating system. It will almost invariably be cheaper to reset fields in an object than have a brand new allocation and bring forward the time of the next compaction of the heap or garbage collection or what have you.
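To make that concrete, here is one way it might look - Reset and the field names are invented for the sketch:

struct MyObj
{
    int    count;
    double total;

    void Reset()            // far cheaper than a fresh allocation each time round
    {
        count = 0;
        total = 0.0;
    }
};

void ProcessBatch()
{
    MyObj* o = new MyObj;   // one allocation for the whole loop
    for (int i = 1; i < 1000; i++)
    {
        o->Reset();         // back to a known state, no heap manager involved
        // some more code
    }
    delete o;               // one deallocation at the end
}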

Possibly interesting story from my Microsoft days. A common complaint in the early days of .NET was that garbage collection was often taking more than 20% of the process’s run time. The GC is actually pretty damned efficient but people saw this as an unacceptable overhead. In fact, their unmanaged apps were probably spending about the same time in memory allocation and reclamation, or more, but there was no perfmon counter for it so people thought of it as a free action. Making the numbers visible made people see the problem but they assumed that it was a new problem.

Object brokers help code efficiency a great deal. The downside is that time spent thinking about object brokers is time not spent thinking about the problem that the code is meant to solve – and conventional wisdom seems to be that performance tuning is something that you do when you see where the problems are. I can see the merit in this argument but poor memory management will sap the life of the whole process and not generally show up as a hotspot because it is in the run time.

Any other good points for object brokers? Why yes, thanks for asking. There are several but the ones that I like best are:

1. You can keep track of the number of each type of object and track down leaked resources very quickly indeed. Memory usage bloats. You look at the brokers and see that there are 20,000 extant employee records in a company with 300 employees. I guess that someone isn’t returning them and there should only be a few places that do that operation.

2. Debugging – since you have access to them all, you can have a debug function dump them all to a file and get a snapshot of the state.

3. Need to change the algorithm for providing the objects? Just the one place to change.
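To make the idea a little less abstract, here is a bare-bones broker for a single object type. It is a sketch rather than production code - a real one would be thread safe and probably templated over the object type, and the names are invented:

#include <vector>
#include <cstddef>

class EmployeeRecord
{
public:
    void Reset() { /* put every field back to a known state */ }
};

class EmployeeBroker
{
public:
    EmployeeBroker() : m_outstanding(0) {}

    ~EmployeeBroker()
    {
        for (size_t i = 0; i < m_free.size(); ++i)
            delete m_free[i];
    }

    EmployeeRecord* Acquire()
    {
        ++m_outstanding;                    // point 1: we always know how many are out there
        if (!m_free.empty())
        {
            EmployeeRecord* r = m_free.back();
            m_free.pop_back();
            return r;                       // recycled - the heap manager never hears about it
        }
        return new EmployeeRecord;          // only touch the heap when the pool is empty
    }

    void Release(EmployeeRecord* r)
    {
        --m_outstanding;
        r->Reset();
        m_free.push_back(r);                // keep it for next time rather than freeing it
    }

    size_t Outstanding() const { return m_outstanding; }   // 20,000 of these in a 300 person firm? A leak.

private:
    std::vector<EmployeeRecord*> m_free;
    size_t m_outstanding;
};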


Now, I spoke at some length on my old blog about exceptions but they are a very good thing and a very bad thing. The good thing about them is that they are a powerful and structured way of handling what has gone wrong with your processing. You throw an exception in the case of an error (raise an error if you are a VB6 programmer – and thank you for keeping the faith) and otherwise your code merrily goes on its way, confident that it passed that check and the state is as it expected. Exceptions are great for signalling that something exceptional has happened and needs to be handled. When you see them used for anything else, then you have to consider whether this is a stunningly brilliant move or simply a really bad idea. I am still waiting to see the brilliant alternative use of exceptions but feel free to mail me if you have one.

Just to recap then, an exception is the highest priority change of execution that you can have in user mode. The processor pipeline? Gone. The processor cache? Gone. Probability of page faults? Very high. To quote figures from Raymond Chen, a reference to memory in L1 cache takes about 2-3 times longer than an average instruction. Level 2 cache will take about the same as 6-10 instructions. Main memory is 25-50 instructions. If it is not in the working set and has to come off disk? That is 10,000,000 instructions. If your exception causes 10 page faults and 2 of them are not in the working set, that will give you the same overhead as 20 million instructions. Did you really want that much overhead for a mechanism to tell you that you have reached the end of the string that you were parsing or some other routine thing like that? No, probably not. Of course, you don’t do that, gentle reader, but I bet you have less faith in Bob down the hall or the intern who wrote what turned out to be your best selling product.
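To put some code behind the end-of-string example - this is invented for illustration, not taken from anyone's product:

#include <string>
#include <stdexcept>
#include <cctype>

// The expensive way: let std::out_of_range tell us the string has finished.
// Every single call pays the exception machinery for an entirely routine event.
int CountDigitsTheHardWay(const std::string& s)
{
    int digits = 0;
    try
    {
        for (size_t i = 0; ; ++i)
            if (std::isdigit(static_cast<unsigned char>(s.at(i)))) ++digits;
    }
    catch (const std::out_of_range&)
    {
        // reaching the end is not exceptional - it happens every time
    }
    return digits;
}

// The cheap way: a plain bounds check, no pipeline flush, no cache damage.
int CountDigits(const std::string& s)
{
    int digits = 0;
    for (size_t i = 0; i < s.size(); ++i)
        if (std::isdigit(static_cast<unsigned char>(s[i]))) ++digits;
    return digits;
}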

Oh, and on the subject of exceptions, my biggest red flag (shown here in Pseudo.NET but it is common to all languages. VB6 users would call it On Error Resume Next)

Try
{
// some code
} Catch (…) {}

An empty catch block is a way of suppressing all exceptions. You didn’t get memory when you asked for it? Never mind and carry on. You didn’t save the file? Never mind, carry on. I have only once written code that intentionally ignored failures and that was used to emergency shut down a bit of high voltage equipment that needed staged shutdowns to avoid damage. A typical case where the emergency shutdown would be called was when the hardware was electrocuting someone. At that point, all considerations were secondary to stopping the power and if hardware cooked, hardware cooked. I would be very inclined to ask questions if I saw someone else doing the same.

So, that is all that I have time for today but my next entry will be on debug code, when and where to have it. Or possibly something different if there is breaking news in another area of interest.

By the way, questions are welcome. You can reach me at Contact@DigitalLookingGlass.co.uk and don’t be shy. You can have 2 hours free consultancy on debugging, code reviews or malware or ask me to address a point in my blog if you want. Feel free to disagree with anything that I say. Debate is good.

Signing off

Mark Long, Digital Looking Glass

Tuesday 19 August 2008

Bizarre Clipboard attack linking to “greyware” sites

There seems to be an interesting little wrinkle in the malware saga. A new piece of malware has been detected which overwrites the clipboard with a link to some bogus malware removal tool. It is not clear exactly how it is doing this but we can gather some information from the reports.

It overwrites the clipboard – there are not that many ways of doing that so a breakpoint on SetClipboardData would probably tell a lot about what is doing it. It seems to do it on a timer so looking at who was setting up timers would also be of use – and the WM_TIMER messages would tell you at least one of its windows.
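For anyone who has not written clipboard code, the legitimate call sequence that such a breakpoint would catch looks roughly like this (a minimal sketch, function name invented, error handling mostly omitted):

#include <windows.h>
#include <string.h>

// The canonical way to put text on the clipboard - and the choke point where
// a breakpoint on SetClipboardData catches the culprit red handed.
void PutTextOnClipboard(HWND owner, const char* text)
{
    if (!OpenClipboard(owner)) return;
    EmptyClipboard();

    size_t len = strlen(text) + 1;
    HGLOBAL hMem = GlobalAlloc(GMEM_MOVEABLE, len);
    if (hMem)
    {
        void* p = GlobalLock(hMem);
        if (p)
        {
            memcpy(p, text, len);
            GlobalUnlock(hMem);
            SetClipboardData(CF_TEXT, hMem);   // the system owns hMem from here on
        }
    }
    CloseClipboard();
}

The malware is presumably doing much the same thing from a WM_TIMER handler, which is why the timers are the other obvious thing to watch.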

It appears to be memory resident. Initial reports from victims say that restarting the machine stops the clipboard overwrite. This isn’t something so commonly seen these days but a lot of anti-virus products focus on checking file reads and writes and this may be an attempt to avoid detection.

There has been a lot of speculation that this is linked to the odd news SPAM that has been doing the rounds. Here is a sample:


“From: Top News Agency
Sent: Monday, August 18, 2008 9:47 PM
To:
Subject: Weekly top news


Richardson: I was a little 'uneasy' about a Clinton roll call

New Mexico Gov. Bill Richardson said he's now comfortable with Sen. Hillary Clinton placing her name in nomination at the Democratic convention, but he admitted he was uneasy about the move at first


Read All (43) breaking news [link omitted]
AND 24 shocking videos [link omitted]”

The links were to a firm of lawyers and the site had probably been hacked. It was a Linux machine running Apache.

The link that appears on the clipboard is for a pretty standard bogus anti-malware product of the type that seems so common these days.

If anyone finds a machine which has the odd overwritten clipboard behaviour, a dump of kernel memory would be very revealing. I would like to look at that.

Until next time, signing off

Mark Long, Digital Looking Glass

Monday 18 August 2008

Hello again

I had a couple of questions from readers. I answered most of these on a 1:1 basis but there were a few common ones that I would like to address here.

The first question was “What could a successful SQL injection attack do?” Well, that is a good question and it very much depends on what your website holds. If it is a website containing pictures of your cats, then the worst that will happen to you is that you might have your website defaced. Your visitors might be attacked by maliciously uploaded HTML and scripts but that is unlikely to harm you – at least, not until you browse to your own site. Quite often it is the admin who checks the site. Not a good account to have compromised. What if it is not a website of your cats? Maybe your database is on the same machine as Internet Information Server or the Apache web server. That isn’t at all uncommon in small businesses. Windows Small Business Server is designed to work that way. Ok, a successful SQL injection attack will give the attacker the ability to read any of the database records at least. If the rights are left at the default, then they can also write to the tables. If you have xp_cmdshell enabled then they can rapidly install a remote admin tool and own the box. On Small Business Server, the account used is normally an admin. An admin on a domain controller is a domain admin. You just lost your domain.

The second question that I got asked multiple times was where people learned these skills. Well, it is well known that hackers are exponents of open source – of course, so are many, many perfectly legitimate professionals and I have some open source software that I use daily. However, the hackers all love open source and they tend to be open with information too. Want to know how to hack? No charge! Just share the love at http://www.hackthissite.org/ and practice to your heart’s content.

So, what can you do to protect yourself? The good news is that there is quite a bit that you can do. The bad news is that a lot of this is risk reduction rather than risk elimination but anything that makes life harder for the bad guys is all to the good. There is more than this but these are very good starts.

1. Code review, at least of your web interfaces. So, imagine that you have a classic ASP app with a COM object called from the script running on an IIS server. The methods callable from the script have to be your main focus. Wherever you have user supplied text being appended into a SQL statement, there is a probable exploit. User supplied text needs to be validated before use (there is a short sketch of the safer approach after this list). The validation rules are complex and hackers will look for a way around them.

2. Design with a compromise in mind. If your application were subverted, it should be able to do the least possible harm. Does it need write access to all those tables? Probably not. A great many tables, such as the web page content, will never be modified by the web application. Why does the account that reads them have permission to write to them? Limit accounts as much as possible.

3. The web facing systems should be in a domain with one way trust to the main domain if you have multiple domains – and for all but the smallest companies, the overhead is worth the security.

4. This one is a bit controversial. I know that banner adverts are a source of revenue. You can do some really good looking stuff with Silverlight or Flash and dynamic content. However, it is sometimes overkill and static HTML works and is safe every time.
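Since point 1 is where most of the damage happens, here is a small sketch of the difference between pasting user text into SQL and binding it as a parameter. The table, column and handle names are invented, and I am using ODBC from C++ purely for illustration - a classic ASP app would use an ADO Command object with parameters to the same effect:

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <string>

// DANGEROUS: user-supplied text pasted straight into the statement.
// An input of   x' OR '1'='1   quietly changes the meaning of the query.
std::string BuildQueryBadly(const std::string& userName)
{
    return "SELECT * FROM Customers WHERE Name = '" + userName + "'";
}

// Safer: the statement text is fixed and the user text travels as a bound
// parameter, so it can never be interpreted as SQL. hstmt is assumed to be
// an ODBC statement handle on an open connection.
bool FindCustomer(SQLHSTMT hstmt, const char* userName)
{
    SQLCHAR query[] = "SELECT * FROM Customers WHERE Name = ?";
    if (!SQL_SUCCEEDED(SQLPrepare(hstmt, query, SQL_NTS))) return false;

    SQLLEN nameLen = SQL_NTS;
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     50, 0, (SQLPOINTER)userName, 0, &nameLen);

    return SQL_SUCCEEDED(SQLExecute(hstmt));
}

None of this removes the need to validate the input - parameters stop the text being executed as SQL but they don't stop someone putting nonsense in your tables.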

I could talk about these things for hours and indeed, often have. However, that is it for today. If you want some advice, the contact number is on the website. The first 2 hours are free. Hard to beat that as a price point.

Signing off

Mark Long, Digital Looking Glass

Friday 15 August 2008

SQL Injection attacks now on industrial scale

SQL Injection attacks are old school and well known. How well known? Well, check out popular web comic xkcd http://xkcd.com/327/

So, if they are that well known, there can’t be a problem with them any more because people will have protected themselves against them, yes? Uh, not so much. The pace is increasing but the patterns are changing. Let us look at an old school SQL injection attack and a currently popular one. Oh, I will be discussing some specifics of how they are done because the information is already widely known and it would be shutting the stable door after the horse has bolted if I skipped those.

Old School

A hacker injects some SQL into a text field on a website – or sometimes into the URL. You don’t see websites that pass the SQL as part of the URL any more but it was very obvious that they would be subject to abuse. There were a number of quick and dirty solutions to this. Some were better than others. One cheap technique was to put the whole site in a frame so that the user doesn’t get to see the URL. That works fine for an average user but when there are tools like the excellent Fiddler (http://www.fiddlertool.com/Fiddler/help/hookup.asp) about, that won’t help. A lot of sites use hidden text fields – these show up just fine as well. Anyway, there are a number of ways of spying on the HTTP traffic. Most of the time, none of that is even necessary and you can just type the SQL directly into a text field on the form, and that is what the old school script kiddie did.

They would then tag (deface) the web pages if they were doing it for bragging rights or, if they were looking to steal, they would either write SQL to dump out tables full of valuable data or sometimes they would look for a helpful stored procedure to get them to a command shell. Once you had a command shell, a remote admin tool would be uploaded to the site and the hacker would have a nice high rights account to play with. Data theft was the most common motivation.

New School

In the classic old school approach, the hackers would find individual sites and pick away at them. It was craftsmanship in a way. Ok, a grotty and illegal way but craftsmanship all the same. When organised crime got in on the act, they didn’t like the slow, handpicked approach. They embraced new tools such as the Goolag scanner (http://www.goolag.org/). There isn’t actually all that much to the Goolag scanner. All it does is perform custom searches on Google for vulnerable versions of software running on servers. You could do the same thing from a browser window but the scanner automates the process and saves a lot of time when looking for websites to hit. This tool was brought to the world by the Cult of the Dead Cow, a very well known group that did some seminal work on breaking the codes that protect nominally secure transactions on the web. Anyhow, the Goolag tool gives an automated way of finding sites to attack. Point it at a top level domain, tell it what sort of vulnerability to look for and let it run.

So, this gives the hacker a slowly growing list of websites that will be vulnerable. Only it isn’t “the hacker” any more. 16 year old script kiddies are the exception rather than the rule these days. There are a few lone wolves out there but more commonly, there will be a team of moderately skilled individuals working for a technical lead of some sort. They seem to mainly work from a set of written instructions and don’t show a great deal of variance from a standard procedure. That said, you do sometimes find a bright one and I suspect that is when their technical lead has become involved.

These commercial hackers want to find the vulnerable servers for a different reason to the average script kiddie. They want to change the content of your website to run exploits against your clients to install malware on their PCs. This generally works well if your customers are often unpatched – and in real life, that is the most normal case. What if they are patched? Your user knows your web site and trusts your company. If they can’t be exploited because they are well patched, they are still likely to install a component if your company’s website asks them to.

So, if you don’t protect against SQL injection attacks, are you putting your customers at risk? Yes, you surely are. Does that mean that your own servers are not at risk? Nope, far from it. There is nothing to stop the server being a host for malware while its data is harvested. They can get you coming and going. Why do they do it? Because it is their job.

Can you protect against this? Well, yes you can. I will be talking about how in my next blog entry.

By the way, when this particular xkcd comic came out, a lot of people sent it to me unbidden: http://www.xkcd.com/350/ I had to smile

Signing off

Mark Long, Digital Looking Glass

Wednesday 13 August 2008

Vista Security - a reaction to an article on NeoWin

“Vista’s Security Rendered Completely Useless by New Exploit” read the headline on the Neowin site. That sounds big and scary, doesn’t it? Here is a link to the article http://www.neowin.net/news/main/08/08/08/vista39s-security-rendered-completely-useless-by-new-exploit

Let us have a look at what this is all about. The reality is rather different from the claims here. Mark Dowd (IBM) and Alexander Sotirov (VMware) wrote a white paper and gave a presentation at Blackhat (a well known convention for people involved in security, penetration testing and, well, let us be honest here, hacking) on how to exploit a particular vulnerability in Vista. I wasn’t at Blackhat this year so didn’t see the presentation but I have read the white paper and it is well written and scholarly. The paper has the title “Bypassing Browser Memory Protections” and the subtitle “Setting back browser security by 10 years”. I don’t agree with everything that they say but I think that it was an interesting read.

What they show is that a particular known and fixed vulnerability can be exploited on unpatched systems using a combination of techniques in a particular case. This is impressive but a very, very long way from showing that Vista Security is entirely broken. There are some particular reasons why I say that this is very different and I will explain my reasoning. Oh, I have to say that I will not be using any specialist knowledge of Microsoft operating system code and did not work on the particular vulnerability discussed. I am just working from the white paper. Anyway, on to why I disagree.

The first reason is that the Vista security model holds up fine even on the exploited system. The exploit is in the browser. The browser runs web content in restricted mode. What does this mean? It means that the code can only do a very few things. Yes, it is arbitrary code of the attacker’s choosing so that is good for the attacker but let us review what can be done in restricted mode. It is documented fully here http://msdn.microsoft.com/en-us/library/bb250462(VS.85).aspx but let me save you some reading. Code running in protected mode can write to Temporary Internet Files or the Low folder of the user. It can also write to the HKEY_CURRENT_USER\Software\LowRegistry section of the registry and send certain well defined Windows messages to other processes. These are messages believed to be very safe indeed. What does this mean in simple terms? It means that you can display a message box but you can’t silently hook in a keylogger even if the user is an admin. Vista security has limited the compromise rather nicely. Does that square with “Vista’s Security Rendered Completely Useless by New Exploit”? No, not at all.

To come to the second reason, I would disagree that this is a general case as the white paper claims. To be a little more specific:

1. In the exploited case, there was an overflow in a buffer that wasn’t being treated as a buffer. This was the ANI buffer overflow (CVE-2007-0038). This was patched by Microsoft in MS07-017 (KB925902 to its friends). This was an odd case because there was no buffer as such used. A structure was overrun that hadn’t qualified for checking and where (according to the white paper) a user defined number of bytes could be written into a fixed length structure. The curious thing is that you would normally write code that expected to populate a fixed length structure to assume that the number of bytes to copy was the same as the length of the structure. This made it very different to the more usual buffer overrun cases that have been exploited before. There are additional safeguards (as Dowd and Sotirov discuss) that protect in the more usual case.

In their discussion of SEH chain validation, they say that you can bypass the validation by changing memory (in this case, a table of exception handler addresses) in some specific ways. This is a new feature in Vista and Server 2008 which normally stops attackers who have compromised the stack from overwriting the exception handler pointer. Anyway, they claim to have found a way of disabling this. To do that, you would need to be able to run custom code (in which case you have already won) or do a memory write where you controlled at least 2 registers and had a replacement SEH in executable memory. If you can load custom code and control multiple registers then you have already compromised the machine. I don’t see that they are adding much with this. This is pretty much special case stuff.

Their point about not all functions with buffers having stack cookies is a valid one but hardly news. As they point out, there was already a vulnerability known that exploited that. Unless they have some other similar unpatched vulnerability then this is an academic point only.

They make much of the ability to overwrite 4 bytes on the stack before the stack cookie is checked but with load addresses of modules randomised, that doesn’t help you very much. In the most common case, an exploit that allowed a stack write could let you change a value in the memory space of the process immediately before it was torn down for failing the stack cookie check. As they point out, heap exploits on Vista are pretty much impossible to use so stack overwrites are about all you have. I don’t see this as anything to get excited about…

…except if you overwrite the SEH list as they say. However, before this is of any use, you need to have injected some code for the SEH to point at and know where it is and then cause an exception that would be handled by that exception handler. So, for their compromise to be useful, the box already needs to have been controlled to the extent that you have custom code loaded at a known location AND you have a buffer overrun AND you know where the SEH list is stored AND you need to know where in the list to write which is tricky because this is a linked list which you need to walk - which you can’t because all you have at this point is the ability to write 4 bytes of memory.

As they point out, Vista doesn’t use lookaside lists so heap unlinking attacks don’t work. Can we call this “Setting back browser security 10 years”? No, I don’t think we can in all fairness.

Yes, they are right to say that Java applets could be used for heap spraying but that is not in and of itself exploitable. Documents of all types can also be used. This is hardly new or operating system specific.

It is also true that many third party applications are not ASLR aware but they are no more vulnerable than they were on any previous operating system. It also has very little to do with hacking the browser.

In short, it was an interesting read but the article as published could hardly be called an accurate representation of the facts to hand.

Until next time, Signing off

Mark Long, Digital Looking Glass Ltd.