There is a lot of chatter at the moment about a security vulnerability that affects all versions of Internet Explorer. What seems to have happened is:
1. Someone found a remote code execution vulnerability exploitable from IE.
2. Someone packaged malware to install via this vulnerability. At the moment, the reports say that it is stealing game passwords but hey, if the bad guy can run arbitrary code then it could do more than that. The malware is not being recognised by many scanners at the moment, and it could change at any time. The malware seems to have an all-numeric name and loads into svchost.
3. Someone hacked a bunch of websites to include malicious content. In most or all cases, this was done using a SQL injection attack. It continues to amaze me that there are still sites vulnerable to this class of attack as a trivial code review can find that type of flaw.
So, the situation as I write is that all versions of IE are vulnerable to this form of attack. However, you probably could not get infected via an HTML email, because scripting is disabled there by default and because, on Windows Server 2003, Server 2008 and Vista, the rights granted to HTML displayed in the mail client or the browser are so reduced that the malware shouldn’t be able to hook itself in.
Now, Microsoft are calling it an IE vulnerability, but the mitigation advice includes unregistering oledb32.dll, which suggests that it isn’t really IE that is at fault – IE is just passing along information from a script and the underlying OS component has the issue. If that is the case then I would be willing to bet that this is exploitable from Office as well, though there are no current reports of that. The advisory also says that the issue is with data binding. Since OLE DB is a COM DLL and there is no direct way of calling into a DLL from JScript anyway, the exploit is going to look like a couple of data binds sharing an object of some sort. There won’t be an external database, just some XML embedded in the HTML.
One of the mitigations that Microsoft are offering is to turn on DEP which means that this has to be an old school exploit involving a stack overrun so you shouldn’t expect to see a separate payload on the heap. The installation code should be right there in the XML.
So far, there is no clear pattern as to what sort of sites are hosting this: a Chinese motherboard manufacturer, some porn sites, a Taiwanese search engine and a couple of sites in Hong Kong, most of which are in Chinese. Spotting a pattern? The hackers can speak Mandarin. What is being stolen? World of Warcraft passwords, among others. I would suspect that a gold farming operation has decided to expand.
Much is being made in the press about how open Microsoft have been about this vulnerability and some people have drawn the conclusion that this is an especially bad vulnerability. Hmmm, does that bear up to examination? Remote Code Execution vulnerabilities are fairly common in all browsers. MS08-052 patched an important one in GDIPLUS, a much patched component. MS07-055 was another, that time in the vector markup parser – and again, it needed repatching later that year because the same errors were found in other code in the same module. MS07-045? Some were patched there too. MS07-058 also resolved remote code execution vulnerabilities accessible via Internet Explorer. On a technical level, the only unusual thing is that this particular vulnerability doesn’t need a separate payload on the heap. This one is only unusually bad because there are exploits on the web for it.
Signing off
Mark Long, Digital Looking Glass Ltd
Tuesday, 16 December 2008
Saturday, 13 December 2008
Performing to expectations
There are good and bad points about running a small consultancy. I would like to focus on one of the good things though. If I can steal a quote from an old American Theatre manager, “Every day, the same thing. Variety!”
So, last week was largely involved in coding in good old VB6. This past week has been partially spent writing a guide on securing home PCs to protect children and bank details. However, I also did some work on how to troubleshoot performance issues for some people that didn’t want to hire outside talent for the work but needed the skills. That is OK with me. I always enjoy mentoring and teaching. I thought that it would be good to share the basics with a wider audience so I will blog about it here.
There are a couple of odd things about performance tuning. The first is that the law of diminishing returns tends to cut in long before you reach the theoretical limit. There comes a time when the cost vs benefit equation comes out against further change. The second is that it frustrates managers for reasons that will quickly become apparent.
So, the first step is to find the bottleneck. Are we memory bound or CPU bound or I/O bound – and with virtual memory, memory bound can add to I/O bound.
Memory bound applications are not quite what they used to be. When I was a kid, I had an Acorn Atom. In fact, I had the world’s fastest Acorn Atom since I had replaced the 1MHz 6502 with a 2MHz 6502A, which I ran at 4MHz using a bolt-on heat sink (rare for processors in those days) and a 5V line running at 7.2 volts. That puppy used 2114L RAM chips, each organised as 1K x 4 bits. Put a pair of them on the bus and you have 1K byte of memory; eight of them give you 4K. Each of those chips cost £24 at the time. I see that they are now available from specialist dealers for £1.40, but £24 in 1980 money is roughly £83 today, so that works out at about £664 (around $992) for 4K of memory.
These days, you can get 1GB for less than £17, so the problem is normally not that there is too little memory to back the address space but that there is considerable contention for it. A prime candidate for this sort of problem is a server used for multiple purposes. Small Business Server has to be a domain controller and an IIS box and an Exchange Server and a SQL Server host. That is a lot for one box. Adding a memory hungry application is not going to help matters at all and most people don’t try. However, you often see IIS and SQL Server on the same box and both are big users of memory. While Server 2008 has made some improvements in this area and 64 bit servers are more common, there are still a lot of applications that hit problems. The key is looking at the page faults per second. The number will vary depending on the application, but if it looks too high then you probably need to tune the memory use and give yourself some head room, if such a thing is possible within the address space restrictions. The ASKPERF blog discusses this in much more detail. Oh, and overworked .NET apps tend to use a LOT of memory because the garbage collector gets starved. Always look at workload first with them.
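If you just want a quick feel for which processes are paging hardest without building a full Perfmon log, a rough watcher along the following lines can help. This is a minimal Python sketch and assumes the psutil package; on Windows, psutil’s per-process memory_info() exposes a num_page_faults counter, so sampling the difference over a short interval highlights the worst offenders.
# Rough page fault watcher - a sketch, assuming the psutil package.
# On Windows, memory_info() includes a num_page_faults field; the delta
# over the sample interval approximates each process's fault rate.
import time
import psutil

INTERVAL = 5  # seconds between samples

def snapshot():
    counts = {}
    for proc in psutil.process_iter(["pid", "name", "memory_info"]):
        try:
            mem = proc.info["memory_info"]
            if mem is not None and hasattr(mem, "num_page_faults"):
                counts[proc.info["pid"]] = (proc.info["name"], mem.num_page_faults)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return counts

before = snapshot()
time.sleep(INTERVAL)
after = snapshot()

deltas = [(faults - before[pid][1], name, pid)
          for pid, (name, faults) in after.items() if pid in before]

# Show the ten busiest faulters over the sample window
for delta, name, pid in sorted(deltas, reverse=True)[:10]:
    print(f"{delta / INTERVAL:8.1f} faults/sec  {name} (pid {pid})")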
CPU bound processes are perhaps more interesting. As always, Perfmon is your friend and you can get a lot of information from looking at thread activity and the percentage of time spent in kernel mode. However, please be aware of something very important: these figures are best estimates and can’t be taken as gospel. Apps that thrash the CPU fall into two camps: those that really are that CPU intensive and those that are doing unnecessary work. Calculating Pi to a million places is CPU intensive. Cracking codes is CPU intensive. If you are serving web pages or doing database updates or something else that isn’t number crunching, then it shouldn’t be that CPU intensive. You need to discover where the CPU is being wasted. Heap management is a classic. If you fragment the heap badly through sloppy memory allocation and deallocation, the heap manager will spend a lot of time cleaning up. Consider object brokers, as they are often the answer. Do you have too many threads? For CPU intensive tasks, you should have fewer threads than for I/O bound tasks. If we are talking about a database server that waits for the DB to return records which are then processed, then 50 threads per CPU might well be perfectly healthy. If you are crunching through large arrays then 5 threads per CPU might be too many. Please remember that thread switching is not free. Oh, and if your process is spending too much time in kernel mode then you might want to consider what drivers you have and what you are asking the system to do. Finally, you might have to hand tune code to make it more efficient. I discussed this back in 2005.
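To make the object broker point concrete, here is a minimal sketch of the idea in Python; the Widget class and pool size are invented for illustration, but the principle is the same in any language: recycle expensive objects instead of letting allocation and deallocation churn the heap.
# Minimal object pool sketch - the pooled class and sizes are illustrative.
import queue

class Widget:
    """Stand-in for an object that is expensive to construct."""
    def __init__(self):
        self.buffer = bytearray(1024 * 1024)  # pretend this hurts to allocate

    def reset(self):
        # Clear per-use state so the instance can safely be handed out again
        self.buffer[:] = bytes(len(self.buffer))

class WidgetPool:
    def __init__(self, size=8):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(Widget())

    def acquire(self):
        # Blocks if every instance is in use - crude but effective back-pressure
        return self._pool.get()

    def release(self, widget):
        widget.reset()
        self._pool.put(widget)

pool = WidgetPool()
w = pool.acquire()
try:
    pass  # ... do the work that needs a Widget ...
finally:
    pool.release(w)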
I/O bound processes spend most of their lives waiting, so CPU utilisation will typically be low. There are really two approaches here. The first is to speed up the I/O operation itself. Disk transfer rates vary from around 45MB/s to 3GB/s and seek times from around 2ms up to 15ms per seek, so faster hardware can make a big difference, especially if the hard drive has a decent cache buffer or if you can cache in software. Faster network links can help too. The other approach is to minimise I/O by careful caching of data. A small read-only table may as well be held in memory. There is no need to pull back more fields from a database than you will use. You could even look at offloading reading and writing to another process in some cases. Typically, you need to consider more than one of these options.
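As a sketch of the “hold the small read-only table in memory” idea, something like the following works; the table, queries and in-memory database are invented for illustration, and functools.lru_cache does the per-key caching.
# Read-through caching sketch - the table and queries are illustrative.
import sqlite3
from functools import lru_cache

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE country_codes (code TEXT PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO country_codes VALUES ('GB', 'United Kingdom'), ('US', 'United States')")

_COUNTRY_CODES = None  # small, read-only reference table held in memory

def country_codes():
    """Load the whole table once; afterwards every lookup is a dictionary hit."""
    global _COUNTRY_CODES
    if _COUNTRY_CODES is None:
        rows = db.execute("SELECT code, name FROM country_codes").fetchall()
        _COUNTRY_CODES = dict(rows)
    return _COUNTRY_CODES

@lru_cache(maxsize=4096)
def country_name(code):
    # Per-key cache: repeat lookups never touch the database again,
    # and we only pull back the one column we actually need.
    row = db.execute("SELECT name FROM country_codes WHERE code = ?", (code,)).fetchone()
    return row[0] if row else None

print(country_codes()["GB"])   # dictionary hit after the first load
print(country_name("US"))      # cached after the first call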
So, why does this frustrate managers? Well, because there is no clearly defined end to this process, there is no specific end date by which you will have results. Try putting that on a Gantt chart! The other reason is that progress is very non-linear. You find a bottleneck and fix it. You immediately hit a second bottleneck. You fix it. If you have chosen well, initial progress is rapid. Because of the law of diminishing returns, you will make less dramatic improvements over time. The manager gets to see less and less success over each iteration. To many people, that seems like you are getting worse at what you do so that is one to message carefully.
I hope that this helps someone
Signing off,
Mark Long, Digital Looking Glass Ltd
Wednesday, 10 December 2008
Are two better than one? Not always, IMHO
Although selling advice is what I now do for a living, I try to help out on the newsgroups as much as I can. I am a firm believer that you have to give something back as well as taking. I am no doctor or spiritual leader. I am a technical type. I give technical information.
One question that I answered on a newsgroup involved a very routine malware infection and there was a free anti-malware product that would remove it to a reasonable level of certainty. I recommended uninstalling the previously installed anti-malware solution first. Some people contacted me to say that they didn’t agree with that advice. Well, that is fine. Disagreement can be good. However, I disagreed with their reasoning. They argued that 2 anti-malware products would offer better protection. At most, one should be turned off during the scan, they suggested.
The reason that I recommended uninstalling rather than just “turning off” the existing checker is that anti-malware programs typically work either by inserting redirects into a structure called the KiServiceTable, which sits at the boundary between user mode calls and the kernel functions that service them, or by patching the first instructions of the kernel functions reached through the KiServiceTable. They do this so that they can monitor system activity by watching the requests being made. It is a good technique, but there is no safe way to reverse it on a running system because there is no built-in synchronisation that lets you pause all kernel operations while you effectively rewrite the kernel. Accordingly, turning off a malware checker doesn’t always unhook it from the system; it just causes it to ignore whatever it sees. So, disabling an AV product is not the same as removing it.
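The kernel detail is hard to show in a few lines, but a user-mode analogy makes the point about “disabled” versus “removed”. In this Python sketch, a monitor wraps a function much as a hook wraps a system service: switching the monitor off leaves the wrapper sitting in the call path, silently passing everything through, and only restoring the original pointer actually removes it.
# User-mode analogy only: a "hook" that stays in the call path when disabled.
import os

monitor_enabled = True
_original_listdir = os.listdir  # keep a pointer to the real function

def hooked_listdir(path="."):
    if monitor_enabled:
        print(f"[monitor] listdir({path!r})")  # inspect the request
    # Disabled == still hooked, just forwarding the call untouched
    return _original_listdir(path)

os.listdir = hooked_listdir      # install the hook
os.listdir(".")                  # monitored
monitor_enabled = False
os.listdir(".")                  # hook still present, silently forwarding
os.listdir = _original_listdir   # only this actually removes the hook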
Now, anti-malware products work by subverting the system, by getting inside its internal functionality and modifying its behaviour. OK, this is good and proper and done for the good of the user, more or less with his or her consent. However, malware does the same thing for malicious reasons without the user’s informed consent. Here we have a competition. Everyone wants to be the first to subvert the system – as the saying goes, he who hooks lowest wins. When you are at the same level, the first hook is effectively the lowest because it can control what happens after that point. If an anti-malware program finds that there are already hooks in place that subvert the system, what will it do? Well, it might set up a chain where one checker is called after the other, in which case things work but they are a bit slow. That can happen accidentally if the two use different hooking strategies. Alternatively, the second program to run might override some of the redirection and consider the other anti-malware product as possibly hostile. You could, and sometimes do, end up with some system calls monitored by one program and others monitored by the second.
So, what actually happens when you have 2 anti-malware programs trying to do the same job? No-one knows. It varies according to what decisions the programmers made and what order they start. Was that combination tested? It seems unlikely. If the products were tested together, were these versions tested together? Almost certainly not. It is normally considered “an unsupported scenario” which is code for “We don’t know what will happen or we expect it to break and don’t care”.
Are you much safer with two, assuming that they work? Not so much. Virus signatures are shared (via the Virus Information Alliance), so anti-malware checkers with up to date signatures typically detect pretty much the same subset of malware as each other and fail to detect pretty much the same subset. Accordingly, the gain from running two is marginal at best, even if they do play nicely together – and that is far from certain. Of course, if one of the programs were much weaker than average then the second could help, but why would you be running a lame antivirus in the first place?
I don’t know of any cut and dried research on this though. As it stands, it is just my professional opinion. So much of our work against malware is at the limits of knowledge because each week there are new variants and new exploits. Several times each day, vendors release new signatures. The industry is running as hard as it can to keep up and, frankly, it is losing. Infections are up 100%. Spam is up more than 90%. In such shifting sands, a best guess is often all that you have.
We live in interesting times and the road promises to get bumpier before it smooths out
Signing off,
Mark Long, Digital Looking Glass
Wednesday, 3 December 2008
Bugs, threats and seasonal events.
As I write, I am still warming up after a very unsuccessful attempt to get to London by train. An hour and a half waiting on a station platform gives plenty of time for thought but my fingers were soon too numb to use my PDA.
In a break from tradition, I am going to name and shame someone responsible for a bug that I recently was involved in fixing. This was one of mine and was interesting because it was rather subtle. It was in some VB6 code that I wrote the other day and was of the form
If Len(txtSomething) And Len(txtSomethingElse) Then
   cmdOK.Enabled = True
Else
   cmdOK.Enabled = False
End If
So, the idea was that the button is only enabled if there is text in both fields. I am a big fan of not letting people make errors in the first place if possible. I had thought (correctly) that Len(whatever) would give 0 (false) or something else (true). The code worked most of the time. It took me a second or two to work out why it didn’t always. Compilers use a lot of state machines, and the parser reached this code expecting a boolean; what I had given it was a pair of integers, so it had to coerce the result of the expression (the output of the Len functions) into a boolean. Was there a way of making “integer And integer” into a boolean? Why yes, there was. VB doesn’t make a distinction between logical and bitwise And – it uses the same keyword for both, unlike C which uses && and & respectively. Now, maybe that was a good design decision and maybe it wasn’t, but it was one that I should have remembered. As written, the code was ambiguous and the parser went for the simpler option, a bitwise And of the two lengths. 12 And 8 gives 8, which is non-zero, so the control was enabled; 8 And 4 gives 0, so it was disabled. A less ambiguous bit of coding would have been
cmdOK.Enabled = Len(txtSomething) * Len(txtSomethingElse)
but I couldn’t bring myself to write such unintuitive code and a multiplication for a boolean operation seems wasteful although it would have made no actual difference in this case. The best coding would have been
cmdOK.Enabled = (Len(txtSomething) > 0) And (Len(txtSomethingElse) > 0)
As for threats, it seems that Srizbi is back on the air. The bot and the bot master had a trick up their sleeves that the security community had not expected. If the bot is unable to contact its command and control channel, it generates a URL algorithmically and refers to that for instructions. The bot masters had the URL ready and most of the botnet was picked up again on schedule. I have to applaud our Russian friends for that. Fortunately, it is relatively simple to simulate the loss of a command and control system in the lab, so we can anticipate where the bots will go next time. I still think that a peer to peer system like the one Storm used is the way to go in the long term. Oh, and a big hello to my readers at the Washington Post. You heard it here first.
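For readers who haven’t met the technique, the fallback works roughly like this. The sketch below is a toy domain generation scheme in Python, not Srizbi’s actual algorithm: because both the bot and the bot master can derive the same names from nothing more than the date, they can find each other again without any prior communication.
# Toy domain generation algorithm (DGA) - illustrative only, not Srizbi's.
import datetime
import hashlib

def fallback_domains(day, count=5):
    """Bot and controller both compute these names from the date alone."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha1(seed).hexdigest()
        # Map the hash onto letters to get something DNS-friendly
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(name + ".com")
    return domains

print(fallback_domains(datetime.date(2008, 12, 3)))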
In other news, Apple are now recommending that Mac users install some kind of anti-virus product. Previously, their recommendation was that the threat was insufficient to warrant the potential downside of having an AV solution. The world is getting more dangerous, folks.
Oh, and there seems to be a lot of buzz about an enterprise information security package that contains rootkit like technology in a Chinese written module. Some of the AV vendors are detecting it as malicious. Well, it could be but it is hard to know. Increasingly we see security tools that resemble malware more closely as they try to hide from each other. The malware wants to disable the AV product and the AV product wants to disable the malware. It sounds like the new rootkit uses function redirection so the old Rootkit Unhooker tool should detect it.
Well, back to coding. You have to love feature creep.
Signing off
Mark Long, Digital Looking Glass
Monday, 1 December 2008
A trip down (not much) memory lane
As regular readers of this blog (and thanks to all of you for reading, by the way) will know, I debug code, review code and reverse engineer malware. Debugging and security for fun and profit. Well, I find it fun at any rate, and it is my business so I take what profit I can in these difficult days. However, I have spent the last few days coding until the small hours, which is something that I don’t generally do very often.
As always, no names and no pack drill. My customer had bought in a solution that was perfectly good except that it was designed to be single user, with that one user having complete control over all aspects of the data. There is nothing wrong with that except that it needed to work with 70 users, of which 69 would have limited abilities to change the data. I was called in to see if I could make one thing into the other.
It was clear from the start that the answer was “No, sorry, not happening”. However, that left my client in the lurch as they were hard up against a deadline. They needed a solution and they needed it in a hurry. It had to run on low end XP equipped laptops with older versions of Office and couldn’t require any installation. Oh, and I got the specification (on the back of an envelope) on Friday night and it needed to be running for training on Monday and in production on Tuesday. Clearly, that was going to be a challenge – and it had to match the look and feel of the previous solution.
Tricky, eh? .NET was out because the systems didn’t have the required runtime and installation was a problem. Pure C++? That would do the job but a fully functional system in less than 72 hours? Maybe there were people who could have pulled that off but not me. Java? JVM not installed. This wasn’t looking good. So, it would have to be something where all the required files were part of the OS. Hmmm… MSVBVM60.DLL ships with the OS. ADO ships with the OS. I could write it in VB6, an old, old friend of mine. I wouldn’t have any OCX controls to use but I could write controls in the project if needed. It is a RAD environment and that would help a lot. Yes, I could get the customer out of a bind here.
Ok, I haven’t had a lot of sleep over the weekend but I wouldn’t be writing this if there was still a problem. Yes, it is an old technology. It has its limitations. It got the job done nicely though. I was a bit concerned that I would see repeated reloads across the network from the application EXE (it was a single file run from a share) because the memory would be considered discardable. However, I stopped worrying when I built for release. The executable was 60K long. No, that isn’t a typo. It was less than 64K on disk and, even with the recordsets and ADO, still less than 5MB in memory. Four polymorphic forms that pretend to be several more through some control hiding, some validation code, a lot of custom UI code and some fairly unremarkable ADO code – and it had a tiny footprint. The customer wanted their logo added (another 6K) and an attractive high resolution icon (64K), bringing the total to just under 128K. I can live with that level of bloat.
There are a lot of cool things about the new languages and for serious development, you have to be impressed. That is not to say that old school doesn’t sometimes get the job done just fine.
Signing off
Mark Long, Digital Looking Glass Ltd
Friday, 21 November 2008
Encryption - How much is enough, how much is too much?
You might expect me to say that everything should be encrypted to the hilt. Well, that would be overkill. No, the trick is finding the right level of encryption.
I have been asked in the past what would happen if someone came up with an unbreakable code. Would that be game over for cryptanalysis? Well, I confess that I am not a specialist in crypto but I feel pretty secure in answering this one. No, it would not be game over because there are already unbreakable codes. One time pad codes are unbreakable without the pad because all possible messages are equally likely – the same cypher text (the encrypted version) could decrypt to “Move $1 million to account 43445342” or “I want to buy a painting of a goat” and there is no way to tell which from the cypher text alone. The way to attack those is to try to recover the pad – the sequence of nonsense that was used to turn the plain text into cypher text. That could be a very private thing, such as a sheet of rice paper held by only two people in the world and eaten before any third party gets the chance to attempt decryption. It could be something quite public, such as letters from a book chosen at random – each day, you advance one page. One of my favourite pad-based cyphers is the Solitaire Cypher, where the order of a pack of cards is used to encypher the text. It isn’t a true one time pad because the keystream eventually repeats, but it is a favourite for low-tech situations because the only equipment required is a pencil, a bit of paper and an ordinary pack of playing cards. Shuffle the deck and the key is lost forever.
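A minimal sketch of why the one time pad resists analysis, in Python with an XOR pad standing in for the paper one: the same cypher text “decrypts” to any plain text of the same length if you simply postulate the right pad, so the cypher text on its own proves nothing. The messages here are made up.
# One time pad sketch: without the pad, every same-length plain text is possible.
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

real_message = b"Move $1 million to account 43445342"
pad = os.urandom(len(real_message))        # the one time pad - never reused
cypher_text = xor_bytes(real_message, pad)

# The legitimate receiver, holding the pad, recovers the message
assert xor_bytes(cypher_text, pad) == real_message

# An attacker can invent a pad that makes the SAME cypher text say anything
decoy = b"I want to buy a painting of a goat."[:len(real_message)].ljust(len(real_message))
fake_pad = xor_bytes(cypher_text, decoy)
assert xor_bytes(cypher_text, fake_pad) == decoy
print("Both decryptions are internally consistent - the cypher text proves nothing.")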
However, I digress. Popular codes used today are things like 3DES (sometimes pronounced Triple-DES) and AES with 128 bit or 256 bit keys. 3DES is very big in the financial world and replaced single DES. Essentially, 3DES does what DES does three times, processing its own output. Are they unbreakable? Not quite. Single DES is fairly easy to break with the right kit, and 3DES just takes longer and requires more kit. Brute forcing AES256 with a single desktop system is not a realistic prospect at all – and even the 1.105 petaflop/s IBM supercomputer at Los Alamos National Laboratory would, on average, still need a number of years with far more digits than anyone cares to read. Does your data need to be safe for that long?
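Those timescales are just arithmetic, and it is worth doing the sum once. A back-of-envelope calculation in Python, with a deliberately generous guess rate as the only assumption, shows that the keyspace rather than the hardware is the limiting factor.
# Back-of-envelope brute force estimate - the guess rate is an assumption.
keyspace_des = 2 ** 56        # single DES, for comparison
keyspace_aes256 = 2 ** 256    # possible 256-bit AES keys

guesses_per_second = 1e15     # a wildly optimistic, petascale attacker
seconds_per_year = 60 * 60 * 24 * 365

def years_to_exhaust(keyspace, rate):
    # On average you find the key halfway through the search
    return (keyspace / 2) / rate / seconds_per_year

print(f"DES:     {years_to_exhaust(keyspace_des, guesses_per_second):.6f} years")
print(f"AES-256: {years_to_exhaust(keyspace_aes256, guesses_per_second):.3e} years")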
That turns out to be one of the important questions. Imagine you are choosing encryption for a password that will be sent across the wire – and let us ignore the use of hashes for the moment. A password is valid for 1 week and then must be changed. The user can change their own password for 1 week after the old password expires. After that, the help desk have to do it. If the encryption is good enough to stand up for more than 2 weeks, then it is good enough. Making it tougher adds nothing. However, the location of a vault is unlikely to change for hundreds of years. That needs to be secret for a lot longer.
Another important question is how sensitive the data actually is. What I bought on Amazon in the last year? You can see that if you want. A trivial encryption such as ROT13 will do the job here. My interactions with my bank and my lawyer? That is more sensitive. 3DES at least. The launch code for ICBMs? Even if they change fairly often, I think that we should use a good strength cypher on those!
However, there is something about encryption that people often don't consider. It does more than hide information from prying eyes. Imagine that I am running a client that is having a conversation with a server. The request is going over the wire, perhaps via SSL, perhaps via some other scheme. I make a request and the request is coded with a shared secret key that we exchanged at the start of the session – and which is only valid for this session. I get a reply and it is junk until it is decrypted using the shared secret. There is nothing odd about that at all. Millions of systems work that way. So, what would happen if someone tried to hijack the session and inject a new request? Unless they have the shared secret, their request will be decoded into meaningless goo. Since the request probably contains an encrypted copy of some sort of sequence number, it would probably fail at the first hurdle. Knowing the shared secret is a big part of proving that I am still the client that I was at the start of the conversation.
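A small sketch of that pattern in Python, assuming the pyca/cryptography package (any authenticated cypher would do): the session key is agreed at the start, each request carries a sequence number that is authenticated along with the payload, and a request injected by someone who does not hold the key fails authentication before it gets anywhere near the application logic.
# Session-key sketch using AES-GCM - assumes the pyca/cryptography package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

session_key = AESGCM.generate_key(bit_length=256)  # exchanged at session start
channel = AESGCM(session_key)

def send(seq, request):
    nonce = os.urandom(12)
    # The sequence number is authenticated alongside the payload
    return nonce, channel.encrypt(nonce, request, str(seq).encode())

def receive(seq, nonce, blob):
    # Raises InvalidTag if the key is wrong, the data was tampered with,
    # or the sequence number doesn't line up - the injection simply fails.
    return channel.decrypt(nonce, blob, str(seq).encode())

nonce, blob = send(1, b"show me my account balance")
print(receive(1, nonce, blob))

# A hijacker without the shared secret encrypts under their own key...
intruder = AESGCM(AESGCM.generate_key(bit_length=256))
bad_nonce = os.urandom(12)
forged = intruder.encrypt(bad_nonce, b"move the money", b"2")
try:
    receive(2, bad_nonce, forged)  # ...and the server rejects it outright
except InvalidTag:
    print("Forged request rejected: wrong session key")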
How about if an attacker tries to replay a recording of the conversation without understanding it? The shared secret is generated per session. They have the wrong one so the replay would fail very early. A well designed protocol can protect pretty effectively against session hijacks but there are always people out there looking for even the narrowest gaps to exploit.
What are the downsides to encryption? Well, there are several. It takes time. If you are reading from a disk encrypted with BitLocker, each byte read from disk will cost you around 30 additional CPU cycles – and blow your processor cache and pipeline. OK, that is not the end of the world but it is a cost. How about data loss though? Bob has excellent data security. All of his files are stored on a machine protected by TrueCrypt, all of his mail goes via PGP and all of his ZIP files and documents have strong passwords. If Bob is a paragon of virtue then the risk is that he will be hit by a bus and that data will be lost. That could be very serious indeed. Of course, it might be that Bob is not a paragon of virtue, in which case, how would anyone find out?
I recall that the police were not at all happy when BitLocker came out. Several of them at the F3 conference (First Forensic Forum) described it as a paedophile's best friend since it made offline forensics so hard to do. Encryption is a tool and like pretty much all tools, it is morally neutral. It protects good and bad people equally well. Some would argue that those who have nothing to hide need not keep secrets but I am not so sure. If I share my data with (for example) the government because it is not encrypted from them then I am relying on their ability to keep my data as safe as I have or better. Given their past performance on this, I think that I will encrypt it myself, thank you.
Signing off
Mark Long, Digital Looking Glass
Monday, 17 November 2008
Ooh, ooh, ohh, ohh, Staying Alive!
Ah, who can forget the Bee Gees? I try and try. No, there is a point to the title of this blog entry. If you work with computers (a fairly safe assumption if you read this blog) then you will doubtless be familiar with the casual “You know, my computer has been acting weird. Would you mind having a look at it?”. There is a song by Tom Smith called “Doing tech support for Dad” about it. Guess what I did at the weekend? Sometimes I am lucky and the person has some interesting malware. In this case, it was interesting greyware.
Now, is greyware a class of malware? Back at Microsoft, the lawyer-approved phrase was “potentially unwanted software” because it was often software which had been installed after the user agreed to some EULA that said, on page seven, that it might just send details of your web usage to a server somewhere and might show you ads for products of dubious authenticity. The lawyer’s position is that you can’t call it malware if the user agreed to install it.
So, what did we have here? A typical family system running XP Home edition, not too much memory and an older specification with all members of the family being admins on the system. Under the circumstances, the machine was remarkably clean. It was running a free AV product that had picked up that one of the DLLs loaded into every process was dodgy but every time it tried to fix it, it failed.
I spent a good few hours looking at this particular greyware (and for legal reasons, no names will be given here) and it was a resilient little devil. I would like to talk about some of the tactics that it used. However, before I do that, I would like to talk about coding styles in malware.
There are some fairly distinct styles in malware writing. The Script Kiddie and those just up from there typically lash components from different sources together into a crude botch and you can’t tell much about the kiddie. Eastern European black hats are quite workmanlike and the code quality is generally pretty good. They have clearly had formal training. They often borrow ideas off other malware writers, possibly those working for the same stable, but I suspect that they pinch ideas off rival gangs just as often. They keep up with modern trends or set them. They generally write stealthy code with some excellent use of rootkits. Conversely, they do relatively little to hide their infrastructure and looking at the network activity generally takes you to Russia or the Ukraine in fairly short order. That could well represent a difference between the developers and the money men who coordinate gang activities. I am told that military malware from Eastern Europe follows the same patterns but it is better engineered and doesn’t lead as directly back to Eastern Europe. I have only seen a fairly limited range of military malware from the Middle East but the quality was excellent and the stealth features were even better than the Eastern European code. They clearly worked in teams with subject matter experts writing different bits of the code. A lot of money had been spent on those projects. Chinese malware uses a very different approach. It rarely has much stealth capacity. Instead, it overwhelms by sheer weight of numbers. If two variants of the code are good, then ten are better. If one protection mechanism is good, then five are better. I am told by friends who move in places where true names are rarely given and all the players work for organisations known only by 3 letter acronyms that Chinese espionage works in very much the same way. Ten agents watching are better than two.
Anyway, I digress. This greyware proved to be Chinese and I had guessed as much from the approach. The directory where it lived was visible, which made life easy… well, actually, not so much. Any attempt to delete the directory failed with a sharing violation if it was a code component – oh, I may just call any such files “PE files”, which stands for Portable Executable. This covers any sort of file that can be loaded and run as native code. So, something was locking the files. A quick search showed a process that was loaded from the directory that the other known files were from, so I tried to kill it with Task Manager but it wouldn’t die. OK, time for the toolbox to come out. Although Sysinternals is wholly owned by Microsoft, the tools are still free and wonderful. I downloaded them and Process Explorer killed the process just fine. It was offline for less than 5 seconds before it popped up again. A check of the parent process showed it to be an instance of SVCHOST. Right, it was time to look at the services.
There were a couple of services that seemed to be stopped… how could a stopped service be doing this? I downloaded WinDbg and had a look at the service host for that service, and clearly it was not stopped. I am going to look into this technique some more when I have time, but it is clear that the SCM (Service Control Manager) was sending service control messages which the service claimed to be processing, while the status codes that it returned were out and out lies. However, that was not a problem. I could force terminate the containing service. It popped back up again, spawned by another instance of SVCHOST. Ah, OK, I had seen that trick before. Two processes each have a thread that waits on the death of its brother process. If you kill one then the thread unblocks, restarts its brother process and blocks again. The brother does the same. I knew how to deal with that thanks to Mark Russinovich, a very clever and helpful chap who it was my pleasure to meet once or twice. You can suspend all the threads in a process and that doesn’t trigger the brother process – after all, the monitored process is only sleeping, not dead. Suspend the other process as well and you have two frozen malicious processes. I went into the registry, killed the startup for those services and rebooted.
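For the record, the freeze-both-first trick is easy to outline from user mode. The following Python sketch assumes the psutil package and uses made-up process IDs: every thread in both watchdog processes is suspended before either is touched, so neither gets the chance to resurrect its brother.
# Suspend-then-kill sketch for mutually guarding processes.
# Assumes the psutil package; the PIDs are placeholders.
import psutil

watchdog_pids = [1234, 5678]   # the two processes that restart each other
procs = [psutil.Process(pid) for pid in watchdog_pids]

# Freeze both first - a suspended process is only sleeping, not dead,
# so neither watchdog thread unblocks and respawns its brother.
for proc in procs:
    proc.suspend()

# Now neither can react; clean up the registry entries, then terminate them.
for proc in procs:
    proc.kill()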
What the heck? Everything was back as it had been. Some investigation showed that there was a process that “repaired” the installation of the malware on each boot and then terminated. Ok, not a problem. I froze everything and used Autoruns to disable the loading of the process. Reboot – and everything is back as it had been. Resilient little sucker, isn’t it? Some ferreting around showed that this greyware registered as a shell extension and may well have had some shell functionality but the first thing that it tried to do was repair the install. It was at this point that I realised that this was more interesting than average. I started to dig deeper.
COM classes were registered with multiple different class IDs. Whichever you asked for, you got the same VTABLE. Cute. There were multiple self repair mechanisms and hooks into the system which seemed to exist solely to give the greyware a chance to self repair. Nice idea. The one that made me laugh was the protection for non-PE files. Something was waiting on each file in the directory and as the file was deleted, it just copied the file from the complete backup directory that was right there in plain sight. What happened if you tried to kill the backup directory? It was restored from the live copy.
So, the approach was clearly Chinese but the modules were compiled in Visual Studio with US settings. I was able to fish out some function names and other text and the writer clearly had a very good grasp of English. The servers that sourced the ads were in mainland China and some of the reporting went to Taiwan. All in all, this was pretty good work and much more resilient than most. There was no way that an average admin would have been able to remove this software.
In the end, I cleaned the system by booting to a WinPE image and manually cleared out the registry and deleted the directories that contained the greyware. Even the best self defence mechanisms don’t work when they are not loaded.
Had it been a commercial system, it would probably have made more sense to salvage the data and rebuild the box.
Oh, in other news, Arbor Networks say that there have been more and heavier distributed denial of service attacks this year than ever before with a peak intensity 67% above the previous high. The source? That would be Botnets… generally compromised home systems just like the one that I worked on this weekend.
So, until next time…
Singing off
Mark Long, Digital Looking Glass
Now, is greyware a class of malware? Back at Microsoft, the lawyer-approved phrase was “potentially unwanted software” because it was often software that had been installed after the user agreed to some EULA which said, on page seven, that it might just send details of your web usage to a server somewhere and might show you ads for products of dubious authenticity. The lawyers’ position is that you can’t call it malware if the user agreed to install it.
So, what did we have here? A typical family system running XP Home Edition: not too much memory, an older specification, and every member of the family an admin on the machine. Under the circumstances, the machine was remarkably clean. It was running a free AV product that had noticed that one of the DLLs loaded into every process was dodgy, but every time it tried to fix it, it failed.
I spent a good few hours looking at this particular greyware (and for legal reasons, no names will be given here) and it was a resilient little devil. I want to talk about some of the tactics that it used but, before I do, a few words about coding styles in malware.
There are some fairly distinct styles in malware writing. The script kiddie, and those just a step up from there, typically lash components from different sources together into a crude botch, and you can’t tell much about the kiddie from the result.

Eastern European black hats are quite workmanlike and the code quality is generally pretty good; they have clearly had formal training. They often borrow ideas from other malware writers, possibly those working for the same stable, but I suspect that they pinch ideas off rival gangs just as often. They keep up with modern trends or set them. They generally write stealthy code with some excellent use of rootkits. Conversely, they do relatively little to hide their infrastructure, and looking at the network activity generally takes you to Russia or Ukraine in fairly short order. That could well represent a difference between the developers and the money men who coordinate gang activities. I am told that military malware from Eastern Europe follows the same patterns, but it is better engineered and doesn’t lead as directly back to Eastern Europe.

I have only seen a fairly limited range of military malware from the Middle East, but the quality was excellent and the stealth features were even better than the Eastern European code. The authors clearly worked in teams, with subject matter experts writing different parts of the code. A lot of money had been spent on those projects.

Chinese malware uses a very different approach. It rarely has much stealth capability. Instead, it overwhelms by sheer weight of numbers. If two variants of the code are good, then ten are better. If one protection mechanism is good, then five are better. I am told by friends who move in places where true names are rarely given and all the players work for organisations known only by three-letter acronyms that Chinese espionage works in very much the same way: ten agents watching are better than two.
Anyway, I digress. This greyware proved to be Chinese, and I had guessed as much from the approach. The directory where it lived was visible, which made life easy… well, actually, not so much. Any attempt to delete the directory failed with a sharing violation if the file in question was a code component – oh, I may as well just call any such files “PE files”, PE standing for Portable Executable, which covers any sort of file that can be loaded and run as native code. So, something was locking the files. A quick search showed a process loaded from the same directory as the other known files, so I tried to kill it with Task Manager, but it wouldn’t die. OK, time for the toolbox to come out. Although Sysinternals is now wholly owned by Microsoft, the tools are still free and wonderful. I downloaded them and Process Explorer killed the process just fine. It was offline for less than five seconds before it popped up again. A check of the parent process showed it to be an instance of SVCHOST. Right, it was time to look at the services.
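As an aside, if you want to do that parent-process check programmatically rather than eyeballing it in Process Explorer, the Toolhelp snapshot API is all you need. This is just a minimal sketch, not anything to do with the greyware itself – the PID is a placeholder and error handling is kept to the bare minimum:

    /* Minimal sketch: find the parent PID of a given process using the
       Toolhelp snapshot API - roughly what Process Explorer shows you.
       The target PID (1234) is a placeholder. */
    #include <windows.h>
    #include <tlhelp32.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD targetPid = 1234;              /* placeholder PID */
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        if (snap == INVALID_HANDLE_VALUE) return 1;

        PROCESSENTRY32 pe;
        pe.dwSize = sizeof(pe);
        if (Process32First(snap, &pe)) {
            do {
                if (pe.th32ProcessID == targetPid) {
                    printf("%s (PID %lu) was started by PID %lu\n",
                           pe.szExeFile, pe.th32ProcessID, pe.th32ParentProcessID);
                    break;
                }
            } while (Process32Next(snap, &pe));
        }
        CloseHandle(snap);
        return 0;
    }

Of course, seeing SVCHOST as the parent only tells you that a service is involved somewhere, which is why the next stop was the service list.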
There were a couple of services that seemed to be stopped… how could a stopped service be doing this? I downloaded WinDbg and had a look at the service host for that service, and clearly it was not stopped. I am going to look into this technique some more when I have time, but it was clear that the SCM was sending service control messages which the service claimed to be processing – yet the status codes that it returned were out-and-out lies. However, that was not a problem; I could force-terminate the containing service. It popped back up again, spawned by another instance of SVCHOST. Ah, OK, I had seen that trick before. Two processes each have a thread that waits on the death of its brother process. If you kill one, the thread unblocks, restarts its brother process and blocks again. The brother does the same. I knew how to deal with that thanks to Mark Russinovich, a very clever and helpful chap whom it was my pleasure to meet once or twice. You can suspend all the threads in a process, and that doesn’t trigger the brother process – after all, the monitored process is only sleeping, not dead. Suspend the other process too and you have two frozen malicious processes. I went into the registry, killed the startup entries for those services and rebooted.
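For the curious, the freeze trick is straightforward to reproduce with the same Toolhelp API – this is a minimal sketch of the idea rather than a polished tool, and again the PID is a placeholder:

    /* Minimal sketch of the "freeze, don't kill" trick: suspend every
       thread in a process so its watchdog partner never sees it die. */
    #include <windows.h>
    #include <tlhelp32.h>

    void SuspendAllThreads(DWORD pid)
    {
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
        if (snap == INVALID_HANDLE_VALUE) return;

        THREADENTRY32 te;
        te.dwSize = sizeof(te);
        if (Thread32First(snap, &te)) {
            do {
                if (te.th32OwnerProcessID == pid) {
                    HANDLE th = OpenThread(THREAD_SUSPEND_RESUME, FALSE,
                                           te.th32ThreadID);
                    if (th) {
                        SuspendThread(th);  /* thread sleeps; process object stays alive */
                        CloseHandle(th);
                    }
                }
            } while (Thread32Next(snap, &te));
        }
        CloseHandle(snap);
    }

Do that to both brothers and neither watchdog thread ever gets the process-exit signal it is waiting on.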
What the heck? Everything was back as it had been. Some investigation showed that there was a process that “repaired” the installation of the malware on each boot and then terminated. OK, not a problem. I froze everything and used Autoruns to disable the loading of that process. Reboot – and everything was back as it had been. Resilient little sucker, isn’t it? Some ferreting around showed that this greyware registered itself as a shell extension and may well have had some shell functionality, but the first thing that it tried to do was repair the install. It was at this point that I realised that this was more interesting than average, and I started to dig deeper.
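Autoruns covers a huge number of start points – the Run keys, services, shell extensions, Winlogon hooks and so on. Just to illustrate the sort of thing it is reading, here is a minimal sketch that lists the classic per-machine Run key; a shell extension like this one lives elsewhere in the registry, under its CLSID and the various shellex keys:

    /* Minimal sketch: list the per-machine Run key, one of the many
       startup locations that Autoruns checks. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                "Software\\Microsoft\\Windows\\CurrentVersion\\Run",
                0, KEY_READ, &key) != ERROR_SUCCESS)
            return 1;

        char name[256], data[1024];
        DWORD i = 0, nameLen, dataLen, type;
        for (;;) {
            nameLen = sizeof(name);
            dataLen = sizeof(data);
            if (RegEnumValueA(key, i++, name, &nameLen, NULL, &type,
                              (LPBYTE)data, &dataLen) != ERROR_SUCCESS)
                break;
            /* make sure the value data is terminated before printing it */
            data[(dataLen < sizeof(data)) ? dataLen : sizeof(data) - 1] = '\0';
            if (type == REG_SZ || type == REG_EXPAND_SZ)
                printf("%s = %s\n", name, data);
        }
        RegCloseKey(key);
        return 0;
    }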
COM classes were registered with multiple different class IDs; whichever one you asked for, you got the same vtable. Cute. There were multiple self-repair mechanisms, and hooks into the system which seemed to exist solely to give the greyware a chance to self-repair. Nice idea. The one that made me laugh was the protection for the non-PE files. Something was watching each file in the directory and, as soon as a file was deleted, it simply copied it back from a complete backup directory that was sitting right there in plain sight. What happened if you tried to kill the backup directory? It was restored from the live copy.
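That restore-on-delete trick needs nothing exotic – a directory change notification and CopyFile will do it. The sketch below is my guess at the mechanism, not the greyware’s actual code, and the directory names are invented purely for illustration:

    /* Minimal sketch of the self-repair trick described above: watch a
       directory and, whenever a file disappears, copy it straight back
       from a backup directory. Paths are placeholders. */
    #include <windows.h>
    #include <wchar.h>

    int main(void)
    {
        HANDLE dir = CreateFileA("C:\\greyware\\live", FILE_LIST_DIRECTORY,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
        if (dir == INVALID_HANDLE_VALUE) return 1;

        DWORD buf[1024], bytes;
        while (ReadDirectoryChangesW(dir, buf, sizeof(buf), FALSE,
               FILE_NOTIFY_CHANGE_FILE_NAME, &bytes, NULL, NULL)) {
            FILE_NOTIFY_INFORMATION *fni = (FILE_NOTIFY_INFORMATION *)buf;
            for (;;) {
                if (fni->Action == FILE_ACTION_REMOVED) {
                    WCHAR name[MAX_PATH], src[MAX_PATH + 32], dst[MAX_PATH + 32];
                    DWORD chars = fni->FileNameLength / sizeof(WCHAR);
                    if (chars >= MAX_PATH) chars = MAX_PATH - 1;
                    wcsncpy(name, fni->FileName, chars);
                    name[chars] = L'\0';
                    wcscpy(src, L"C:\\greyware\\backup\\"); wcscat(src, name);
                    wcscpy(dst, L"C:\\greyware\\live\\");   wcscat(dst, name);
                    CopyFileW(src, dst, FALSE);   /* put the deleted file back */
                }
                if (fni->NextEntryOffset == 0) break;
                fni = (FILE_NOTIFY_INFORMATION *)((BYTE *)fni + fni->NextEntryOffset);
            }
        }
        CloseHandle(dir);
        return 0;
    }

With the live and backup directories each policing the other, nothing stays deleted until the watchers themselves stop running – which is exactly why booting into an environment where none of this code is loaded is the clean way out.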
So, the approach was clearly Chinese but the modules were compiled in Visual Studio with US settings. I was able to fish out some function names and other text and the writer clearly had a very good grasp of English. The servers that sourced the ads were in mainland China and some of the reporting went to Taiwan. All in all, this was pretty good work and much more resilient than most. There was no way that an average admin would have been able to remove this software.
In the end, I cleaned the system by booting to a WinPE image, manually clearing out the registry entries and deleting the directories that contained the greyware. Even the best self-defence mechanisms don’t work when they are not loaded.
Had it been a commercial system, it would probably have made more sense to salvage the data and rebuild the box.
Oh, in other news, Arbor Networks say that there have been more and heavier distributed denial-of-service attacks this year than ever before, with a peak intensity 67% above the previous high. The source? That would be botnets… generally compromised home systems, just like the one that I worked on this weekend.
So, until next time…
Signing off
Mark Long, Digital Looking Glass