Channel: Hacking Exposed Computer Forensics Blog

Daily Blog #133: Sunday Funday 11/3/13

Hello Reader,
           Another fun week! I got to speak at BSides DFW yesterday, reaching out to our infosec brethren and spreading the good DFIR word. I gave away a write blocker and a book as door prizes, and someone mentioned that a write blocker would be a very tempting Sunday Funday prize, so here we go! This week's challenge focuses on terminal services access and its artifacts.

The Prize:

The Rules:
  1. You must post your answer before Monday 11/4/13 2AM CST (GMT -5)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post

The Challenge:
A shared Windows 2008 R2 terminal server was set up to allow employees to work from home without requiring VPN access. On that server, several files used by a department were suddenly deleted and no one is taking responsibility. What would you do to determine which user deleted the files, under the assumption that they RDP'd in to do so?

Daily Blog #134: Sunday Funday 11/3/13 Winner!

Hello Reader,
           Another Sunday Funday has come and gone with a range of good responses to choose from. Choosing this week's winner was hard, as I had some good submissions that went into depth on different parts of their investigative process. I chose this week's winning answer over the other submissions due to its good details (I'm a sucker for screenshots) and its focus on deletion-related activity. This was a tough choice, and I look forward to making more tough choices as the answers to these contests keep getting better!

The Challenge:
A shared Windows 2008 R2 terminal server was set up to allow employees to work from home without requiring VPN access. On that server, several files used by a department were suddenly deleted and no one is taking responsibility. What would you do to determine which user deleted the files, under the assumption that they RDP'd in to do so?

The Winning Answer:
Darren Windham

1.) First I would review the $Recycle.bin folder for the volume where the files were stored. By default these will be hidden and you will need to change your explorer settings to show hidden/system files.

2.) For my testing I created a file in D:\Sunday Funday\ called deleteme.txt and deleted the file via an RDP session.

3.) Using an admin command prompt I then went to the D:\$RECYCLE.BIN folder and did a dir /s /b and got the following:

4.) Here we can see the %SID% folder, where %SID% is the SID of the user that performed the deletion, and there are two files: the original file has been renamed to $R followed by some random characters. We can also see a similarly named file starting with $I that contains the original directory name, date/time deleted and size, but this file is not plain readable text, as shown below.

5.) Some commercial tools like Encase and FTK can parse these to readable text but you can also use a hex editor and the following file structure
Bytes 0-7: $I file header – always set to 01 followed by seven 00 bytes.
Bytes 8-15: Original file size – stored in hex, in little-endian.
Bytes 16-23: Deleted date/time stamp – a Windows FILETIME, the number of 100-nanosecond intervals since midnight, January 1, 1601 (UTC). Use a program such as DCode to figure out the exact date/time if you don't want to do the math.
Bytes 24-543: Original file path/name.
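The layout above is enough to skip the hex editor entirely. The following is a small illustrative parser (my sketch, not part of the winning answer) that assumes the 544-byte version-1 $I format described here:

```python
# Sketch of a parser for the Vista/Win7 $I record format described above.
# Hypothetical example; field offsets follow the layout listed in the answer.
import struct
from datetime import datetime, timedelta

def parse_dollar_i(data):
    """Parse one $I record (raw bytes) and return its metadata."""
    header, size, filetime = struct.unpack_from("<QQQ", data, 0)
    # FILETIME: 100-nanosecond intervals since 1601-01-01 UTC
    deleted = datetime(1601, 1, 1) + timedelta(microseconds=filetime // 10)
    # Original path is stored as null-terminated UTF-16LE
    path = data[24:].decode("utf-16-le", "ignore").split("\x00")[0]
    return {"header": header, "size": size, "deleted": deleted, "path": path}
```

Feeding it the bytes of a $I file from $Recycle.Bin\%SID%\ should return the original path, size and deletion time in one call.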

6.) Then, to tie the SID to a specific user (in this case, since it is not a domain server), we can look at the registry key HKLM\Software\Microsoft\Windows NT\CurrentVersion\ProfileList to see which user account has the -1001 SID (HECF)

7.) You can also review the Windows Security event log for event ID 4624 (successful logon) around the time in question and look for a successful login from HECF.
I do need to give credit to Derek Newton and this blog post of his, which had some great info on the newer (post-INFO2) recycle bin: http://dereknewton.com/2010/06/recycle-bin-forensics-in-windows-7-and-vista/

End of Winning Answer
There you have it, another Sunday Funday has ended, with good information learned not only from the winning answer but from public submissions as well, like the great one from Harlan Carvey.

Daily Blog #135: Converting MHT to PDF

Hello Reader,
             I've spent most of the day today trying to get data into a format that someone else can review. I'm using a tool called X1 Social Discovery, which allows you to download, index and manage social media information from a variety of sources (Twitter, Facebook, LinkedIn, etc...), but its ability to export the data in a form my client actually wants to review, while still containing all the data he needs, appears to be lacking. In the end, the best view of the data I could find was to create separate MHT files for each of the social media updates I captured. Herein lay today's frustration.

After exporting many an MHT file, I hoped I would be able to convert them with Adobe Acrobat, but it defaulted to opening the documents in Microsoft Word, which did not render the MHT correctly. I attempted to combine the documents, but MHT is not a supported file type for combining within Acrobat. I tried some programs that claimed to convert MHT to PDF, only to find they didn't embed the images within the MHT pages, just the HTML. I attempted to change the print-to association of the file type, only to find that Opera (the browser that rendered them best) does not have a print command-line option.

After many hours of Google searching and the lament of similar people trying to accomplish the same task I found a blog that talked about a tool called VeryPDF HTML Converter. I grabbed a copy and sure enough, it worked! So now I will be able to deliver consolidated PDFs over numerous MHT files for review and go home.

I wanted to post this blog article, even though it's outside the range of what I normally blog about, in the hopes that if you someday get stuck in this situation you will find it and be able to solve your problem like I did. VeryPDF HTML Converter has a trial that allows 300 conversions, and a license for the GUI only costs $59.

I hope this saves someone's day.

Daily Blog #136: Using Win 2008 server task scheduler logs to identify interactive logins

Hello Reader,
            In prior Sunday Fundays we've talked about tracking logins to a Windows Server 2008 system, and in each case I saw the normal Security event log entries referenced. Today I wanted to expand on that knowledge with something I found in a case a couple of years ago and mentioned in a Sunday Funday answer post: tracking logins with Task Scheduler logs.

You can find the Task Scheduler log in the event viewer GUI under:
Application and Services Logs -> Microsoft -> Windows -> TaskScheduler -> Operational

and on the disk under:
%SystemRoot%\System32\Winevt\Logs\Microsoft-Windows-TaskScheduler%4Operational.evtx

The log is separate from the other Windows event logs, which is very helpful as it prolongs the lifespan of the entries compared to prior versions of Windows Server. Inside the event log you'll find an entry for each execution of a scheduled task, which is useful in its own right if you are looking for bad actions from malware or a user. In addition you'll find EventID 119, 'Task triggered on logon'. The reason this event entry is so useful is that, by default, there are tasks that get executed on every interactive logon, so these entries should show up for every interactive session without any additional configuration or security auditing needed.

The event entries look like this:

While the timestamp of 2:27:58 PM is not going to be the exact second the user authenticated, it is triggered soon after.

I use these logs to get further visibility into who has logged in than the Security event logs normally offer, but they don't reveal the IP address of the user.
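If you want to triage these entries at scale rather than scroll the Event Viewer, one approach is to export the log to XML (for example with `wevtutil qe Microsoft-Windows-TaskScheduler/Operational /f:xml`) and filter it. A sketch under that assumption; the `UserContext` field name is my assumption, so verify it against your own log output:

```python
# Sketch: pull EventID 119 ('Task triggered on logon') records out of an
# XML export of the TaskScheduler operational log. Assumes the standard
# Windows event schema; the "UserContext" Data name is an assumption.
import xml.etree.ElementTree as ET

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def logon_triggers(xml_text):
    """Yield (timestamp, user) for each EventID 119 record."""
    # wevtutil emits bare <Event> elements, so wrap them in one root
    root = ET.fromstring("<Events>" + xml_text + "</Events>")
    for ev in root.findall("e:Event", NS):
        if ev.findtext("e:System/e:EventID", namespaces=NS) != "119":
            continue
        when = ev.find("e:System/e:TimeCreated", NS).get("SystemTime")
        user = ev.findtext("e:EventData/e:Data[@Name='UserContext']",
                           namespaces=NS)
        yield when, user
```

Each yielded pair marks one interactive session start, which you can then line up against the Security log's 4624 events.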

Daily Blog #137: Finding new artifacts - Re-creation Testing Part 1

Hello Reader,
          One of the things that, in my opinion, makes an examiner better at digital forensics is the ability to re-create events, create test scenarios and possibly find new artifacts. The best way to do that is through recreation testing, and it's something we do in the lab quite often. The premise is simple, and there are some things you can do ahead of time to make your life easier.

Step 1. Determine which operating systems you have in your environment to test. 


In my lab we could receive anything, so we have fresh-install virtual machines from Windows 95 through Windows 8 and all the server variants. For this kind of work an MSDN license is very, very helpful, but some of the older operating systems are no longer on MSDN, so we turned to eBay to fill the gaps in install media. We also have some virtual machines for OSX/Linux, but they are not used as frequently as they make up a smaller percentage of casework.

If you work within a company, your first step is to determine what operating system versions you use internally and then get to work putting them into a base state matching your standard corporate image. If your company has a 'gold image' or a production image that your IT personnel deploy to new equipment, that's even better to use as your test bed.


Step 2. Determine which virtual drive standard you will use

You have a lot of options these days in virtual drive image files. VMDK, VDI, VHD and other formats are all out there and usable. For our testing I have a preference for VHD and I'll explain why.

1. VHD has cross-platform support without the need to install the virtual machine software that created it. FTK Imager in Windows supports VHD, Windows 7 and up can mount VHDs as local physical disks, and in Linux/OSX there is Joachim Metz's libvhdi http://www.forensicswiki.org/wiki/Libvhdi amongst other tools.

2. VHD allows for file systems to be mounted read only natively in Windows/Mac/Linux without additional software.

3. Cross support from multiple virtual machine vendors (Hyper-V, Virtual PC, VMware, VirtualBox)

Now you can pick any virtual drive format you'd like but those are the reasons I chose VHD.

Step 3. Determine which virtual machine software you will use


For many people this answer defaults to VMware. I like VMware, but for my day-to-day VM creation I have migrated to VirtualBox. VirtualBox is free, it supports a wide range of operating systems and it has pretty low overhead. Whatever virtual machine software and drive format you choose is really up to your preference; they all should generate the exact same test data. The important thing is to standardize so that you can easily pick up your work or switch operating systems with little effort.

Step 4. Determine how you will compare changes within the system over time


This may seem obvious to many of you who work with virtual machines regularly and are thinking about snapshots, but there is just one problem with that. When you create a snapshot, all the changes to the disk are stored in snapshot overlay files and not the underlying disk image. Your forensic tools do not support snapshot overlays (I don't know of one that does, at least), and your base image won't get updated, leading you to conclude that nothing has changed.

Instead you'll need to follow one of three scenarios:

Scenario 1 - Capture the full drive
After each change, image the drive to compress the data, or just copy the virtual drive to a separate folder after suspending the system. This will give you all the data that has changed on the disk but leaves you with the unfortunate task of trying to diff whole images.

Scenario 2 - Capture artifacts of interest
If you are interested in changes occurring to artifacts you know about, it may be enough to just extract the artifacts after each change. However, when you do this, make sure to capture them two ways: once from the running system, to capture any settings that may get purged on shutdown, and again after shutdown, to get all the keys/files that may not be accessible on the file system until TxF and TxR are committed.

Scenario 3 - Run a system comparison tool
For trying to identify possible locations of interest, this is where I typically start. My comparison tool of choice is called SysTracer Pro from blueproject.ro, http://blueproject.ro/systracer/download. I like this tool because it will quickly capture the state of all registry hives and files within the virtual machine and allow you to compare any two snapshots with full details of what changed. The 'pro' version even has a remote service, so you can collect snapshots from the running system without adding more artifacts by executing anything within the user profile.
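If you want a lightweight, scriptable version of Scenario 3, a hash-based snapshot diff covers the file-system half of what a tool like SysTracer captures. A minimal sketch of the idea (registry comparison would need separate handling):

```python
# Minimal file-system snapshot comparison: hash everything under a root
# before and after a test, then diff the two snapshots. A sketch only;
# tools like SysTracer also capture registry state, which this does not.
import hashlib
import os

def snapshot(root):
    """Map each file's relative path to a SHA-256 of its contents."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # skip locked/unreadable files
            state[os.path.relpath(path, root)] = digest
    return state

def diff_snapshots(before, after):
    """Return (added, removed, changed) paths between two snapshots."""
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    changed = sorted(p for p in before.keys() & after.keys()
                     if before[p] != after[p])
    return added, removed, changed
```

Run `snapshot()` against the mounted test volume before and after each test step, and the diff tells you exactly which files your action touched.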

That's all for today, I'll continue this with my methodology next week. Tomorrow is the Forensic Lunch at a special time of 2pm Central so make sure to tune in!

Daily Blog #138: Forensic Lunch 11/8/13

Hello Reader,
         We had a great Forensic Lunch today! Sheryl Falk of Winston & Strawn joined us to talk about dealing with the legal side of breaches, Jonathan Rajewski of Champlain College talked to us about their undergraduate and graduate programs, and Matthew showed us a demo of the upcoming ANJP v3 and all the cool stuff that awaits you in it. Please give it a watch and let us know what you think! If you want to be on the lunch, just shoot me an email at dcowen@g-cpartners.com; I'd love to have you on.

Sheryl Falk sfalk@winston.com
Jonathan Rajewski jtrajewski@champlain.edu


Daily Blog #139: Saturday Reading 11/9/13

Hello Reader,
        It's Saturday! Time for another collection of links to make you think. I will most likely go see Thor today, but after that I will be back to my forensicy ways. Here is this week's set of good reads.

1. The Forensic Lunch, always free of trans fats, happened again this Friday. This week Sheryl Falk of Winston & Strawn, Jonathan Rajewski of Champlain College, Matt and I got down to business discussing the legal side of breaches, undergraduate and graduate degrees in computer forensics, and big changes coming in the v3 beta of ANJP, aka the Triforce. Make sure to give it a watch! http://www.youtube.com/watch?v=GuUEyZw3hRo&list=PLzO8L5QHW0ME1xEyDBEAjmN_Ew30ewrgX&index=16

2. Over on the Magnet Forensics blog, Jad has a nice writeup on his journey into the dark web and his research into recovering bitcoin artifacts, http://www.magnetforensics.com/bitcoin-forensics-a-journey-into-the-dark-web/. You may not currently have a case relating to Tor or bitcoin, but it's not going away, so be prepared.

3. If you are interested in Plaso, the Python-based log2timeline replacement, or in Python DFIR development in general, go over to Kristinn's blog and watch his video/follow his links to see what you missed at the Open Source Digital Forensics Conference: http://blog.kiddaland.net/2013/11/osdf-conference-links.html.

4. If you watched last week's Forensic Lunch, then you'll know one of the things we lamented was the lack of an open source tool that would recover deleted data from SQLite databases. Well, Mari DeGrazia has stepped up to the challenge and posted a blog entry with her tool, http://az4n6.blogspot.com/2013/11/python-parser-to-recover-deleted-sqlite.html. Well done, Mari! Maybe you can come on the lunch and talk about it?

5. Lee Reiber has a good post up going through how to analyze WhatsApp, a very popular mobile messaging program: http://blog.mobileforensicsinc.com/whats-up-with-whatsapp/. Lee works at AccessData and does a lot of work with MPE+ and their mobile training program there.

6. Over on the Hexacorn blog there is a good introduction to the forensic implications of Microsoft SCCM: http://www.hexacorn.com/blog/2013/11/01/sccm-system-center-configuration-manager-and-incident-response/. I'm a big fan of making use of SCCM, and I plan to write some blogs on how to mine it for last logins and machine discovery.

7. Harlan has a new post up on his experience at OSDFC and OMFW, http://windowsir.blogspot.com/2013/11/conferences.html. I have a different perspective than Harlan in regards to getting DFIR discussions going at conferences, but that may be because of our choice of topics. That, or I just keep talking about forensics until someone joins in.

That's all for this week, come back tomorrow for another Sunday Funday!

Daily Blog #140: Sunday Funday 11/10/13

Hello Reader,
         We've been talking about timestamp changes and other methods of hiding activity this week, so I thought I would add in a challenge that covers a more basic anti-forensic technique. I hope you like this week's scenario challenge, and prepare yourself for another full image challenge next week.

The Prize:
  • A 4TB External USB3 Seagate Backup Plus Hard Disk

The Rules:
  1. You must post your answer before Monday 11/11/13 2AM CST (GMT -5)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post

The Challenge:
 You have a forensic image of a Windows Server 2008 R2 system on which the former administrator installed LogMeIn. You have been told that your client suspects the former administrator has logged in remotely and shut down database services, preventing the company's webstore from functioning. When you go to review the LogMeIn logs, you notice they have been deleted from the system.

Where can you look on the system, other than the free space of the disk, for the logs to determine when the ex-administrator used LogMeIn to access the system?

Daily Blog #141: Sunday Funday 11/10/13 Winner!

Hello Reader,
       This was a popular contest; I got a lot of submissions, and the majority of you clued into the event logs that LogMeIn generates in the Application event log! This is what I was hoping you would research and find, but this week's winner went beyond the Application event log and found some interesting artifacts that I wasn't aware of! So this week we salute Doug Collins, who earned his prize.

The Challenge:
You have a forensic image of a Windows Server 2008 R2 system on which the former administrator installed LogMeIn. You have been told that your client suspects the former administrator has logged in remotely and shut down database services, preventing the company's webstore from functioning. When you go to review the LogMeIn logs, you notice they have been deleted from the system.

Where can you look on the system, other than the free space of the disk, for the logs to determine when the ex-administrator used LogMeIn to access the system?

The Winning Answer!:
Doug Collins
The first place to look is the low-hanging fruit. First, I'm going to look in $Recycle.Bin for the deleted files, then I'll move on to the volume shadows. The ex-admin may have deleted the log files, but it is unlikely that he cleared out the shadow copies of that folder. I will mount the image of the drive using EnCase PDE and use the vssadmin tool to find the shadow files of interest. Those files will be linked to my forensic box and parsed for the last instance of the log file.

C:\ProgramData\LogMeIn\LogMeIn.log.

If that fails, the next stop will be the event logs. LogMeIn creates a number of events upon each login attempt or success.
  •  Application Log
    • EventID 102 – Source LogMeIn
      • User WIN-XXX\dc has successfully logged on from IP address AA.BB.CC.DD Secure (SSL) Connection: Yes
    • EventID 202 – Source LogMeIn
      • Remote Control session started for user WIN-XXX\dc from IP address AA.BB.CC.DD. The interactive user (if present) has not been asked for a confirmation.
    • EventID 9011 – Source Desktop Window Manager
      • The Desktop Window Manager was unable to start because a mirroring driver is in use.
  • Security Log
    • EventID 4648 – Source Microsoft Windows security auditing
      • A logon was attempted using explicit credentials.
      • Process Name: C:\Program Files (x86)\LogMeIn\x64\LogMeIn.exe
    • EventID 4624 – Source Microsoft Windows security auditing
      • An account was successfully logged on.
      • Process Name: C:\Program Files (x86)\LogMeIn\x64\LogMeIn.exe
  • PrintService Log
    • EventID 823 – Source Microsoft-Windows-PrintService/Admin
      • The default printer was changed to Samsung ML-3050 via LogMeIn,winspool,Ne02
The SOFTWARE hive in the registry will reveal some information, related to the last time that the LogMeIn software was used. It won’t be a detailed list of logins, but it will match back to the last login made with the software. Specifically, the HKLM\SOFTWARE\LogMeIn\V5\Net\NATUDP gets updated when connections are made.

The last place I would look, before carving for deleted log files, would be the prefetch files. They won't give me as detailed a list of logins as the other places, but I will be able to determine the last login attempt. It looks like at least two prefetch files are created.

  • C:\Windows\Prefetch\LMIGUARDIANSVC.EXE-ABCD1234.pf
  • C:\Windows\Prefetch\LOGMEIN.EXE-ABCD1234.pf

However, it is unlikely these will be on this system, as Superfetch is disabled on Windows Server 2008 by default.

Daily Blog #142: Finding new artifacts - Re-creation testing part 2 Isolation and Uniqueness

Hello Reader,
          As I write this I'm on a flight to PFIC, where I will be speaking on our further research into file system forensics. PFIC is a fun conference, as it's big enough to get a critical mass of people but small enough to allow for easy conversation. I'm looking forward to doing some demos and talking tech in the upcoming week. If you are at PFIC, please don't hesitate to come up and say hi; it's always nice to know that the view count I watch to determine if anyone is reading is more than just web crawlers :)

Today I wanted to continue the finding-new-artifacts series and get more into what we do. This is not the only way to do things, but it's a set method that has been successful in my lab and led to most of the research you've read on this blog and in the books. I'm currently typing on my Surface, so this won't be the longest of posts, but I wanted to cover the concepts of Isolation and Uniqueness today.

Isolation 

When I say isolation here I don't mean process isolation, air gapping or any other standard method. I mean trying to isolate, as much as possible, what you're testing versus what the operating system is generating in the background. When we first started our file system journaling research, we did so on the main system disk within our virtual machine. Doing this led to mass confusion because we couldn't determine where in the unknown data structure we were trying to decode our changes were located, versus what the underlying system was changing in its background actions.

We solved this issue by creating a separate disk and partition where the only actions taken against it were our own and the file system drivers'. Once we knew that all the changes in the data structure reflected our changes, it was much easier to find patterns and timestamps.

I've since taken this method of isolation and applied it whenever possible, always trying to move whatever programs/files/methods I'm testing to a non-system disk not shared with any other test that I'm doing. When I do this, I find my results are more reliable and they come quicker as well. I know reading this it may seem obvious, but you really never understand just how much activity is going on in the background by the operating system until you try to go through every change looking for your test results.

Uniqueness

The concept of uniqueness applies to what you name the things you test with. The idea is that every directory, file, program and DLL you create/call/reference should have a name unique enough that if you search for it you won't find any false positives. If you are going to run multiple tests in sequence, it's equally important for those test runs to be identifiable as to which test they are part of. For instance, let's say you are testing a system cleaner (CCleaner, for instance) to determine what it does when it wipes a file. You would want to create a test plan where you document:
  • Each of the combination of options you are going to try
  • The operating system version and service pack you are testing
  • Which file system you are testing
  • What version of the program you are testing
  • The name of the file and directory you wiped
    • An example being UniqueFileToBeWipedTest1
  • The time to the second when you executed the test
  • The time to the second when the processes ended
With these facts at hand you can easily isolate the changes you made in those time ranges from other tests and know which files are being affected by your testing. The worst thing that you can do is not document your testing well, making your results unverifiable by another examiner and forcing you to spend all the time needed to recreate your work.
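One low-tech way to enforce both uniqueness and documentation at once is to generate the marker name and the test-plan record from the same function. A sketch; the field names and the `UniqueFileToBeWiped` prefix are my own conventions, so adapt them to your lab's template:

```python
# Sketch: generate a unique marker name plus a matching test-plan record
# so every artifact a recreation test creates can be found, and dated,
# later. Field names are illustrative, not a standard.
import datetime

def start_test(tool, version, os_name, filesystem, options, test_id):
    """Return a test-plan record with a search-unique marker name."""
    started = datetime.datetime.now()
    # e.g. UniqueFileToBeWipedTest1_20131112143000 - unlikely to false-positive
    marker = "UniqueFileToBeWiped%s_%s" % (test_id,
                                           started.strftime("%Y%m%d%H%M%S"))
    return {
        "tool": tool,
        "version": version,
        "os": os_name,
        "filesystem": filesystem,
        "options": options,
        "marker": marker,
        "started": started.isoformat(),
    }
```

Serializing each record to your test log gives another examiner everything needed to verify or recreate the run, and searching for the marker string returns only your own artifacts.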

That's all for today. I want to continue this topic this week, going into what we do to test, how we pick our tests and the tools we use to isolate results.

Daily Blog #143: PFIC Day 1 Morning Sessions

Hello Reader,
          I'm attending PFIC and trying to be a good attendee, sitting in on all the sessions when I'm not running out for a call. I thought I would pass on my notes on these sessions and then post the slides they are associated with for those of you who couldn't make it.

8am Session - Amber talking about trends in mobile forensics

Shows real data from kids' cell phones; it's a form of punishment
Windows phone acquisition is limited to local device data, cloud storage is currently out of reach
To acquire a windows phone you need to install an app from the marketplace

9am session - James Wiebe - remote forensic acquisition

A review of what we now know about NSA capabilities via Snowden
Beyond the front end server most providers pass decrypted traffic between their nodes
Apologizes that this is a talk focused on a product CRU is selling but wants to try to educate beyond it
I think Eric Zimmerman needs to get a Ditto and test it to see if their speed claims match up to what he's seen
Ditto is an embedded Linux system, but they don't use the NTFS-3G FUSE driver and thus avoid the performance penalty we saw in our testing
Optional battery allows it to run for 7 hours of imaging, that's cool
They've implemented Lightgrep into their embedded device and are using it for carving, I would assume they are using it for searching as well. Remote live triage is the goal.
Currently on sale for $1,649 from forensiccomputers.com, not a low cost option

Just a note here: the Surface is my go-to device for conference notes now.

10:30am session "eDiscovery Overview for Forensic Examiners"

Data mining and mapping against email to find patterns or criteria to find interesting/relevant data.
Case law shown about various expert rulings in how experts were used
Review of challenges in defending ediscovery searches
Review of challenges in attacking ediscovery searches
I had to leave the session at this point to take a client call. 

11:30am session Google Glass Forensics

Start with glass v1
Review of what google glass is/does
Review of the hardware and specifications
Showed glass v2
walking through future glass apps and forensic data implications
Introduction of 'shattered' an open source forensic project from champlain
Current version scrapes user accessible data, next version will root the device for physical images and more data
showing how images are saved and timestamped
photos have exif
two thumbnails are also generated, filename meaning is unknown
adds an entry to usagestats
shattered script file 'logcat.txt' also shows a picture was taken, the timestamp of the log should match the name of the image taken and the exif data
Calls are also logged to these logs
map requests and cached direction information are stored
bluetooth logging includes mac addresses of devices connected to
wifi logging of access points and mac addresses in range
each glass activation and method of activation is logged
example images will be posted soon to allow testing and research

more to come as I sit through the 2:30pm session!

Daily Blog #144: PFIC Day 1 Afternoon Sessions

Hello Reader,
          I took more notes yesterday, and I'm taking more this morning. I'm posting these in the hopes that you'll use them alongside the slides once they are posted, so you can get the information presented here that isn't in the slides.

1:30pm Session Social Media Insights

This session was presented by a woman whose company provides PI services, specifically online research about people and companies.
Websites and techniques for social media investigations
Finding links and images through duckduckgo
Finding discussion posts on omgili
Finding classified ads with searchtempest
Using ixquick to search multiple engines and find hits in the 'private web' with a meta search
I left for a conference call at this point; if you need to do online/social/web investigations, this presentation does give some good links.

2:30pm Session Augmented Reality Forensics

This wasn't a forensic session per se; it was more of a futurist looking at the state of upcoming and emerging technology and what that may mean for us in the DFIR field. Still an interesting talk from a good presenter.
AR isn't perfect yet
will make a new range of forensic tools and forensic possibilities
The internet of things is coming and with it IoT forensics

4:00pm Session Chip Off or Jtag it

This was an interesting session mainly because of Zeke's personality but the tech content was a bit light for a conference that also had a hands on chip off lab taking place a few doors down.
 
New Zealand accents make presentations more interesting
Good jokes so far, hoping the content is as good
Overview of crimes committed and the evidence that could be found on mobile devices
Terrorists are now shooting their phones before being caught, apparently you should target the number 7 key to kill the sim and possibly the nvram chip.
Starting his review of forensics with Edmond Locard 'every contact leaves a trace'
Now comparing computer vs. cell phone forensics; I'm going to be patient, but I'm wondering if he is misjudging his audience.
A little vendor bashing on how they market their logical/physical analysis, always appreciated
'Forensic Explorer' (an evolution of 'Recover My Files') is around $1,000 USD, and he has had good success with it carving Android unallocated space. Not sure how that compares to any other carving tool against the same data.
Flasher boxes are hacker boxes that break into devices? I don't think I agree with the analogy, but I understand the meaning.
Discussion on if these procedures from a flasher box, jtag, chip off, and even vendor software tools are forensically sound since many are modifying the original evidence in order to extract the data
The process is what makes something forensically sound, not the tool; I agree.
He is now going over photographing a phone as a first step before cracking it open.
Now getting into something interesting, a survey of flasher boxes
The comedy here is winning the audience, enjoying this
Now discussing chip off, and discussing heat versus infrared for removing chips
Why would we go to chip off, because the phone isn't supported by any automated forensic software tool
Some phones, especially the off-market clone phones (a fake BlackBerry in this example), may appear normal but will actually be encrypted or have multiple subsystems, making normal chip off unhelpful or pointless
Next example is a phone that looks like a remote car key fob
moving on to physically damaged phones
The speaker seems to think that Jonathan Rajewski and I are part of a Utah based cell phone forensics lab.  We can't bear to tell him we are not so we are going along with it.
Regardless of content come see Zeke just for the jokes
Sometimes repair is all you need to do instead of a chip off
Now showing generic best practice guidelines for the UK and USA
Discussing how flasher boxes and other types of phone modification tools don't produce forensic hashes, as they were not made for this work; he suggested putting the evidence in a forensic image afterwards to allow for verification after the fact.
Discussion of dealing with binary dumps from flashers/jtag/chip off dumps. Common methods are just feeding it to cell phone tools that will carve for known cell patterns.
this presentation is now getting a bit trippy with perspective art/illusions
Breaking down binary patterns as a method for determining data structures
discussion on bypassing android lock phones
if USB debugging is turned on then your standard tools can get access to the file system
if not moving on to chip off and the destructive process
Moving on to JTAG and showing the 'riff' box which supports multiple pin outs
Youtube is the database for learning how to take phones apart and find jtags
Showing how the raw dump of the jtag output is a large hex dump, showing putting it into forensic explorer again
discussing using rainbow tables of possible sha1 gesture keys to determine which pattern locked the phone
Get the pattern lock and then pull the data off the phone using an automated solution to pull the intact file system for you is his recommendation.
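The gesture rainbow-table approach above works because gesture.key on older (4.x and earlier) Android builds is just the unsalted SHA-1 of the pattern's grid indices. Here is a minimal sketch assuming that storage format; the function names are mine, and real patterns also follow adjacency rules that this brute force simply ignores, which only costs a few extra candidates:

```python
import hashlib
from itertools import permutations

def gesture_hash(pattern):
    # gesture.key on Android <= 4.x is the unsalted SHA-1 of the raw
    # byte sequence of grid positions (0-8) in the pattern
    return hashlib.sha1(bytes(pattern)).digest()

def crack_gesture(target_hash, min_len=4, max_len=9):
    # The keyspace is so small that straight brute force works even
    # without a precomputed rainbow table
    for length in range(min_len, max_len + 1):
        for candidate in permutations(range(9), length):
            if gesture_hash(candidate) == target_hash:
                return candidate
    return None
```

With the pattern recovered, his recommendation of pulling the intact file system with an automated tool then becomes possible.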

That was day 1, that evening was casino night which is a lot of fun. One of the best parts of PFIC is that it isn't a huge conference and at things like casino night you have a couple hours of fun to mingle with your peers and make new friends.

PFIC 2013 Slides and labs

Hello Reader,
       If you attended my session at PFIC, these are the slides I used today:
https://drive.google.com/file/d/0B_mjsPB8uKOAYWtsVHdJTmFlQXc/edit?usp=sharing

This is the link to download the labs I ran through:
https://drive.google.com/file/d/0B_mjsPB8uKOANXBSMDVTZExULTg/edit?usp=sharing

This is a link to sign up for the NTFS Parser:
https://docs.google.com/forms/d/1GzOMe-QHtB12ZnI4ZTjLA06DJP6ZScXngO42ZDGIpR0/viewform

This is a link to sign up for the HFS+ Parser:
https://docs.google.com/forms/d/1_Zrf7LfmnklJfJ7CteecdAiAWGdRkNp2ltqqHuYFncQ/viewform

Thanks for coming to my session, I hope it was useful!

Daily Blog #145: Forensic Lunch 11/15/13

Hello Reader,
     We had another Forensic Lunch this week, this time with Kristinn Gudjonsson talking about the recent developments with Plaso (the log2timeline replacement), the visualization module they made, and how to submit your own plugins. We also had Ryan Benson on to talk about his tool Hindsight, which currently works with Chrome internet history artifacts, and his research into the differences between Chrome versions 1-30. Lastly, Matthew and I were on as usual talking about recent changes to ANJP and my experience at PFIC. I hope you can make time to tune in live next week at noon central when Mari DeGrazia will be joining us! Want to come on the lunch? Email me dcowen@g-cpartners.com


Daily Blog #146: Saturday Reading 11/16/13

Hello Reader,
          It's Saturday and here in Utah it's snowing, snowing, snowing! It's time for more links to make you think on our weekly reading list. So get some coffee and get comfortable, because we've got some good reads this week.

1. This week's forensic lunch was pretty great; not only did we have a snowy background from my hotel window, but Kristinn Gudjonsson and Ryan Benson joined us. Kristinn gave us an amazing demo of the new visualization module for Plaso and Ryan walked us through his Google Chrome internet browser research. You can watch it here, http://www.youtube.com/watch?v=frbHxkl0PKU, and if you know of an easy way to turn these videos into podcasts please let me know in the comments!

2. I'm always excited when I see new content from appleexaminer.com, this entry was no exception. Ryan Kubasiak has put together a great read on OSX's default file system partition structure, formatting options and file systems supported for creation. Give it a read! http://www.appleexaminer.com/MacsAndOS/Img_Pwds/DLCS/DLCS.html

3. Interested in Bitcoin forensics? Jad over at Magnet Forensics has posted a part 2 to his article showing more artifacts relating to Bitcoin usage. This blog focuses on Bitcoin-QT a popular bitcoin client and how to find the associated artifacts, http://www.magnetforensics.com/bitcoin-forensics-part-ii-the-secret-web-strikes-back/.

4. I linked to it in the Forensic Lunch youtube description and we had a demo of it during the Forensic Lunch, but I'm going to again include a link to Kristinn's blog here to emphasize that you need to look at this visualization module they've made for Plaso http://blog.kiddaland.net/2013/11/visualize-output.html.

5. Harlan has a new post up covering tools he's interested in and more conference feedback from OSDF. It's a good read though I don't have much to add to the conversation there having not gone to OSDF. http://windowsir.blogspot.com/2013/11/tools-malware-and-more-conference.html

6. Chad Tilbury has a blog up on the Malware Analysis Quant Research Project. If you are interested in malware research you should go give it a read, http://forensicmethods.com/malware-analysis-quant-project. The post serves as a good summary of what the project is and why it could be useful to you, with a link to the project itself.

7.  X-Ways has launched their own certification program called X-PERT: a time-limited, open-book test where you have 3 hours to solve the questions asked of you from images provided. With a passing score of 80 you can become a certified X-PERT in X-Ways. I've been told the test is quite tough, so read and prepare yourself before signing up! http://xwaysforensics.wordpress.com/2013/11/11/x-pert-certification-program/

8. Claus Valca has a pretty great post up describing what is in his IR triage drive, http://grandstreamdreams.blogspot.com/2013/11/anti-malware-response-go-kit.html. While he lists some great tools here, the most interesting thing to me was the idea of getting a USB key with a physical write switch to prevent malware from infecting his USB key. This is a great idea!

That's all for this week! Make sure to put some time on your calendar for this weeks Sunday Funday!

Daily Blog #147: Sunday Funday 11/17/13

Hello Reader,
        It's Sunday Funday time! After a great week in Utah I'm ready to head back to the office and get back into my casework. This week let's get back to basics with a CD burning challenge. Believe it or not, people are still burning CDs and DVDs of data, so it's still worth knowing!

The Prize:
  • A $200 Amazon Gift Card

The Rules:
  1. You must post your answer before Monday 11/18/13 2AM CST (GMT -5)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com
  7. In order for an anonymous winner to receive a prize they must give their name to me, but i will not release it in a blog post

The Challenge:
     Your client has given you three CD-ROMs that contain their trade secrets. They want you to determine as much information as possible about the CDs:
1. Which system burned them
2. What software created the CDs
3. When they were burned
4. If there were other CDs burned
5. Which user burned the CDs

The client is a small company with 5 systems of which you've been given access to all of them. Each of the 5 systems runs Windows 7.

Daily Blog #148: Sunday Funday 11/17/13 Winner!

Hello Reader,
        Another Sunday Funday come and gone, and a new victor arises to claim the prize! This week I put out a challenge that I know we've covered in different aspects here on the blog, CD burning artifacts, to see what you would come back with. While some of the responses covered what we've talked about here, some of you went beyond and found additional artifacts! This week the 'earliest most complete submission wins' rule came into effect. The winning answer, from Martijn Veken, was received at 8:34am central time, beating the other great submissions by hours.

The Challenge:
     Your client has given you three CD-ROMs that contain their trade secrets. They want you to determine as much information as possible about the CDs:
1. Which system burned them
2. What software created the CDs
3. When they were burned
4. If there were other CDs burned
5. Which user burned the CDs

The client is a small company with 5 systems of which you've been given access to all of them. Each of the 5 systems runs Windows 7.

The Winning Answer:


Martijn Veken



1. Which system burned them
If you figured out at what time the CD’s were burned (see answer 3), check the system eventlog for event id 133, this indicates that files were burned to CD using Windows Explorer. If so, in the registry under key ”HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\CD Burning\StagingInfo”, there are keys indicating where files were staged before they were burned to the CD. You can use file system forensics to investigate what was in the folder to try to match them to the disc. You can also check the timestamp of the registry key to see at what time it was written to search more specifically.

If another tool was used, there are clues on which tool this was on the CD (see step 2). Look for indications in the prefetch, RunMRU and user assist to see if the tool has run on the system. If the tool is or used to be present, look for the temp folders or log files it produces to see if you can match it to the CD.

2. What software created the CDs
If it’s ISO9660, usually the name of the application that has created the CD is in the session start section, somewhere just after 0x8000.

3. When they were burned
If it’s ISO9660, there are a couple of timestamps indicating the time of burn in the session start section. If you have figured out on which system the discs were created, check eventlog to see if there are any events (event id 1) that indicate that the system time was changed prior to burning the disc.
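As a quick illustration of the fields Martijn points at in answers 2 and 3, here is a minimal sketch of pulling the application identifier and burn timestamps out of an ISO9660 Primary Volume Descriptor. Offsets follow ECMA-119; the helper names are mine:

```python
import struct

PVD_OFFSET = 16 * 2048  # the Primary Volume Descriptor lives at sector 16 (0x8000)

def parse_pvd(pvd: bytes) -> dict:
    # pvd is the 2048-byte descriptor read at PVD_OFFSET in the disc image
    if pvd[0] != 1 or pvd[1:6] != b"CD001":
        raise ValueError("not an ISO9660 Primary Volume Descriptor")

    def text(lo, hi):
        return pvd[lo:hi].decode("ascii", "replace").strip()

    def dec_datetime(lo):
        raw = pvd[lo:lo + 16]  # "YYYYMMDDHHMMSScc" as ASCII digits
        if raw in (b"0" * 16, b"\x00" * 16):
            return None  # field was never set
        s = raw.decode("ascii", "replace")
        tz = struct.unpack("b", pvd[lo + 16:lo + 17])[0]  # signed, 15-minute GMT units
        return "%s-%s-%s %s:%s:%s (GMT%+d min)" % (
            s[0:4], s[4:6], s[6:8], s[8:10], s[10:12], s[12:14], tz * 15)

    return {
        "volume_id": text(40, 72),
        "application_id": text(574, 702),  # frequently names the burning software
        "created": dec_datetime(813),
        "modified": dec_datetime(830),
    }
```

Compare the created timestamp against the system event logs as described above, remembering to check for system time changes first.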

4. If there were other CDs burned
If the CD’s have been burned with Windows explorer, there will be events with id 133 in the eventlog. In the registry key described in step 1 will be entries for staging folders. Examine these forensically to see if there are residues of files there.

Other burning applications also usually have a temp or staging folder for burning CD’s. You can check these folders for residues indicating that files have been burned to a CD.

5. Which user burned the CDs
In most cases, the location of the log or staging files in the user's AppData folder will indicate which user created the CD's.

If not, use the time that the CD was created to check the security event log for audit events that indicate which user was logged on to the system at the time of the creation of the CD's. To burn a disc, a user usually needs to log on physically to the system, so look for logons of types 2 and 7 prior to burning the disc.
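The logon-type filter in that last step can be sketched as below; the parsing of Security log records into dicts is assumed to be handled by whatever log tool you already use, and the names here are mine:

```python
# Well-known Windows Security log logon types relevant to the reasoning above
LOGON_TYPES = {
    2: "Interactive (console)",
    3: "Network",
    7: "Unlock",
    10: "RemoteInteractive (RDP)",
}

def console_sessions(events):
    # events: dicts already parsed out of logon records by your log tool;
    # types 2 and 7 indicate a user physically at the keyboard
    return [e for e in events if e.get("LogonType") in (2, 7)]
```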

Make some time for next week's Sunday Funday and you too can win a prize worth researching for!

Daily Blog #149: PFIC Day 2 Notes

Hello Reader,
           Here are my notes from Day 2 of PFIC, this is the last of these posts as I didn't attend the day 3 session in depth as snow was falling and clients were calling. I'll be updating these posts with the slides from the relevant lectures so you can see those as well.

Day 2 - PFIC Notes

8:00am Session - Ira Winkler ' The Cyber Jungle'

Ira is very personable, I like his show as well as him
Two good stories so far, the first promoting infragard (Ira is the president of his local infragard) the other involving credit card fraud.

Why does the media ask dumb questions on tv? The guest gives them dumb questions to ask

Executives don't want to disclose and notify, this is something I also have found

Crypto Locker story time

pointing out FUD about CryptoLocker that's out there: a bad media report showed a technical person saying that firewalls, service packs and good passwords could have prevented CryptoLocker.

another good story, this one about a reporter's experience with some attorneys

Reporters are under pressure to get multiple stories a day. This can hurt parties who can't handle the media well or answer questions quickly.

An interesting story about how ankle bracelets are being removed and being used to commit crimes in las vegas. Then placing their bracelet back on when they get back to their house. The bracelets are not being monitored actively and the process is broken.

Downtown streetlights in Las Vegas will be able to monitor audio in the future, and in the near future officers will be able to monitor this audio via iOS apps on their phones. Ira is wondering if anyone is properly securing this channel, applying ISO 27k or another security standard, to prevent non-LEO from listening.

Make sure to listen to cyberjungleradio.com for his weekly podcast.

10:00am session Python for web application security testing


This is a talk on writing python code for web app testing rather than popular tools.

Recommends head first programming to learn python

Showing how to build a buffer overflow script in python
All of these scripts and example app is on a dropbox shared folder for those that want to try this at home.

This isn't your normal DFIR presentation, very infosec focused. The audience seems interested though so that's good.



Showing how web apps store data and failed logins from buffer overflow attempts within a user authentication form. This is not a python tutorial but rather a show of what's possible and what it leaves behind.

Edited some code and talked about what things affect and change.

Moved on to XSS attacks
Talking about escaping functions like htmlspecialchars to prevent xss
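A note on that function name: htmlspecialchars is PHP's escaping function; the equivalent in Python's standard library is html.escape. A minimal sketch of the output-escaping idea (the wrapper name is mine):

```python
import html

def render_comment(user_input: str) -> str:
    # escape &, <, > and quotes so injected markup is rendered as text,
    # never executed by the browser
    return "<p>%s</p>" % html.escape(user_input)

print(render_comment('<script>alert(1)</script>'))
```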

Moving on to how to use python to do testing and getting over common hurdles. First hurdle is basic auth

don't store credentials within code, retrieve it via prompts to the user on execution
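Following that advice, getting over the basic-auth hurdle without hardcoding credentials takes only the standard library. A sketch (names are mine): HTTP Basic auth is just base64 of "user:password" sent in a header, and getpass keeps the password off the screen and out of the source.

```python
import base64
from getpass import getpass

def basic_auth_header(user: str, password: str) -> dict:
    # HTTP Basic auth is base64("user:password") in an Authorization header
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return {"Authorization": "Basic " + token}

# prompt at runtime rather than storing credentials in the script:
# headers = basic_auth_header(input("user: "), getpass())
```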

All functions covered so far are built-in python libs.
He is now going into Scapy, which is a 'full featured library for performing network operations': a packet capture/manipulation/creation/replay lib.


Live demonstration of capture, reviewing and replaying traffic with scapy
Showing the built in fuzzer within scapy
Showing how to spoof the traffic in your fuzzing with scapy

Ending now and discussing the benefits of python. He's not saying not to use off-the-shelf tools, but if you want to be successful and understand more, getting lower level with python directly will allow you to be more versatile.

10:30am Session - Me!

It was amazing!
It was wonderful!
Offers of free coffee were given!
I'm writing this before my session but this is how I want it to go.
In reality it went well, but I had live demos fail as they are apt to do; even Excel was crashing on me. Luckily I added in pre-generated results to move things forward

11:30am Session - Jake Williams IaaS forensics

IaaS is the acronym that represents most of the cloud virtualized systems we talk about, infrastructure as a service
Get a Incident Response plan and make sure it contains what to do for both your internal and externally hosted assets
You are stuck trusting the hypervisor at some base level
In a commercially hosted cloud you don't have access to the hypervisor (amazon) if you are a privately hosted cloud (your own esx server) you do have access to the hypervisor.
You need to validate that the hypervisor has not been compromised
If the hypervisor has been tampered with you need to collect additional evidence.
Jake has found an esx server where the hypervisor was compromised and thus can no longer say it doesn't happen. If the hypervisor is compromised then the attacker can control physical memory outside of the guest os and guest os artifacts.
There are hypervisor logs that you should be collecting.
This is not typical though, but you should grab the logs to be sure
The vm-support command will output a tgz file with the log and vm inventories that you need
USB over IP devices are separately logged by the hypervisor versus USB devices physically plugged in
Don't use shared admin accounts if you want easy attribution of admin actions
Introspection isn't easily detected by the attacker and can be normally used to collect data outside of the attackers view
Inband (non hypervisor based actions) are bad because bad guys can easily detect your response effort
You can't do out-of-band actions on public clouds (Amazon) as they don't give you hypervisor access, so you're stuck with traditional live response
Making full disk images of cloud hosts is typically difficult as your bandwidth to the site is your bottleneck.
Amazon and hopefully soon rackspace will write your data to a physical disk and mail it to you
You supply the drive and cables, they charge you $80 per disk, they will accept a shipping label so you can get it via fedex
Accounting records will be provided but they don't do Chain of Custody
The amazon feature mention called 'bulk export' is not meant as a forensic/ir service
A good alternative is to spin up a forensic/ir virtual instance so you can keep the data within the cloud and speed your investigation
Have a dongle restricted software you want to run in the cloud? Use USB over IP
The hardest part of dealing with hosted/cloud hosted systems is making sure the tech is going to follow your procedures and not shut down the system or kill the vm instance
Snapshots are great, memory is better
Public cloud (amazon, etc..) don't allow you to request physical memory out of band from the hypervisor
Public cloud snapshots are disk states but not memory states
If you capture the memory to a network share, make sure you lock down who can access them or else you may have non authorized personnel accessing secrets
You can still do CoC yourself, f-response is a great imaging solution for cloud hosts
If you get compromised public providers like amazon limit their liability in case of a compromise from their end to a refund of that months fees
If you don't want to use f-response FAU is another good tool to use for live cloud imaging, but make sure to put it over an encrypted tunnel
Protect your memory dumps, possibly encrypt them
Out of band imaging is still the best option
HP has internal resources that can out of band image a HP hosted cloud server
The issue with imaging logical disks in non-VMware clouds is that tools often can't find the end of the disk and keep writing forever
test your tools in your cloud for your IR plan to find out which ones fail silently
Hypervisor imaging is as simple as snapshotting

1:30pm Session - Memory forensics with Chad Tilbury

I should have gone into this session but I was too busy talking to people through lunch. I did see the end and recognized a subset of slides from FOR508, but he ended it with a nice preview of Mac and Linux memory forensics.

2:30pm Session - Recovering your costs in ediscovery

Quote from a judge on the fair housing center of southwest michigan v Hunt where the judge chastised a party for turning the litigation into a e-discovery workshop.
Nice review of which ESI costs can be recoverable, this is good information for me to advise my clients when they are not aware this exists.
If you want to recover costs you have to show detail and provide affidavits that explain why it was necessary and how  the costs break down.
Don't be vague on invoices and document your work if you want your costs to be recoverable for your client in the event they prevail
Moore v Weinstein - Prevailing party received $36,196, of which e-discovery service provider made up $22,000 of and asking for $40,000
In-house work done within a party's firm needs to have reasonable costs, and the work done must justify the rate being applied
A fun sidebar about thor and shield and whether working with thor would show the government endorsing a religion.
Interesting, court rulings have come out stating that native productions of documents are not recoverable costs
No cost for hosting, courts still compare data hosting to warehouses holding paper - non recoverable costs
Forensic costs within ediscovery is recoverable, forensic investigation fees of an expert witness are also recoverable separately
Second 'geek break': a discussion on how wills would affect 12 regenerations of Dr. Who







Daily Blog #150: Forensic artifacts from renaming accounts in Windows 7

Hello Reader,
             One of things I enjoy is talking to other examiners out there and hearing about mysteries they find that our current knowledge base does not cover. I like hearing about these because then I can try to help by setting up a test platform to determine what is causing the underlying mystery and identify a new artifact that we can all benefit from. Such an instance happened yesterday in a discussion with an examiner, who can attribute himself if he chooses to, regarding a system he was looking at.

The system in question was running Windows 7 and had a peculiar situation occurring. When the examiner looked at the file system, a user SID was associated with a name we'll call 'NameB'; however, the SAM and event logs referenced the same SID with another user name we'll call 'NameA'. The same SID, and thus the same user, was being referenced by two different names from two different sources, both generated by the system itself. This was odd to say the least, and the examiner pointed out that he saw 'NameB' in the 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList' key while he saw 'NameA' in the SAM registry located within %SYSTEM DRIVE%\Windows\System32\Config\SAM.

My hypothesis, which I thought unlikely, was that there was a bug in Windows: if an account is first created and then renamed, the two registries would be out of sync. To test this hypothesis I took one of my stock Windows 7 virtual machines and did the following:

1. I created two accounts
           a. standarduser - A non administrative account
           b. testuser - An administrative account
2. I then logged into each account and logged off of them to make sure all profile data was created
3. I then rebooted the system to make sure all changes were flushed to the system registries
4. I then logged in as a third user and renamed both accounts
           a. I renamed standarduser to notstandarduser
           b. I renamed testuser to NotTestUser
5. I then inspected the registries

What I found was interesting. The profile names in the Users directory and within the stub of the SAM file remained the same but within the 'V' key under SAM\Domains\Account\Users\\V I found the old and new names listed. 

'testuserNotTestUser'
'standarduserNotStandardUser'

I need to find the specification of this key so we can parse this automatically as there is no termination character between the two names.
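Until that specification turns up, here is a hedged sketch based on publicly circulated SAM reverse-engineering notes: the V value begins with a table of 12-byte (offset, length, flags) entries whose string offsets are relative to 0xCC, and it is that offset table, not a terminator, that delimits the concatenated names. Treat the field indices below as an assumption to verify against your own hives:

```python
import struct

def sam_v_names(v: bytes) -> dict:
    # Entry i of the V value's header table sits at byte i * 12; string
    # offsets are relative to 0xCC. Per public SAM reversing notes,
    # entry 1 is the account name and entry 2 the full name -- verify
    # these indices against known hives before relying on them.
    def entry(i):
        off, length = struct.unpack_from("<II", v, i * 12)
        return v[0xCC + off:0xCC + off + length].decode("utf-16-le")
    return {"username": entry(1), "fullname": entry(2)}
```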

So if in the future you have a case where your names of ownership and login don't line up, check SAM\Domains\Account\Users\\V to find out if the account was renamed. Of course if you only relied on the SID you wouldn't have this problem, but most of us like to attribute a username as well, as that's what others outside of our field would understand.

Hope this was helpful! Leave a comment if you've seen something similar or have found other changes that can cause similar behavior.

Daily Blog #151: Automating FTK Filter creation

Hello Reader,
           Normally I would just include something like this in a Saturday Reading link, but today was pretty busy and this is something pretty useful to those of you using FTK, so I thought it was worth its own post.

If you use FTK then you know about the power of filters; much like in other tools, you can use filters in FTK to lock down your views to different dates, hashes, file types, paths, categories, etc... We use this feature a lot to take advantage of some of the harder-to-find FTK features like LNK metadata export. Well, if you are using filters on a regular basis and writing long filters, like only showing files with a certain hash value, you should check out this tool written by David Dym; read the blog post here http://redrocktx.blogspot.com/2013/11/scripting-with-ftk-filters.html.

You might know David Dym as the author of shadowkit but I know him as a fellow employee of G-C Partners where we've been using this tool on a number of cases. In the example shown in his blog entry he is getting FTK to show only those itemids listed. We do this a lot with attorneys providing them a spreadsheet of files with itemids included, telling them to mark which ones they are interested in. Then we can just export out those itemids to FTK in a filter form and easily export out the data they want.

That's just one example but you can extend and automate any number of large and long filters this way and then just import them into your case.

Tomorrow is the forensic lunch!