
Daily Blog #373: Automating DFIR with dfVFS part 3

Hello Reader,
           In our last post I expanded on the concept of path specification objects. Now let's expand the support of our dfVFS code to go beyond just forensic images and known virtual drives to live disks and raw images.

Why is this not supported with the same function call, you ask? Live disks and raw images do not have any magic headers that dfVFS can parse to know what it is dealing with. So instead we need to add some conditional logic to help it know when to test whether what we are working with is an image or a raw disk.

First as we did last time let's see what the code looks like now:
import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
## Adding Resolver
from dfvfs.resolver import resolver
## Adding raw support
from dfvfs.lib import raw

source_path="dfr-16-ntfs.dd"

path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
    path_spec)

if len(type_indicators) > 1:
    raise RuntimeError((
        u'Unsupported source: {0:s} found more than one storage media '
        u'image types.').format(source_path))

if len(type_indicators) == 1:
    path_spec = path_spec_factory.Factory.NewPathSpec(
        type_indicators[0], parent=path_spec)

if not type_indicators:
    # The RAW storage media image type cannot be detected based on
    # a signature so we try to detect it based on common file naming
    # schemas.
    file_system = resolver.Resolver.OpenFileSystem(path_spec)
    raw_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_RAW, parent=path_spec)

    glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
    if glob_results:
        path_spec = raw_path_spec

volume_system_path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
    parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_system_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
    volume_identifier = getattr(volume, 'identifier', None)
    if volume_identifier:
        volume_identifiers.append(volume_identifier)

print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
    volume = volume_system.GetVolumeByIdentifier(volume_identifier)
    if not volume:
        raise RuntimeError(
            u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

    volume_extent = volume.extents[0]
    print(
        u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
            volume.identifier, volume_extent.offset, volume_extent.size))

print(u'')


The first difference is two more helper imports from dfVFS:
## Adding Resolver
from dfvfs.resolver import resolver
## Adding raw support
from dfvfs.lib import raw

The first one, resolver, is a helper function that attempts to resolve path specification
objects to file system objects. You might remember that in pytsk the first thing we did
after getting a volume object was to get a file system object. Resolver is doing this for us.

The second is 'raw'. Raw is the dfVFS helper that supports raw images. It defines the
RawGlobPathSpec function that creates a special path specification object for raw images.

Next we are changing what image we are working with to a raw image:
source_path="dfr-16-ntfs.dd"

We are now ready to deal with a raw image aka a dd image or live disk/partition.

First we are going to change the conditional logic around our type indicator helper function call.
In the first version of the script we knew the type of image we were dealing with so we didn't bother
testing what the type_indicator function returned. Now we could be dealing with multiple types of
images (forensic image, raw image, unknown types) so we need to put in some conditional testing to deal with it.

if len(type_indicators) > 1:
    raise RuntimeError((
        u'Unsupported source: {0:s} found more than one storage media '
        u'image types.').format(source_path))

if len(type_indicators) == 1:
    path_spec = path_spec_factory.Factory.NewPathSpec(
        type_indicators[0], parent=path_spec)

The first check we do on what is returned into type_indicators is to see if more than one type has
been identified. Currently dfVFS only supports one image type within a single file. I'm not quite
sure when this would happen, but it's prudent to check for. If this condition were to occur we use the built-in raise statement to throw a RuntimeError, printing a message to the user that we don't support this type of media.

The second check is what we saw in the first example: there is one known type of media stored within this image. You can tell we are checking for one type because we are calling the len function on the type_indicators list object and testing whether the length is 1. We then use what was returned (type_indicators[0], the first element in the list) to create our path_spec object for the image. One thing does change here: we are no longer storing what the NewPathSpec function returns in a new variable. Instead we are taking advantage of the layering described in the prior post and storing the new object in the same variable name, knowing that the prior object has been layered in as the parent because parent was set to path_spec.

Only two more changes and our script is done. Next we need to check whether there is no known media format stored in type_indicators. We do that by testing whether nothing is stored inside the variable using 'if not'. This basically says: if type_indicators is empty (nothing was returned from the function that populated it), run the following code.

if not type_indicators:
    # The RAW storage media image type cannot be detected based on
    # a signature so we try to detect it based on common file naming
    # schemas.
    file_system = resolver.Resolver.OpenFileSystem(path_spec)
    raw_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_RAW, parent=path_spec)



There are two things this code is going to do if no type was returned, indicating this is possibly a raw image. The first is to call the resolver helper class function OpenFileSystem with the path_spec object we have made. If this is successful we then create a new path specification object, manually setting the type of the layer we are adding to TYPE_INDICATOR_RAW, a raw image.

The last change we make is taking that new raw image path specification and making it work with the other dfVFS functions that may not explicitly work with a raw image object. We do that by calling the raw helper's RawGlobPathSpec function and passing it two objects. The first is the file system object we made in the section just above and the second is the raw_path_spec object we made. RawGlobPathSpec then evaluates those objects and, if it is successful, returns results indicating that the raw path specification is one the rest of the library will work with.

glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
if glob_results:
    path_spec = raw_path_spec

We then test the glob_results variable to make sure something was stored within it, a sign the call ran successfully. If there is in fact something contained within it we assign raw_path_spec to our path_spec variable.

That's it!

After running the script this should be what you see:

The following partitions were found:
Identifier      Offset                  Size
p1              65536 (0x00010000)      314572800
You can download the image I'm testing with here: http://www.cfreds.nist.gov/dfr-images/dfr-16-ntfs.dd.bz2

You can download the source code for this example from GitHub here: https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv2.py

Tomorrow we continue to add more functionality!

Daily Blog #374: Automating DFIR with dfVFS part 4

Hello Reader,
            In our last entry in this series we took our partition listing script and added support for raw images. Now our simple script should be able to work with forensic images, virtual disks, raw images and live disks.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


Now that we have that working let's actually get it to do something useful, like extract a file.

First let's look at the code now:

import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
from dfvfs.resolver import resolver
from dfvfs.lib import raw

source_path="stage2.vhd"

path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
    path_spec)

if len(type_indicators) > 1:
    raise RuntimeError((
        u'Unsupported source: {0:s} found more than one storage media '
        u'image types.').format(source_path))

if len(type_indicators) == 1:
    path_spec = path_spec_factory.Factory.NewPathSpec(
        type_indicators[0], parent=path_spec)

if not type_indicators:
    # The RAW storage media image type cannot be detected based on
    # a signature so we try to detect it based on common file naming
    # schemas.
    file_system = resolver.Resolver.OpenFileSystem(path_spec)
    raw_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_RAW, parent=path_spec)

    glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
    if glob_results:
        path_spec = raw_path_spec

volume_path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
    parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
    volume_identifier = getattr(volume, 'identifier', None)
    if volume_identifier:
        volume_identifiers.append(volume_identifier)

print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
    volume = volume_system.GetVolumeByIdentifier(volume_identifier)
    if not volume:
        raise RuntimeError(
            u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

    volume_extent = volume.extents[0]
    print(
        u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
            volume.identifier, volume_extent.offset, volume_extent.size))

print(u'')

path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/p1',
    parent=path_spec)

mft_path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
    parent=path_spec)

file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)

stat_object = file_entry.GetStat()

print(u'Inode: {0:d}'.format(stat_object.ino))
print(u'Inode: {0:s}'.format(file_entry.name))
extractFile = open(file_entry.name, 'wb')
file_object = file_entry.GetFileObject()

data = file_object.read(4096)
while data:
    extractFile.write(data)
    data = file_object.read(4096)

extractFile.close()
file_object.close()

The first thing I changed was the image I'm working with, back to stage2.vhd.

source_path="stage2.vhd"

 At this point though you should be able to pass it any type of supported image.

Next, after the code we first wrote to list out the partitions within an image, we add a new path specification layer to make an object that points to the first partition within the image.

path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/p1',
        parent=path_spec)
You can see we are using the TSK_PARTITION type again because we know this is a partition, but the location has changed from the prior time we made a partition path spec object. This is because our prior object pointed to the root of the image so we could iterate through the partitions, while the new object references just the first partition.

Next we make another path specification object that builds on the partition type object.

mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=path_spec)

Here we are creating a TSK object and telling it that we want it to point to the file $MFT at the root of the file system. Notice we didn't have to tell it the kind of file system, the offset to where it begins or any other data. The resolver and analyzer helper classes within dfVFS will figure all of that out for us, if they can. In tomorrow's post we will put in some more conditional code to detect when they in fact cannot do that for us.

So now that we have a path spec object that references the file we want to work with, let's get an object for that file.

file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)

The resolver helper class OpenFileEntry function takes the path spec object we made that points to the $MFT and, if it can access it, returns an object that references it.
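
If the path spec cannot be resolved you won't get a usable object back, so a small guard is worth adding before calling GetStat. This is a minimal sketch that continues from the script above and assumes OpenFileEntry returns None when the path spec cannot be resolved; depending on the dfVFS version it may instead raise an exception, in which case a try/except around the call is the equivalent guard.

file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)
if file_entry is None:
    # Assumption: OpenFileEntry returns None for an unresolvable path spec.
    # Some dfVFS versions raise an error instead, so wrapping the call in
    # try/except is an equally valid approach.
    raise RuntimeError(u'Unable to open /$MFT in the first partition.')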

Next we are going to gather some data about the file we are accessing.

stat_object = file_entry.GetStat()

First we use the GetStat function available from the file entry object to return information about the file into a new object called stat_object. This is similar to running the stat command on a file.

Next we are going to print what I'm referring to below as the Inode number:
print(u'Inode: {0:d}'.format(stat_object.ino))

NTFS doesn't have inodes; this is actually the MFT record number, but the concept is the same. We are reading the stat_object property ino to access the MFT record number. You could also access the size of the file, its associated dates and other data, but this is a good starting place.
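
For example, continuing from the script above, a few more values can be pulled off the same stat object. Treat the attribute names below (size, atime, mtime, crtime) and the assumption that the timestamps are POSIX seconds as details that can vary between dfVFS versions, which is why getattr with a default is used here.

from datetime import datetime

# Attribute names are assumptions based on common dfVFS stat objects;
# getattr() keeps the script running if one of them is not populated.
for attribute in ('size', 'atime', 'mtime', 'crtime'):
    value = getattr(stat_object, attribute, None)
    if value is None:
        continue
    if attribute != 'size':
        # Assumption: timestamps are stored as POSIX seconds.
        value = datetime.utcfromtimestamp(value)
    print(u'{0:s}: {1!s}'.format(attribute, value))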

Next we want to print the name of the file we are accessing.
print(u'Inode: {0:s}'.format(file_entry.name))


The file_entry object's name property contains the name. This is much easier than with pyTsk where we had to walk a meta sub-object property structure to get the file name out.

Now we need to open a file handle to write the MFT data out to:

extractFile = open(file_entry.name,'wb')

Notice two things. One, we are using the file_entry.name property directly in the open call, which means our extracted file will have the same name as the file in the image. Two, we are passing in the mode 'wb', which means the file handle can be written to and, when it is written to, should be treated as a binary file. This is important on Windows systems because without the binary mode flag any newline bytes in the binary data you write could be translated.

Now we need to interact with not just the properties of the file in the image, but the data it's actually storing.

file_object = file_entry.GetFileObject()

We do that by calling the GetFileObject function from the file_entry object. This gives us a file object, just like extractFile, that normal Python file functions can read from. The handle is stored in the variable file_object.

Now we need to read the data from the file in the image and then write it out to a file on the disk.

data = file_object.read(4096)
while data:
          extractFile.write(data)
          data = file_object.read(4096)

First we need to read from the file handle we opened into the image. We read 4k of data and then enter a while loop. The while loop says that as long as the read call on file_object keeps returning data, keep reading 4k chunks. When we reach the end of the file our data variable will hold an empty value (which evaluates as false) and the while loop will stop iterating.

While there is data the write function on the extractFile handle will write the data we read and then we will read the next 4k chunk and iterate through the loop again.

Lastly, for good measure, we are going to close the handles to both the file within the image and the file we are writing to on our local disk.

extractFile.close()
file_object.close()

And that's it!

In future posts we are going to access volume shadow copies, take command line options, iterate through multiple partitions and directories and add a GUI. Lots to do, but we will do it one piece at a time.

You can download this post's code here on GitHub: https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv3.py

Daily Blog #375: Video Blog showing how to verify and test your dfVFS install

Hello Reader,
        This is a first for me: I've created a video blog today to show how to verify and test that your dfVFS installation was successful on Windows.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


Watch it here: https://youtu.be/GI8tbi74DFY

or below:

Daily Blog #376: Saturday Reading 4/16/16

Hello Reader,

          It's Saturday!  Soccer games, birthday parties and forensics, oh my! That is my weekend, how's yours? If it's raining where you are and the kids are going nuts, here are some good links to distract you.

1. Didier Stevens posted an index of all the posts he made in March, https://blog.didierstevens.com/2016/04/17/overview-of-content-published-in-march/. If you are at all interested in malicious document deconstruction and reverse engineering it's worth your time to read.

2. If you've done any work on ransomware and other drive-by malware deployments, this article by Brian Krebs on the sentencing of the Blackhole exploit kit author is worth a read, http://krebsonsecurity.com/2016/04/blackhole-exploit-kit-author-gets-8-years/

3. Harlan has a new blog up this week with some links to various incident response articles he's found interesting, http://windowsir.blogspot.com/2016/04/links.html. This includes a link to the newly published 2nd edition of Windows Registry Forensics!

4. Mary Ellen has a post up with a presentation she made regarding the analysis of phishing attacks, http://manhattanmennonite.blogspot.com/2016/04/gone-phishing.html. The presentation also links to a malware lab. Maybe this will lead to more posts from Mary Ellen.

5. Adam over at Hexacorn has a very interesting write-up on EICAR, http://www.hexacorn.com/blog/2016/04/10/a-few-things-about-eicar-that-you-may-be-not-aware-of/. I wasn't aware of EICAR until Adam posted about it and found the whole read fascinating. EICAR is apparently a standard file created to let antivirus developers test their own software and, as Adam discusses, others have made their own variations.

6. In a bit of inception posting, Random Access has a weekly reading list of his own on his blog. This is his post from 4/10/16, https://thisweekin4n6.wordpress.com/2016/04/10/week-14-2016/. He does a very good job covering things I miss and frankly I should just be copying and pasting his posts here, but I think that's looked down on. 

So Phil, if you are reading this. Do you want to post here on Saturdays?

That's all for this week! Did I miss something? Post a link to a blog or site I need to add to my feedly below.

Daily Blog #377: Sunday Funday 4/17/16

Hello Reader,
              If you have been following the blog the last two weeks you would have seen it's been all about dfVFS. Phil aka Random Access posted something I was thinking about on his blog, https://thisweekin4n6.wordpress.com, that I thought was worthy of a Sunday Funday challenge. In short, Phil saw that I posted a video regarding how to verify dfVFS was installed correctly and a whole post just on installing it, and mentioned that someone should automate this process. I agree Phil, and now I turn it over to you, Reader: let's try out your scripting skills in this week's Sunday Funday Challenge.

The Prize:
$200 Amazon Giftcard

The Rules:

  1. You must post your answer before Monday 4/18/16 3PM CST (GMT -5)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com. Please state in your email if you would like to be anonymous or not if you win.
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post



The Challenge:
Read the following blog post: http://www.hecfblog.com/2015/12/how-to-install-dfvfs-on-windows-without.html and then write a script in your choice of scripting language that will pull down and install those packages for a user. Second, the script should then run the dfVFS testing script shown in this video http://www.hecfblog.com/2016/04/daily-blog-375-video-blog-showing-how.html to validate the install.

Daily Blog #378: Automating DFIR with dfVFS part 5

Hello Reader,

Wondering where yesterday's post is? Well, there was no winner of last weekend's Sunday Funday.
That's ok though, because I am going to post the same challenge this Sunday so you have a whole week to figure it out!

-- Now back to our regularly scheduled series --

              I use Komodo from ActiveState as my IDE of choice when writing Perl and Python. I bring this up because one of the things I really like about it is the debugger it comes with, which allows you to view all of the objects you have made and their current assignments. I was thinking about the layer cake example I crudely drew in ASCII in a prior post when I realized I could show this much better from the ActiveState debugger.

So here is what the path spec object we made to access the $MFT in a VHD looks like.


I've underlined in red the important things to draw your attention to when you are trying to understand how that file path specification object we built can access the MFT and all the other layers involved.

So if you look you can see from the top down it's:

  • TSK Type with a Location of /$MFT
    • With a parent of TSK Partition type with a location of /p1
      • With a parent of VHDI type 
        • With a parent of OS type with a location of the full path to where the vhd I'm working with sits.

Let's look at the same object with an E01 loaded.


Notice what I highlighted: the image type has changed from VHDI to EWF. Otherwise the object, its properties and the methods are the same.

Let's do this one more time to really reinforce this with a raw/dd image.


Everything else is the same, except for the type changing to RAW. 

So no matter what type of image we are working with, dfVFS allows us to build an object in layers that permits the code that follows not to have to worry about what sits underneath. It normalizes access across all the different image type libraries so we can avoid workarounds like the ones we needed in pytsk.
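
You don't need the debugger to see this layering; a few lines of Python can walk the parent chain of any path spec object. The sketch below assumes each dfVFS path spec exposes a type_indicator property, an optional location attribute and a parent attribute, which matches the objects built in the earlier scripts, but verify against your installed version.

def print_path_spec_layers(path_spec):
    """Walk a dfVFS path spec from the top layer down to the OS layer."""
    depth = 0
    current = path_spec
    while current is not None:
        location = getattr(current, 'location', None)
        print(u'{0:s}{1:s} location: {2!s}'.format(
            u'  ' * depth, current.type_indicator, location))
        current = getattr(current, 'parent', None)
        depth += 1

# Example: print_path_spec_layers(mft_path_spec) from the part 4 script
# should show TSK -> TSK_PARTITION -> VHDI (or EWF/RAW) -> OS.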

Tomorrow, more code!

Daily Blog #379: Automating DFIR with dfVFS part 6

Hello Reader,
         It's time to continue our series by iterating through all the partitions within a disk or image, instead of just hard coding one. To start with you'll need another image, one that not only has more than one partition but also has shadow copies for us to interact with next.

You can download the image here:
https://mega.nz/#!L45SRYpR!yl8zDOi7J7koqeGnFEhYV-_75jkVtI2CTrr14PqofBw


If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


First let's look at the code now:

import sys
import logging

from dfvfs.analyzer import analyzer
from dfvfs.lib import definitions
from dfvfs.path import factory as path_spec_factory
from dfvfs.volume import tsk_volume_system
from dfvfs.resolver import resolver
from dfvfs.lib import raw

source_path="Windows 7 Professional SP1 x86 Suspect.vhd"

path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_OS, location=source_path)

type_indicators = analyzer.Analyzer.GetStorageMediaImageTypeIndicators(
    path_spec)

if len(type_indicators) > 1:
    raise RuntimeError((
        u'Unsupported source: {0:s} found more than one storage media '
        u'image types.').format(source_path))

if len(type_indicators) == 1:
    path_spec = path_spec_factory.Factory.NewPathSpec(
        type_indicators[0], parent=path_spec)

if not type_indicators:
    # The RAW storage media image type cannot be detected based on
    # a signature so we try to detect it based on common file naming
    # schemas.
    file_system = resolver.Resolver.OpenFileSystem(path_spec)
    raw_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_RAW, parent=path_spec)

    glob_results = raw.RawGlobPathSpec(file_system, raw_path_spec)
    if glob_results:
        path_spec = raw_path_spec

volume_path_spec = path_spec_factory.Factory.NewPathSpec(
    definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/',
    parent=path_spec)

volume_system = tsk_volume_system.TSKVolumeSystem()
volume_system.Open(volume_path_spec)

volume_identifiers = []
for volume in volume_system.volumes:
    volume_identifier = getattr(volume, 'identifier', None)
    if volume_identifier:
        volume_identifiers.append(volume_identifier)

print(u'The following partitions were found:')
print(u'Identifier\tOffset\t\t\tSize')

for volume_identifier in sorted(volume_identifiers):
    volume = volume_system.GetVolumeByIdentifier(volume_identifier)
    if not volume:
        raise RuntimeError(
            u'Volume missing for identifier: {0:s}.'.format(volume_identifier))

    volume_extent = volume.extents[0]
    print(
        u'{0:s}\t\t{1:d} (0x{1:08x})\t{2:d}'.format(
            volume.identifier, volume_extent.offset, volume_extent.size))

    volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

    mft_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK, location=u'/$MFT',
        parent=volume_path_spec)

    file_entry = resolver.Resolver.OpenFileEntry(mft_path_spec)

    stat_object = file_entry.GetStat()

    print(u'Inode: {0:d}'.format(stat_object.ino))
    print(u'Inode: {0:s}'.format(file_entry.name))
    outFile = volume_identifier+file_entry.name
    extractFile = open(outFile, 'wb')
    file_object = file_entry.GetFileObject()

    data = file_object.read(4096)
    while data:
        extractFile.write(data)
        data = file_object.read(4096)

    extractFile.close()
    file_object.close()
    volume_path_spec = ""
    mft_path_spec = ""

Believe it or not we didn't have to change much here to go from looking at one partition and extracting the $MFT to extracting it from all the partitions. We had to do four things.

1. We moved our file extraction code over by one indent, allowing it to execute as part of the for loop we first wrote to print out the list of partitions in an image. Remember that in Python we don't use braces to determine how the code will be executed; it's all indentation that decides how the code logic will be read and followed.
2. Next we changed the location our volume path specification object is set to, from a hard coded /p1 to whatever volume identifier we are currently looking at in the for loop.

 volume_path_spec = path_spec_factory.Factory.NewPathSpec(
        definitions.TYPE_INDICATOR_TSK_PARTITION, location=u'/'+volume_identifier,
        parent=path_spec)

You can see that the location is now set to u'/' with the volume_identifier variable appended. This resolves to /p1, /p2, and so on for as many partitions as the image has.

3. Now that we are going to be extracting this file from multiple partitions we don't want to overwrite the file we previously extracted, so we need to make the file name unique. We do that by appending the partition number to the file name.

  outFile = volume_identifier+file_entry.name
  extractFile = open(outFile,'wb')

This results in files named p1$MFT, p2$MFT, and so on. To accomplish this we make a new variable called outFile, which is set to the partition number (volume_identifier) appended with the file name (file_entry.name). Then we pass that to the open call we wrote before.

4. One last simple change.

volume_path_spec=""
mft_path_spec=""

We are setting our partition and file path spec objects back to empty values. Why? Because otherwise
they persist across loop iterations and keep layering onto the prior object, which
results in very strange errors.

That's it! No more code changes. 

You can get the code from Github: 
https://github.com/dlcowen/dfirwizard/blob/master/dfvfsWizardv4.py


In the next post we will be iterating through shadow copies!

Daily Blog #380: National CCDC 2016

Hello Reader,
           I'm in San Antonio for the National Collegiate Cyber Defense Competition, which starts at 10am CST 4/22/16. If you didn't know, I lead the red team here at Nationals, where the top 10 college teams in the country come and find out who does the best job of defending their network while completing business objectives.

I'm hoping to follow up this post with some videos and links to what happens tomorrow. In the meantime make sure to follow #CCDC or #NCCDC on Twitter to watch some of our funny business in real time.

Daily Blog #381: National CCDC Redteam Debrief

Hello Reader,
     The 11th year of the National Collegiate Cyber Defense Competition has ended; congratulations to the University of Central Florida for their third consecutive win. I hope you make it back next year for another test of your school's program and its ability to transfer knowledge to new generations of blue teams.

If you want to show your support for my efforts, there is an easy way to do that. 

Vote for me for Digital Forensic Investigator of the Year here: https://forensic4cast.com/forensic-4cast-awards/


However, the team that won over the Red Team was the University of Tulsa, who came with a sense of humor. Behold their hat and badges:


Also you have to check out the player cards they made here:
https://engineering.utulsa.edu/news/tu-cyber-security-expert-creates-trading-cards-collegiate-cyber-defense-competition/

Here is my favorite:


You can download my Redteam debrief here:
https://drive.google.com/file/d/0B_mjsPB8uKOAcUQtOThUNUpTZ0k/view?usp=sharing

Building your own travel sized virtual lab with ESXi and the Intel SkullCanyon NUC

Hello Reader,
          It's been a while and I know that; sorry for not writing sooner, but to quote Ferris Bueller:

"Life moves pretty fast. If you don't stop and look around once in a while, you could miss it."

So while I've worked on a variety of cases, projects and new artifacts to share I've neglected the blog. For those of you who have been watching/listening you know I've kept up the Forensic Lunch videocast/podcast but to be fair the Blog is my first child and I've left it idle for too long.

Speaking of the Forensic Lunch if you watched this episode:
https://www.youtube.com/watch?v=Ru8fLioIVlA

You would have seen me talk about building my own portable cloud for lab testing and research. People seem to have received this very well and I've thoroughly enjoyed using it! So to that end I thought I would detail out how I set this up in case you wanted to do the same.

Step 1. Make an account on vmware.com (https://my.vmware.com/web/vmware/registration)

Step 2. Using Chrome (not sure why I had some errors in Firefox, but I did), go to this page to register for the free version of ESXi. (Note this is the free version of ESXi that will generate a license key for life; the other version will expire after 60 days.)
https://my.vmware.com/en/group/vmware/evalcenter?p=free-esxi6

Step 3. Make a note of your license key as seen in the picture below. You'll want to copy and paste this and keep it, as it won't show up as a license key associated with your MyVMware account.


Step 4. Click to download the product named "ESXi ISO image (Includes VMware Tools)". You could also download the vSphere client at this point, or you can grab it from a link embedded within the ESXi homepage when you get it installed.

Step 5. After downloading the ISO you will need to put it onto some form of bootable media for it to install onto your Intel Skull Canyon NUC, as it has no optical drive of its own. I chose to do this with a USB thumb drive. To turn the ISO into a successfully booting USB drive I used Rufus, and you can too.

Step 5a. Download Rufus: https://rufus.akeo.ie/downloads/rufus-2.11.exe
Step 5b. Execute Rufus
Step 5c. Configure Rufus to look something like what I have below, where Device is the USB thumb drive you have plugged in and ISO image is the ESXi ISO file you downloaded, then click Start.






Step 6. With your ESXi media now on a bootable USB drive you are ready to move on to the Intel Skull Canyon NUC itself. Start by actually getting one! I got mine at Fry's Electronics; Microcenter also carries them and they both price match Amazon now. If you want to get it online I would recommend Amazon, and you can support a good charity while doing so by using smile.amazon.com. I support the Girl Scouts of Northeast Texas with my purchases.

Link to Intel Skull Canyon NUC:
https://smile.amazon.com/Intel-NUC-Kit-NUC6i7KYK-Mini/dp/B01DJ9XS52/ref=sr_1_1?ie=UTF8&qid=1474577754&sr=8-1&keywords=skull+canyon

The NUC comes with a processor, case, power supply and fans all built in or in the box. What you will need to provide is the RAM and storage.


Storage
I used the Samsung 950 Pro Series 512GB NVMe M.2 drive. The NUC can actually fit two of these, but one has been enough so far for my initial testing.

Link to storage drive:
https://smile.amazon.com/Samsung-950-PRO-Internal-MZ-V5P512BW/dp/B01639694M/ref=pd_bxgy_147_img_2?ie=UTF8&psc=1&refRID=7N9JV1CX8FJQ4Y3JT858

RAM
For RAM I used Kingston HyperX with two 16GB sticks to get the full 32GB of RAM this unit is capable of.
Link to the RAM here:
https://smile.amazon.com/Kingston-Technology-2133MHz-HX421S13IBK2-32/dp/B01BNJL96A/ref=pd_sim_147_2?ie=UTF8&pd_rd_i=B01BNJL96A&pd_rd_r=7N9JV1CX8FJQ4Y3JT858&pd_rd_w=eFJsO&pd_rd_wg=HPiy3&psc=1&refRID=7N9JV1CX8FJQ4Y3JT858

You can use other storage and RAM of course, I used these because I wanted the speed of NVMe M.2 (2GB/sec reads and 1.5GB/sec writes) with all the memory I could get to feed the VMs that will be running on the NUC.

Step 7. Put the storage and RAM into the NUC, plug it into the wall, attach a USB keyboard and mouse, attach a monitor and boot up to the Intel Visual BIOS. You will need to disable the Thunderbolt controller on the NUC before installing ESXi; you can re-enable it after you are done installing ESXi.

To see what to click specifically in order to do this go here:
http://www.virten.net/2016/05/esxi-installation-on-nuc6i7kyk-fails-with-fatal-error-10-out-of-resources/

Step 8. Pop in the bootable USB drive and install ESXi.

You are now ready to start loading ISOs and VMs into your datastore, and in the next blog post I'll show how to create an isolated virtual network to put them on.

SOPs in DFIR

Hello Reader,
      It's been a while! Sorry I haven't written sooner. Things are great here at camp, aka G-C Partners, where the nerds run the show. Two years ago or so I got lucky enough to work with one of our favorite customers on generating some standard operating procedures for their DFIR lab. While we list forensic lab consulting as a service on our website we don't get to engage in helping other labs improve as often as I'd like.

It may sound like a bad idea for a consulting company to help a potential client get better at what we both do; some may see this as a way of preventing future work for yourself. That view is short sighted. In my view of DFIR, and especially in the court testimony/expert witness world, the better the internal lab is the better my life is when they decide to litigate a case. If I've helped you get to the same standard of work as my lab, then I can spend my time using the prior work as a cheat sheet to validate faster and then look for the newest techniques or research that could potentially find additional evidence.

Now beyond the business aspects of helping another lab improve, I want to talk about the general first reactions that I get, and used to have myself, regarding making SOPs for what we do. SOPs, or Standard Operating Procedures, are a good thing (TM) as they help set basic standards in methodology, quality and templated output without the expense of creative solutions... if they are done right.

When I was still doing penetration testing work I was first asked to try to make SOPs for my job. I balked at the idea, stating that you can't proceduralize my work, there are too many variables! While this was true for the entire workflow, what I didn't want to admit at the time is that there were several parts of my normal work that were ripe for procedures to be created. I didn't want to admit it because that would mean additional work for myself in creating documentation I saw as something that would slow down my job. When I started doing forensic work in 1999 I was asked the same question for my DFIR work and again pushed back, stating there were too many variables in an investigation to try to turn it into a playbook.

I was wrong then and you may be wrong now. The first thing you have to admit to yourself, and your coworkers, is that regardless of the outliers in our work there are certain things we always do. Creating these SOPs will let new and existing team members do more consistent work and create less work for you, since you no longer have to continually repeat what to do and what they should give you at the end of it. This kind of SOP will work for you if you create one that works more like a framework than a restrictive set of steps.

For examples of how SWGDE makes SOPs for DFIR look here:
https://www.swgde.org/documents/Current%20Documents/SWGDE%20QAM%20and%20SOP%20Manuals/SWGDE%20Model%20SOP%20for%20Computer%20Forensics

Read on to see what I do.

What you want:


  • You want the SOP for a particular task to work more like stored knowledge 
  • You want to explain what that particular task can and can't do to prevent confusion or misinformation
  • You want to establish what to check for to make sure everything ran correctly to catch errors early and often
  • You want to provide alternative tools or techniques that work in case your preferred tool or method fails
  • You want to link to tool documentation or blogs/articles that give more documentation on whats being done in case people want to know more
  • You want to establish a minimum set of requirements for what the output is to prevent work being done twice
  • You want to store these somewhere accessible and easy to access like a wiki/sharepoint/dropbox/google doc so people can easily refer to them and more importantly you can easily refer people to it
  • You want to build internal training that works to teach people how to perform tasks with the SOPs so they become part of your day to day work not 'that thing' that no one wants to use
  • You want your team to be part of making the SOP more helpful while keeping to these guidelines of simplicity and usability



What you don't want:

  • You don't want to create a step by step process for someone to follow, that's not a SOP those are instructions
  • You don't want to create a checklist of things to do, if you do that people will only follow the checklist and feel confined to it
  • You don't want to use words like must/shall/always unless you really mean it; whatever you write in your SOP will be used to judge your work. Keep the language flexible and open so the SOPs serve as a guideline that you as an expert can navigate around when needed
  • You don't want to put a specific person's name or email address in; people move around and your SOPs will quickly fall out of date
  • You don't want to update your SOPs every time a tool version changes, so make sure you are not making them so specific that change of one choice or parameter breaks them
  • You don't want to make these by committee, just assign them to people with an example SOP you've already made and then show them to team for approval/changes

In the end what you are aiming for is to have a series of building blocks of different common tasks you have in your investigative procedures that you can chain together for different kinds of cases.

If that's hard to visualize let's go through an example SOP for prefetch files and how it fits into a larger case flow. In this example I am going to show the type of data I would put into the framework, not the actual SOP I would write.

Why not just give you my SOP for prefetch files? My SOP will be different from yours. We have an internal tool for prefetch parsing, I want different output to feed our internal correlation system and I likely want more data than most people think is normal.



Example framework for Prefetch files:

Requirements per SOP:

  •     Scope
    •  This SOP covers parsing Prefetch files on Windows XP-8.1. 
  • Limitations
    • The prefetch subsystem may be disabled if the system you are examining is running an SSD and is Windows 7 or newer, or if it is running a server version of Windows. The prefetch directory only stores prefetch files for the last 128 executables executed on Windows XP - 7. You will need to recover shadow copies of prefetch files and carve for prefetch files to find all possible prefetch entries on an image.
  •  Procedure
    • Extract prefetch files
    • Parse prefetch files with our preferred tool, gcprefetchparser.py
  •   Expected output
    • A json file for each prefetch
  •   Template for reporting
    • Excel spreadsheet, see prefetch.xlsx for an example 
  •  QA steps
    • Validate that timestamps are being correctly formatted in excel
    • If there are no prefetch files determine if a SSD was present or if anti forensics occurred
  •  Troubleshooting
    • Make sure timestamps look correct
    • Validate that paths and executable names fit within columns
    • Make sure the number of prefetch files present equal to the number of files you parsed
    • Remember that for Windows 10 the prefetch format changed; you must use the Win10 Prefetch SOP
    • Remember that Windows Server OS's do not have Prefetch on by default
  • Alternative tools 
    • Tzworks pf 
  • Next Steps
    • Shimcache parsing
  • References
    • Links to prefetch format from metz 

Now you have a building block SOP for just parsing prefetch files. Now you can create workflows that combine a series of SOPs to guide an examiner without locking them into a series of steps. Here is an example for a malware case.

Malware workflow:
  1. Identify source of alert and indicators to look for
  2. Follow triage SOP
  3. Volatility processes SOP
  4. Prefetch SOP
  5. Shimcache SOP
  6. MFT SOP
  7. Userassist SOP
  8. timeline SOP
  9. Review prior reports to find likely malicious executable
You can then reuse the same SOPs for other workflows that range from intrusions to intellectual property cases. The goal is not to document how to do an entire case but to standardize and improve the parts you do over and over again for every case, with an eye on automating and eliminating errors to make your job easier/better and your team's work better.

Now I don't normally do this but if you are looking at this and saying to yourself, I don't have the time or resources to do this but I do have the budget for help then reach out:

info@g-cpartners.com or me specifically dcowen@g-cpartners.com

We do some really cool stuff and we can help you do cool stuff as well. I try to keep my blogs technical and factual but I feel that sometimes I hold back on talking about what we do to the detriment of you, the reader. So to be specific, for customers who do engage us to help them improve their labs/teams we:

1. Provide customized internal training on SOPs, Triforce, internal tools, advanced topics
2. Create custom tools for use in your environment to automate correlation and workflows
3. Create SOPs and processes around your work
4. Provide Triforce and internal G-C tool site licenses and support
5. Do internal table top scenarios 
6. Do report and case validation to check on how your team is performing and what you could do better
7. Build out GRR servers and work with your team to teach you how to use it and look for funny business aka threat hunting
8. Act as a third party to evaluate vendors trying to sell you DFIR solutions


Ok there I'm done talking about what we do, hopefully this helps someone. I'll be posting again soon about my new forensic workstation and hopefully more posts in the near future.


DFIR Exposed #1: The crime of silence

Hello Reader,
          I've often been told I should commit to writing down some of the stories of the cases we've worked so as to not forget them. I've been told that I should write a book of them, and maybe some day I will. Until then I wanted to share some of the cases we've worked where things went outside the norm, to help you be aware of not what usually happens, but what happens when humans get involved.

Our story begins...
It's early January, the Christmas rush has just ended and my client reaches out to me stating:

"Hey Dave, Our customer has suffered a breach and credit cards have been sent to an email address in Russia"

No problem, this is unfortunately fairly common so I respond we can meet the client as soon as they are ready. After contracts are signed we are informed there are two locations we need to visit.

1. The datacenter where the servers affected are hosted which have not been preserved yet
2. The offices where the developers worked who noticed the breach

Now at this point you are saying, Dave... you do IR work? You don't talk about that much. No, we don't talk about it much, for a reason. We do IR work through attorneys mainly to preserve privilege, and I've always been worried that making it a public part of our offering would affect the DF part of our services, as my people would be flying around the country.

BTW, did you know that IR investigations led by an attorney are considered work product, per case law from a case I'm happy to say I worked on? Read more here: https://www.orrick.com/Insights/2015/04/Court-Says-Cyber-Forensics-Covered-by-Legal-Privilege

So we send out one examiner to the datacenter while we gather information from the developers. Now you may be wondering why we were talking to the developers and not the security staff. It was the developers who found the intrusion after trying to track down an error in their code and comparing the production code to their checked-in repository. Once they compared it they found a change in their shopping cart that showed the form submitted with the payment instructions was being processed while also being emailed to a Russian-hosted email address.

The developers claimed this was their first knowledge of any change and company management was quite upset with the datacenter as they were supposed to provide security in their view of the hosting contracts signed. Ideas of liability and litigation against the hosting provider were floating around and I was put on notice to see what existed to support that.

It was then that I got a call from my examiner who went to the datacenter. He let me know that one of the employees of the hosting company handed him a thumbdrive while he was imaging the systems, saying only:

 'You'll want to read this'

You know what? He was right!

On the thumbdrive was a transcript of a ticket that was opened by the hosting company's SOC. In the transcript it was revealed that a month earlier the SOC staff had informed the same developers, who claimed to have no prior knowledge of an intrusion, that a foreign IP had logged into their VPS as root ... and that probably wasn't a good thing.

I called the attorney right away and let her know she likely needed to switch her focus from possible litigation against the hosting provider to an internal investigation to find out what actually happened. Of course we still needed to finish our investigation of the compromise itself to make sure the damage was understood from a notification perspective.

Step 1. Analyzing the compromised server

Luckily for us the SOC ticket showed us when the attacker had first logged in as the root account, which we were able to verify through the carved syslog files. We then went through the servers and located the affected files, established the mechanism used and helped them define the time frame of the compromise so they could go through their account records to find all the affected customers.

Unfortunately for our client, it was the Christmas season and one of their busiest times of year. Luckily for the client it happened after Black Friday, which IS their busiest time of the year. After identifying the access, modifications and exfil methods we turned our focus to the developers.

We talked to the attorney and came up with a game plan. First we would inform them that we needed to examine each of their workstations to make sure they were not compromised and open for re-exploitation, which was true. Then we would go back through their emails, chat logs and forensic artifacts to understand what they knew and/or did when they were first notified of the breach. Lastly we would bring them in to be interviewed to see who would admit to what.

Imaging the computers was as uneventful as you always hope it will be, but the examination turned out to be very interesting. The developers used Skype to talk to each other, and if you've ever analyzed Skype before you know that by default it keeps history forever. There in the Skype chats were the developers talking to each other about the breach when it happened, asking each other questions about the attacker's IP address, passing links and answers back and forth.

And then.... Nothing

Step 2. Investigating the developers

You see, investigations are not strictly about the technical analysis in many cases (some are, though); there is always the human element, which is why I've stayed so enthralled by this field for so long. In this case the developers were under the belief that they were going to be laid off after Christmas, so rather than take action they decided it wasn't their problem and went on with their lives. They did ask the hosting provider for recommendations of what to do next, but never followed up on them.

A month later they got informed they were not being laid off, and instead were going to be transferred to a different department. With the knowledge that this was suddenly their problem again they decided to actually look at the hosted system and found the modified code.


Step 3. Wrapping it up

So knowing this and comparing notes with the attorney we brought them in for an interview.

The first round we simply asked questions to see what they would say, who would admit what and possibly who could keep their jobs. When we finished talking to all the developers, all of whom pretended to know nothing of the earlier notification, we documented their 'facts' and thanked them.

Then we asked them back in and, one fact at a time, showed them what we knew. Suddenly memories returned, apologies were given and the chronology of events was established. As it turns out the developers never notified management of the issue until they knew they were going to remain employed; until then they just sat on it.

Needless to say, they no longer had that transfer option open as they were summarily terminated.

So in this case a breach that should have only lasted 4 hours at most (time of login notice by SOC to time of remediation) lasted 30 days of Christmas shopping because the developers of the eCommerce site committed the crime of silence for purely human reasons.


Windows, Now with built in anti forensics!

Hello Reader,
             If you've been using a tool to parse external storage devices that relies on USB, USBStor, WPDBUSENUM or STORAGE as its primary key for finding all external devices, you might be being tricked by Windows. Windows has been doing something new (to me at least) that I first observed in the Suncoast v Peter Scoppa et al case (Case No. 4:13-cv-03125) back in 2015, where Windows on its own and without user request is removing unused device entries from the registry on a regular basis, driven by Task Scheduler.

This behavior, which I've observed in my case work, started in Windows 8.1 and I've confirmed it in Windows 10. A PDF I found that references this, found here, states the author has seen it in Windows 7, but I can't confirm that behavior. The behavior is initiated from the Plug and Play scheduled task named 'Plug and Play Cleanup' as seen in the following screenshot:



I've found very few people talking about this online and even fewer DFIR people who seem to be aware of it; I know we are going to add it to the SANS Windows forensics course. According to this post on MSDN the scheduled task will remove, from the most common device storage registry keys, all devices that haven't been plugged in for 30 days. When this removal happens it is logged, like all other PnP installs and uninstalls, in setupapi.dev.log, and here is an example of such an entry:

">>>  [Device and Driver Disk Cleanup Handler]
>>>  Section start 2017/04/08 18:54:37.650
      cmd: taskhostw.exe
     set: Searching for not-recently detected devices that may be removed from the system.
     set: Devices will be removed during this pass.
     set: Device STORAGE\VOLUME\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#07075B8D9E409826&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} will be removed.
     set: Device STORAGE\VOLUME\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#07075B8D9E409826&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} was removed.
     set: Device SWD\WPDBUSENUM\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#07075B8D9E409826&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} will be removed.
     set: Device SWD\WPDBUSENUM\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#07075B8D9E409826&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT13 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT13 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT14 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT14 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT15 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT15 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT16 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT16 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT17 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT17 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT18 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT18 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT19 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT19 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT20 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT20 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT21 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT21 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT22 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT22 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT23 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT23 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT24 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT24 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT25 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT25 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT26 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT26 was removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT27 will be removed.
     set: Device STORAGE\VOLUMESNAPSHOT\HARDDISKVOLUMESNAPSHOT27 was removed.
     set: Device USB\VID_13FE&PID_5200\07075B8D9E409826 will be removed.
     set: Device USB\VID_13FE&PID_5200\07075B8D9E409826 was removed.
     set: Device STORAGE\VOLUME\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#070A6C62772BB880&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} will be removed.
     set: Device STORAGE\VOLUME\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#070A6C62772BB880&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} was removed.
     set: Device USBSTOR\DISK&VEN_&PROD_&REV_PMAP\070A6C62772BB880&0 will be removed.
     set: Device USBSTOR\DISK&VEN_&PROD_&REV_PMAP\070A6C62772BB880&0 was removed.
     set: Device USB\VID_13FE&PID_5500\070A6C62772BB880 will be removed.
     set: Device USB\VID_13FE&PID_5500\070A6C62772BB880 was removed.
     set: Device SWD\WPDBUSENUM\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#070A6C62772BB880&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} will be removed.
     set: Device SWD\WPDBUSENUM\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#070A6C62772BB880&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B} was removed.
     set: Device USBSTOR\DISK&VEN_&PROD_&REV_PMAP\07075B8D9E409826&0 will be removed.
     set: Device USBSTOR\DISK&VEN_&PROD_&REV_PMAP\07075B8D9E409826&0 was removed.
     set: Devices removed: 23
     set: Searching for unused drivers that may be removed from the system.
     set: Drivers will be removed during this pass.
     set: Recovery Timestamp: 11/11/2016 19:51:27:0391.
     set: Driver packages removed: 0
     set: Total size on disk: 0
<<<  Section end 2017/04/08 18:54:41.415
<<<  [Exit status: SUCCESS]"

Followed by one of these entries for each device identified:
>>>  [Delete Device - STORAGE\VOLUME\_??_USBSTOR#DISK&VEN_&PROD_&REV_PMAP#07075B8D9E409826&0#{53F56307-B6BF-11D0-94F2-00A0C91EFB8B}]
>>>  Section start 2017/04/08 18:54:37.666
      cmd: taskhostw.exe
<<<  Section end 2017/04/08 18:54:37.704
<<<  [Exit status: SUCCESS]
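If you need to sweep a setupapi.dev.log for these cleanup events, here is a minimal Python sketch (assuming the section header and "was removed" wording shown above) that lists every device the Disk Cleanup Handler reported removing:

import io
import re

def removed_devices(log_path):
    # Collect device IDs from 'set: Device ... was removed.' lines inside
    # [Device and Driver Disk Cleanup Handler] sections.
    devices = []
    in_cleanup_section = False
    with io.open(log_path, "r", encoding="utf-8", errors="replace") as log:
        for line in log:
            if "[Device and Driver Disk Cleanup Handler]" in line:
                in_cleanup_section = True
            elif in_cleanup_section and "Section end" in line:
                in_cleanup_section = False
            elif in_cleanup_section:
                match = re.search(r"set: Device (.+) was removed\.", line)
                if match:
                    devices.append(match.group(1))
    return devices

if __name__ == "__main__":
    for device in removed_devices("setupapi.dev.log"):
        print(device)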

Now, setupapi.dev.log isn't the only place a record of these devices will remain. In my testing you will also find them in the following registry keys:
System\Setup\Upgrade\PnP\CurrentControlSet\Control\DeviceMigration\Classes\
System\Setup\Upgrade\PnP\CurrentControlSet\Control\DeviceMigration\Devices\SWD\WPDBUSENUM
System\Setup\Upgrade\PnP\CurrentControlSet\Control\DeviceMigration\Devices\USBSTOR
System\MountedDevices\
NTUSER.DAT\Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\

Note that these removals do not affect the shell item artifacts (LNK files, jump lists, shellbags) that point to files accessed from these devices, just the common registry entries that record the devices' existence.

So why is this important? If you are asked to review external device usage on a Windows 8.1 or newer system, you will have to take additional steps to ensure that you account for any device that hasn't been plugged in for 30 days. In my testing, both the TZWorks USP tool and the Woanware USBDeviceForensics tool will miss these devices in their reports.
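As one of those additional steps, here is a minimal sketch, assuming a live Windows system and Python 3's built-in winreg module, that lists whatever device entries remain under two of the DeviceMigration keys noted above (the key paths are the ones from this post and may vary between builds):

import winreg

# Key paths listed earlier in this post; they may vary between Windows builds.
MIGRATION_KEYS = [
    r"SYSTEM\Setup\Upgrade\PnP\CurrentControlSet\Control\DeviceMigration\Devices\USBSTOR",
    r"SYSTEM\Setup\Upgrade\PnP\CurrentControlSet\Control\DeviceMigration\Devices\SWD\WPDBUSENUM",
]

def subkeys(path):
    # Yield the subkey names under HKLM\<path>; yield nothing if the key is absent.
    try:
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
    except OSError:
        return
    try:
        index = 0
        while True:
            try:
                yield winreg.EnumKey(key, index)
            except OSError:  # no more subkeys
                break
            index += 1
    finally:
        winreg.CloseKey(key)

for path in MIGRATION_KEYS:
    print(path)
    for name in subkeys(path):
        print("  " + name)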

So make sure to check! The cleanup task is on by default, and there could be a lot of devices you would otherwise miss.

Forensic Lunch with Paul Shomo, Matt Bromiley, Phil Hagen, Lee Whitfield and David Cowen

Hello Reader,
It's been a while since I've cross-posted that the videocast/podcast went up. We had a pretty great Forensic Lunch with lots of details about programs that are relevant to everyone from academic forensics students to serious artifact hunters. Here are the show notes:


Paul Shomo comes on to talk about Guidance Software's new Forensic Artifact Research Program where you can get $5,000 USD just for research you are already doing! Find out more here: https://bugcrowd.com/guidancesoftware?preview=114da7695ff86ae70ec01aaf2c6878b0&utm_campaign=9617-Forensic_artifact-20170426&utm_medium=Email&utm_source=Eloqua

Phil Hagen introduced the new SANS Network Forensics poster to be released later this month

Matt Bromiley is talking about the Ken Johnson Scholarship set up by SANS and KPMG; you can learn more and apply here: https://digital-forensics.sans.org/blog/2017/03/03/ken-johnson-dfir-scholarship

Phil, Matt, Lee and I talked about the DFIR Summit

Lee Whitfield and I talked about the 4Cast Awards; voting is open here: https://forensic4cast.com/forensic-4cast-awards/



You can watch the lunch here: https://www.youtube.com/watch?v=hyRNh78GY2M
You can listen to the podcast here: http://forensiclunch.libsyn.com/forensic-lunch-42817
Or you can subscribe to it on iTunes, Stitcher, TuneIn, and any other quality podcast provider. 

Contents in sparse mirror may be smaller than they appear

By Matthew Seyer

As many of you know, David Cowen and I are huge fans of file system journals! That love also extends to the change journals provided by operating systems, such as FSEvents and the $UsnJrnl:$J. We have spent much of our dev time writing tools to parse these journals, so needless to say we have lots of experience with file system and change journals. Here is some USN Journal logic for you.

USN Journal Logic

First off, it is important to know that the USN Journal file is a sparse file. MSDN explains what a sparse file is: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365564(v=vs.85).aspx. The USN Journal (referred to as $J from here on out) is created with a maximum size (the area of allocated data) and an allocation delta (the size in memory that stores records before they are committed to the $J on disk). This is described here: https://technet.microsoft.com/en-us/library/cc788042(v=ws.11).aspx.
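If you want to see those two values on a live volume, the built-in fsutil utility reports them along with the journal ID and USN range; for example (the drive letter is just a placeholder):

fsutil usn queryjournal C: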

The issue is that many forensic tools cannot export a file as a sparse file. Even if they could, only a few file systems support sparse files, and I don't even know if a sparse file on NTFS is the same as a sparse file on OSX. This leads to a common problem: the forensic tool sees the $J as larger than it really is:





While this file is 20,380,061,592 bytes in size, the allocated portion of records is much smaller. Most forensic tools will export the entire file with the unallocated data as 0x00, which makes sense when you look at the MSDN sparse file section (link above). When we extract this file with FTK Imager, we can verify with the Windows command `fsutil sparse` that the exported file is not a sparse file (https://technet.microsoft.com/en-us/library/cc788025(v=ws.11).aspx):
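For reference, the check is a one-liner (the path is a placeholder for wherever you exported the file); on the exported copy it reports that the sparse flag is not set:

fsutil sparse queryflag C:\cases\exported_$J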


Trimming the $J

Once it's exported, what's a good way to find the start of the records? I like to use 010 Editor. I scroll towards the end of the file where there are still empty blocks (all 0x00s), then I search for 0x02, since I know I am looking for a version 2 USN record:



Now if I want to export just the record area, I can start at the beginning of this found record, select to the end of the file, and save the selection as a new file: 


The resulting file is 37,687,192 bytes in size and contains just the record portion of the file.



This is significantly smaller in size! Now, how do we go about this programmatically?

Automation

While other sparse files can have interspersed data, the $J sparse file keeps all of its data at the end of the file. This works because you can associate the Update Sequence Number in each record with its offset in the file! If you want to look at the structure of a USN record, here it is: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365722(v=vs.85).aspx. I will note that I would go about this two different ways: one method for a file that has been extracted by another tool, and a different method for a file extracted using the TSK lib. For now, we will just look at the first scenario.

Because the records are located in the last blocks of the file, I start from the end of the file and work backwards to find the first record, then write out just the records portion of the file. This saves a lot of time because you are not searching through potentially many gigabytes of zeros. You can find the code at https://github.com/devgc/UsnTrimmer. I have commented the code so that it is easy to understand what is happening.
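The UsnTrimmer repository has the full, commented implementation. As a rough illustration of the same backwards-scan idea, here is a minimal Python 3 sketch (not the UsnTrimmer code; it assumes the record data runs all the way to the end of the exported file):

import sys

CHUNK_SIZE = 1024 * 1024  # arbitrary 1 MiB read size for this sketch

def trim_usn_journal(source_path, output_path):
    # Copy only the record area (the non-zero tail) of an exported $J.
    with open(source_path, 'rb') as source:
        source.seek(0, 2)  # jump to the end of the file
        offset = source.tell()
        boundary = 0
        # Walk backwards in chunks until we hit the sparse 0x00 padding.
        while offset > 0:
            read_at = max(0, offset - CHUNK_SIZE)
            source.seek(read_at)
            chunk = source.read(offset - read_at)
            if chunk.count(b'\x00') == len(chunk):
                # This whole chunk is padding, so the records begin at
                # (or shortly after) the end of this chunk.
                boundary = offset
                break
            offset = read_at
        # A real parser would now align on the first valid version 2 USN
        # record header; this sketch just copies from the chunk boundary
        # onward, which may include a few leading zero bytes.
        source.seek(boundary)
        with open(output_path, 'wb') as output:
            while True:
                data = source.read(CHUNK_SIZE)
                if not data:
                    break
                output.write(data)

if __name__ == '__main__':
    trim_usn_journal(sys.argv[1], sys.argv[2])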

Now let's use the tool:

We see that the usn.trim file is the same as the one we trimmed manually, but let's check the hashes to make sure we get the same results as the manual extract:


So far I have verified this against a $J extracted from a SANS 408 image and some local $J files. But of course, make sure you use multiple techniques to verify; this was quick proof-of-concept code.

Questions? Ask them in the comments below or send me a tweet at @forensic_matt

National Collegiate Cyber Defense Competition Red Team Debrief 2017

Hello Reader,
I've been busy lately, so busy I didn't get around to posting this year's red team debrief from the National CCDC. Having just left Blackhat / BSides LV / Defcon and run our first Defcon DFIR CTF, I thought it was important to get these up and talk about the lessons learned.

The Debrief

First of all, for those of you coming just to get the presentation, it's here:
https://www.dropbox.com/s/fy23c7wi35qe81b/NCCDCRedTeamDebrief2017.pptx?dl=0

For those of you who have no idea what any of this means, let me take a step back.

What is CCDC?


The National Collegiate Cyber Defense Competition (CCDC) is a now 12-year-old competition in which colleges around the United States form student teams to defend networks. CCDC is different from other competitions involving network security in that it focuses strictly on defense. Students who play are put in charge of a working network that they must defend; the only offensive activity in the competition comes from a centralized red team.

The kind of enterprise network students take charge of changes each year. Past years' business scenarios have included:

  • Private Prison Operator
  • Electric Utility
  • Web hosting
  • Game Developer
  • Pharma
  • Defense Contractor
  • and more!
The idea is that the last IT team has been fired and the student team is coming in to keep the network running and defend it. While the students are working on making sure their systems are functioning, they also have to watch for, respond to, and defend against the competition red team. 

Scoring happens a couple ways. 

Students get points for:
  • Keeping scored services running (websites, ecommerce sites, ssh access, email, etc..)
  • Completing business requests such as policy creation, password audits and disaster recovery plans
  • Presenting their work to the CEO of the fake company
  • Responding to customers 


Students lose points for:

  • Red team access to user or administrative credentials
  • Red team access to PII data
  • Services not responding to scoring checks aka services being down
  • SLA violations kick in if the service stays down for a period of time
There are now 160 universities competing in 10 regions across the United States. If a student team wins their region they make it to nationals, where the top 10 teams in the country compete for some pretty amazing prizes, including on-the-spot job offers from Raytheon. 


If you are a student or a professor who would like to know more about competing you can go here: http://nccdc.org/index.php/competition/competitors/rules

What is the National CCDC Red Team?

The National CCDC Red Team is a group of volunteers who work to bring custom malware, C2, exfiltration, and persistence strategies to bear each year to give the students the best real-world threat experience. I'm the captain of the red team and have been for the last 10 years.



How do I get on it? 

When the call for volunteers goes out send a resume to volunteer@nccdc.org.

Be advised our threshold for acceptance is very high and we look for the following:
- Active projects on github or otherwise to show your experience
- Real experience in developing, maintaining and layering persistence
- Custom malware kits that are unpublished to bring to bear

We don't care about certs, years of experience, or who you work for. We need people who can not only get in (the easy part) but also stay in over the two-day competition while an aggressive group of defenders seeks to keep you out. 

FSEventsParser 3.1 Released

By Nicole Ibrahim

G-C Partners' FSEventsParser Python script version 3.1 has been released. Version 3.1 now supports parsing macOS High Sierra FSEvents.

You can get the updated script here: https://github.com/dlcowen/FSEventsParser 

Prior versions of the script do not support High Sierra parsing, so it's important to upgrade to the current version of FSEventsParser.

Other recent updates include:

  • Better handling of carved gzip files has been added. Invalid record entries in corrupted gzips are now being excluded from the output reports.
  • Even more dates are being found using the names of system and application logs within each fsevent file. The dates are stored in the column 'approx_dates(plus_minus_one_day)', which indicates the approximate date or date range that the event occurred, plus or minus one day.
  • The script now reads a json file that contains custom SQLite queries to filter and export targeted reports from the database during parsing.

macOS High Sierra 10.13 and FSEvents

With the release of High Sierra, updates to the FSEvents API resulted in the following changes:
  • Magic Header: In macOS versions prior to 10.13, the magic header within a decompressed FSEvents log was '1SLD'. Beginning with 10.13, the magic header is now '2SLD' (a quick format check sketch follows this list).
  • ItemCloned Flag: The ItemCloned flag was introduced with macOS 10.13.  When set, it indicates that the file system object at the specific path supplied in the event is a clone or was cloned. 
  •  File System Node ID: Beginning with 10.13, FSEvents records now contain a File System Node ID. 
    • e.g. If FSEvents were from an HFS+ formatted volume, this value would represent the Catalog Node ID.
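As a quick way to tell which format a given file uses, here is a minimal Python sketch based on the magic values above (the file name is a placeholder, and the file is assumed to already be decompressed):

def fsevents_format(path):
    # Read the 4-byte magic header and map it to the format described above.
    with open(path, 'rb') as f:
        magic = f.read(4)
    if magic == b'1SLD':
        return 'pre-10.13 format'
    if magic == b'2SLD':
        return '10.13+ format'
    return 'unknown magic: {0!r}'.format(magic)

print(fsevents_format('0000000000178c52'))  # placeholder fsevents file name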

FSEventsParser Database Report Views

Within the SQLite database, report views have been added for common artifacts. The report views are defined in the 'report_queries.json' file. They include:

  • Downloads Activity
  • Mount Activity
  • Browser Activity
  • User Profile Activity
  • Dropbox Activity
  • Email Attachments Activity
  • and more..
To access the report views, open the SQLite database generated by running the script in your SQLite viewer of choice and expand "Views".




FSEventsParser Custom Reports

The FSEventsParser script now exports custom report views from the database during processing to individual TSV files.


The custom report views are defined in the file 'report_queries.json' which is also available on GitHub.

Users can modify the queries or add new ones to the json file using a text editor. Two examples are shown below: TrashActivity and MountActivity.

To add new queries to the json processing list, follow the json syntax shown below. Define the report views within the 'processing_list' array. To add a new item to the array, define:
1) 'report_name': The report/view name.
2) 'query': The SQLite query to be run.

Notes:

  • The report name must be unique and must match the view name in the SQLite query. e.g.
    • 'report_name': 'TrashActivity'
    • 'query':'CREATE VIEW TrashActivity AS ....'

  • The query follows standard SQLite syntax, must be valid, and is stored in the json file as a single-line string value.
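Putting those notes together, the structure of report_queries.json looks roughly like this (the CREATE VIEW bodies are elided here just as in the note above; the full queries ship with the file on GitHub):

{
    "processing_list": [
        {
            "report_name": "TrashActivity",
            "query": "CREATE VIEW TrashActivity AS ..."
        },
        {
            "report_name": "MountActivity",
            "query": "CREATE VIEW MountActivity AS ..."
        }
    ]
}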



FSEventsParser Usage

All options are required when running the script. 

==========================================================================
FSEParser v 3.1 -- provided by G-C Partners, LLC
==========================================================================

Usage: FSEParser_V3.1.py -c CASENAME -q REPORT_QUERIES -s SOURCEDIR -o OUTDIR

Options:
-h, --help show this help message and exit
-c CASENAME The name of the current session, used for naming standards
-q REPORTQUERIES The location of the report_queries.json file containing custom report
queries to generate targeted reports
-s SOURCEDIR The source directory containing fsevent files to be parsed
-o OUTDIR The destination directory used to store parsed reports

Below is an example of running the script.
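For instance, a hypothetical invocation using the options above (the case name and paths are placeholders) would look like this:

python FSEParser_V3.1.py -c MyCase -q report_queries.json -s /evidence/fsevents -o /cases/MyCase/reports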



For more information about FSEvents and how you can use them in your investigation visit http://nicoleibrahim.com/apple-fsevents-forensics/.

If you have any comments or questions, please feel free to leave them below.

2018 Updates and Teaching SANS Windows Forensics FOR500 in Singapore

Hello Reader,
I know the blog has been quiet, but if you didn't know, the YouTube channel has been active; you can find it here: http://www.youtube.com/learnforensics. For those who listen to the podcast, I'm sorry I haven't gotten it up to date with the videos of the Forensic Lunch; I'll see about getting that done this month.

Speaking of this month, we will have two Forensic Lunches:

2/16/18 - Ashley Hernandez and Joe Sylve from Blackbag talking APFS and a whole bunch of other new stuff
2/23/18  - Guests being Confirmed

Broadcasts go live on the Youtube channel, http://www.youtube.com/learnforensics, and subscribing gets you notifications when we go live.

My goal is still two Forensic Lunches a month, with a mix of blog posts and nighttime live Forensic Test Kitchens throughout the month to keep me engaged and pushing me to share. The team at G-C Partners and I are working on some really cool stuff to share this year, so please stay tuned.

Speaking of things that keep pushing me to answer new questions and share, I'm having a bit of a global journey this year in my SANS teaching schedule, with my first public class starting March 19, 2018 in Singapore.

https://www.sans.org/event/secure-singapore-2018/instructors/david-cowen

Also, for those in the Singapore area, I will be giving a public SANS At Night talk about Anti-Anti-Forensics, showing our file system journaling forensic research for the first time in Asia.

I don't know if I'll ever have the time to do this much travel again, so if you are in the Asia Pacific region, I'll be in Singapore in March and Australia in June:

https://www.sans.org/event/cyber-defence-canberra-2018/course/windows-forensic-analysis


If you really want to go in depth with Windows forensics, I would encourage you to sign up and spend a week with me deep diving into the interactions, limitations, and workarounds for all the artifacts you know and love. In addition, I love it when students ask me questions I've never considered before and we find new artifacts or meanings together in class! Also, this is the only venue where I walk through how to use our Triforce tools.

For those in Europe or close by, I'll be in London and Amsterdam this year:
Amsterdam in May
https://www.sans.org/event/amsterdam-may-2018/course/windows-forensic-analysis

London in September
https://www.sans.org/event/london-september-2018/course/windows-forensic-analysis

Otherwise, thanks for reading, and let's look forward to all the new research we can complete in 2018.


National CCDC 2018 Redteam Debrief

Hello Reader,
       Another year of CCDC is over and another winner has been crowned.

For those of you just here for the presentation, here are this year's debrief slides:
https://www.dropbox.com/s/o2fkwbjsefq1ixk/NCCDC2018.pptx?dl=0

For those of you looking for more:

This year at Nationals we had a lot of success as a red team. Going from zero knowledge (except the IPs in scope) to plain text credentials in 3 minutes ensured that our initial load of persistence was successful. However, as in all great pursuits, it was not perfect. This year we attempted different delivery and propagation techniques, and next time we need to validate that our malware was successfully implanted and that all systems are talking to us.

Speaking of talking to us, this year the teams got better at egress filtering and locking down incoming services. This means we have to get better at backdooring existing services and work on techniques that don't require callbacks that egress filtering will stop.

Lots to plan, lots to do for next year. 

Daily Blog #382: Sunday Funday 6/3/18

Hello Reader,
It's been a while since we've talked. It's been a couple of years since I completed the Zeltser challenge of writing a blog post a day for a year, and in that time the blog has gotten pretty quiet, but our work hasn't stopped. What stopped was the requirement to keep sharing and posting; instead I fell back into my old habit of wanting the perfect example and infrastructure set up before I post. So to correct this, and to force myself to put our research out there, I'm resuming the Zeltser challenge, and what better way to do that than with a Sunday Funday forensic contest.

You'll notice that I'm now going to let these contests run for a week rather than a day. With Phil Moore's This Week in Forensics posts I don't see a need for Saturday Reading posts on my blog anymore, so instead there will be Sunday Funday contests with Solution Saturdays, where the winner will be posted. Hopefully this will get more people playing.

The Prize:
$100 Amazon Giftcard

The Rules:

  1. You must post your answer before Friday 6/8/18 7PM CST (GMT -5)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed; please email them to dcowen@g-cpartners.com. Please state in your email whether you would like to be anonymous if you win.
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post.



The Challenge:
One of the things I've noticed when people talk about psexec execution is the prefetch file created when psexecsvc runs. There are many more artifacts that we've seen in our research, so now it's time for you to show all of us what you know. 

List out with a description:
1. Every location where psexecsvc would be logged as executed on Windows 10 with the most current update
2. Every location where psexecsvc would be logged as existing on Windows 10 with the most current update
3. Every location that would be created and/or modified as a result of psexecsvc executing 