
SQLite Deleted Data Parser Update - Leave no "Leaf" unturned

One of the things I love about open source is that people have the ability to update and share code. Adrian Long, aka @Cheeky4n6Monkey, did just that. Based upon some research, he located additional deleted data that can be harvested from re-purposed SQLite pages - specifically the Leaf Table B-Tree page type. He updated my code on GitHub and BAM!! just like that, the SQLite Deleted Data parser now recovers this information.

He has detailed all the specifics and technical goodies in a blog post, so I won't go into detail here. It involved a lot of work and I would like to extend a huge thank you to Adrian for taking the time to update the code and for sharing his research.

You can download the most recent version on my GitHub page. I've also updated the command line and GUI versions to support the changes.

Searchhs.dat and the Bing Bar



I recently worked a case where I located some relevant information in a file called "searchhs.dat". This file was located in the user's directory under "\AppData\Local\Microsoft\BingBar\Apps\Search_6f21d9007fa34bc78d94309126de58f5\VersionIndependent" and "\AppData\LocalLow\Microsoft\Search Enhancement Pack\Search Box Extension\" (note - this was on Windows 7).

The Bing Bar is a free add-on  from Microsoft that integrates with Internet Explorer.  For more information, read here.  Users can search directly from the Bing Bar and the search terms are stored in the searchhs.dat file mentioned above. 

 A quick Google led me to two programs that had the ability to parse this file: sep-history-viewer and ESPv2. sep-history-viewer displays the  record id, term length, search term, count and a time stamp of the last search.  At this time, ESPv2 only displays the search terms.[Edit - as of 7/18/12, ESPv2 now supports record id, term length, count and time stamp in addition to the search terms]

Now it was time to dig deeper and verify the results with the raw data.  I opened up the searchhs.dat file in HEX view and  saw the URL entries.  I did some quick research and was not able  to locate the file format specification for the file. The author of ESPv2  kindly responded to an email and gave me the repeating header and record id.

However, the information I was really interested in verifying was the date. I (begrudgingly) installed the Bing Bar, fired up IE and did some testing.  After running several searches and reviewing the file in HEX (using  X-Ways) I was able to determine the location and format of the last searched for time stamp along with a few other things:

Black:   Repeating header
Blue :    Record ID
Yellow: Count - appears to increase each time the search term is used/selected
Red:     UTC Date in 128 bit System Structure (decode works nicely to convert)
Purple:  URL length
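
If the red bytes are in fact a standard Windows SYSTEMTIME structure (eight little-endian 16-bit fields, 128 bits total), a minimal Python sketch like the one below would convert a carved 16-byte value. The sample bytes are hypothetical, not taken from a real searchhs.dat - treat this as an illustration of the decode, not a parser for the file.

import struct
from datetime import datetime

def decode_systemtime(raw):
    # SYSTEMTIME: year, month, day-of-week, day, hour, minute, second, millisecond,
    # stored as eight little-endian 16-bit integers (16 bytes / 128 bits total)
    year, month, _dow, day, hour, minute, second, ms = struct.unpack("<8H", raw)
    return datetime(year, month, day, hour, minute, second, ms * 1000)

# Hypothetical 16-byte value carved from a record
sample = struct.pack("<8H", 2012, 7, 3, 18, 14, 30, 12, 0)
print(decode_systemtime(sample))   # 2012-07-18 14:30:12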


I have a feeling I am probably missing some posts/blogs/articles that were made on this, but thought I would add to the collection just in case. Remember - do not take my word as the "be all to end all" and test, test, test!

Windows Backup and Restore


A recent investigation led me to a Windows Backup file. Windows 7, as well as Windows Vista, includes a utility that allows the user to back up and restore folders, files and system information. This is not the same as Volume Shadow Copies (VSCs), another method by which Windows backs up files. For information on how to examine VSCs, check out Harlan Carvey's book, or other blog posts here and here. Depending on the version of Windows, the backup can be stored on an external device, such as a USB drive, or over the network (Windows 7 Pro/Ultimate). My research was done with Windows 7 Home Premium and Ultimate.

Windows creates a backup with the following naming convention:
ComputerName\Backup File YYYY-MM-DD ######\Backup files ##.zip




Interestingly enough, if an end user looks at this backup through Windows, they will only see the top level folder:

 



Windows Backup creates multiple zip files containing the files/folders that were backed up. True, if you mount the zip files in your favorite all-in-one forensic tool you will have access to all these files in their glory. You can run keyword searches until you are giddy, and forensicate to your heart's content, BUT the dates in the zip files are the dates the backup was created, not the dates the files were originally created or modified. That being said, Windows Backup tracks these original dates, which may come in handy.

Windows Backup tracks the names of the folders, files and original dates in a file named GlobalCatalog.wbcat under ComputerName\Backup File YYYY-MM-DD ######\Catalogs. If you do not have access to the backup media, a local GlobalCatalog.wbcat file is created. I discuss this in more detail below.
 
Ideally, this file could be parsed for all of this information, with the results displayed in a nice format, CSV or otherwise.  I have been looking at this file in hex trying to figure out a way to accomplish this. So far, I have located the file names, folders and dates, but have not figured out how the records are tied together within the file.  Boooo…. If you know of any existing program or script that can parse the data, or know the file format, please let me know. If you are interested in seeing a sample of what I have located so far, contact me (arizona4n6 at gmail dot com) and I can send it to you.

As such, viewing the backup file natively through Windows Backup is the only method I have discovered to see the original dates for the files and folders. Step-by-step directions follow:
  •  Export the backup files from your image to an external device. If you prefer to mount the image, create a VHD using Vhdtool on a DD image and attach the VHD through the Disk Manager. Make sure it's a copy of your image, as Vhdtool will make changes to it. This should sound familiar if you have read Harlan's post on using Vhdtool to examine VSCs. I tried to mount the image using FTK Imager and the backup file was not seen by Windows Backup.
  •  Launch Windows Backup and Restore (Control Panel>System and Security>Backup Your Computer).
  •  Go to Restore>Select another backup to restore files from. It should auto-locate the Windows Backup.


  • Next, Search for *.*, and all the files will be listed or you can browse to a particular file if you please. By default, only the Date Modified is listed.  If you right click the title bar, you can select the Date Created as well. If you use the Browse function instead of Search, you will also have the option to see the backup date.



Now, instead of seeing all the same dates and times for the files contained within the zip files, you are presented with the original Date Created and Date Modified for each file. As I mentioned before, it would be soooooo nice to have this information parsed directly from the GlobalCatalog.wbcat file.


Windows Backup Registry Entries
When a Windows Backup is created, an entry is made or updated in the Software hive under the key \Microsoft\Windows\CurrentVersion\WindowsBackup\.

This key holds various sub keys with information regarding the backup including USB device information. This USB information may come in handy if you are also conducting link analysis/USB analysis and can be cross referenced with other registry keys.

Some of the information available with sample data :

Target Device

For a USB Device:

  PresentableName = E:\
  UniqueName = \\?\Volume{a2e6b4d4-e492-11e1-a39d-000c29448ee3}\
  Label = MYTHUMBDRIVE
  DeviceVendor  = SanDisk
  DeviceProduct  = Cruzer
  DeviceVersion  = 1.26
  DeviceSerial = 200605999207D70370EF         

 For a Network Share:


  PresentableName = \\COMPUTERNAME\Users\Public\Documents\backup\
  UniqueName = \\?\UNC\COMPUTERNAME\Users\Public\Documents\backup\


Status
  
  LastResultTime = Sun Aug 12 17:45:39 2012 (UTC)
  LastSuccess = Sun Aug 12 17:45:39 2012 (UTC)
  LastResultTarget = \\?\Volume{a2e6b4d4-e492-11e1-a39d-000c29448ee3}\
  LastResultTargetPresentableName  = E:\
  LastResultTargetLabel = MYTHUMBDRIVE


According to my testing, the LastResultTime and LastSuccess will be the same if the backup completed. If the backup did not complete or was cancelled, these times will be different, and the LastResultTime will contain the time of the attempted backup.
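
As a quick illustration, here is a minimal sketch (not a RegRipper plugin) that reads the Status subkey from an exported SOFTWARE hive with the python-registry library and applies the completed/cancelled logic described above. The value names come from the sample data shown earlier; how the timestamps are actually stored may vary, so the sketch only compares them for equality.

from Registry import Registry   # python-registry (pip install python-registry)

soft = Registry.Registry("SOFTWARE")   # SOFTWARE hive exported from the image
status = soft.open("Microsoft\\Windows\\CurrentVersion\\WindowsBackup\\Status")
values = {v.name(): v.value() for v in status.values()}

for name in ("LastResultTime", "LastSuccess", "LastResultTarget",
             "LastResultTargetPresentableName", "LastResultTargetLabel"):
    print(name, "=", values.get(name))

# Equal timestamps -> last backup completed; different -> attempted but not completed
if values.get("LastResultTime") == values.get("LastSuccess"):
    print("Last backup appears to have completed")
else:
    print("Last backup appears to have failed or been cancelled")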

I have created a RegRipper plugin and passed it along. It should be included in the next distro.
 
Other Artifacts
A Volume Shadow Copy is created before the backup.

Event log entries in \Windows\System32\winevt\Logs\Microsoft-Windows-WindowsBackup%4ActionCenter.evtx

Local GlobalCatalog files created:

    \System Volume Information\Windows Backup\Catalogs\GlobalCatalogCopy.wbcat

    \System Volume Information\Windows Backup\Catalogs\GlobalCatalog.wbcat

This local GlobalCatalog.wbcat file seems to contain not only entries for the last backup, but also for previous backups, as well as previous media used. This could be helpful if you need to locate/subpoena various devices that contain backups. Below are some results from running Strings across this file:

COMPUTERNAME\Backup Set 2012-08-11 213315\Backup Files 2012-08-11 213315\Backup files 1.zip
\\?\Volume{177d1d16-e2fc-11e1-914b-ec9a745b406c}\
SanDisk
Cruzer
1.26
200605999207D70370EF
COMPUTERNAME\Backup Set 2012-08-11 213315\Backup Files 2012-08-11 213315\Backup files 2.zip
Backup Set 2012-08-12 194644
COMPUTERNAME\Backup Set 2012-08-12 194644\Backup Files 2012-08-12 194644\Backup files 1.zip
\\?\Volume{45f45fcd-e269-11e1-a36e-ec9a745b406c}\
Kingston
DataTraveler SE9
PMAP
COMPUTERNAME\Backup Set 2012-08-12 194644\Backup Files 2012-08-12 194644\Backup files 2.zip
COMPUTERNAME\Backup Set 2012-08-12 194644\Backup Files 2012-08-12 203800\Backup files 1.zip
COMPUTERNAME\Backup Set 2012-08-12 194644\Backup Files 2012-08-12 203800\Backup files 2.zip

As I mentioned before, I am trying to figure out the GlobalCatalog file format, so if you know the file format, or any tools that can parse it, please let me know :-) 

Who's your Master? : MFT Parsers Reviewed

The Master File Table (MFT) contains the information related to folders and files on an NTFS system. Brian Carrier (2005) stated, “The Master File Table is the heart of NTFS because it contains the information about all files and directories” (p. 274). Many forensic tools, such as EnCase, FTK and X-Ways, parse the MFT to display the file and folder structure to the user.

During Incident Response, there could be hundreds if not thousands of computers to examine. A way to quickly review these systems for Indicators of Compromise (IOCs) is to grab the MFT file rather than take a full disk image. The MFT file is much smaller in size than a disk image and can be parsed to show existing as well as deleted files on a system.

During a case, I noted some anomalies with a tool that I use to accomplish this task, AnalyzeMFT. This led me to do some testing and verification of several MFT parsers - and I was a little surprised by the results. Foremost, I would like to say that I am appreciative of all the authors of these tools. My intent with this post is to draw attention to understanding the outputs of these tools so that the examiner can correctly interpret the results.

Many of the differences and issues arose due to the handling of deleted files. The documentation of one of the tools I tested, MFTDump, explains the issues with deleted files in the MFT:
"Since MFTDump only has access to the $MFT file, it is not possible to “chase‟ down deleted files through the $INDEX_ALLOC structures to determine if the file is an orphan. Instead, the tool uses the resident $FILE_NAME attribute to determine its parent folder, and follows the folder path to the root folder. In the case of deleted files, this information may or may not be accurate. To determine the exact status of a deleted file, you need to analyze the file system in a forensic tool."
Some of the tools did not notify the examiner that the file path associated with the deleted file may be incorrect  – which could lead to some false conclusions.

There are a lot of tools that parse the MFT. For this testing, I focused on tools that are free, command line and output the results into Bodyfile format. The reason I chose to do this is that when I parse the MFT, I am using it to create a timeline, usually in an automated fashion. The one exception to this was the tool MFTDump.  The output was a TSV file that I wrote a parser for that converted it into Bodyfile format.

There were four “things” that I was checking each tool for: File Size, Deleted Files, Deleted File Paths and Speed. These criteria may not be important to everyone, but I'll explain why they are important to me.
  1. File Size
    When looking for IOCs, file size can be used to distinguish a legitimate file from malware that has the same name. It can also be used in lieu of file hashes: instead of hashing every file on the computer, which can be time consuming, a known file's size can be compared against the MFT file sizes to flag suspect files (thanks to @rdormi for that idea) - a small sketch of this idea follows the list below.
  2. Deleted Files
    MFT records can contain deleted file information. Does the output show deleted files? In some cases the attacker’s tools and malware have been removed from the system, so being able to see deleted files is nice.
  3. Deleted File Paths
    Is the tool able to resolve and display any portion of the previous file path for the deleted file? Knowing the parent path helps give context to the file. For example, it may be located under a user account, or a suspicious location, like a temp folder.
  4. Speed
    If I am processing thousands of machines, I need a tool that will parse the MFT relatively quickly. 10 minutes per machine or 1 hour per machine can make a big difference.
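
Here is the small sketch of the size-check idea mentioned in item 1 above. It assumes you already have a lookup of known file names and expected sizes (built from a known-good system, for example) and a parsed MFT listing in CSV; the column names and sample sizes are hypothetical placeholders.

import csv

# Hypothetical lookup built from a known-good build: file name -> expected size in bytes
KNOWN_SIZES = {"svchost.exe": 27136, "lsass.exe": 22528}

def flag_size_mismatches(mft_csv):
    # Flag files whose MFT-reported size differs from the known-good size
    with open(mft_csv, "r") as f:
        for row in csv.DictReader(f):
            name = row["FileName"].lower()          # hypothetical column name
            size = int(row["FileSize"] or 0)        # hypothetical column name
            if name in KNOWN_SIZES and size != KNOWN_SIZES[name]:
                print("Suspect: %s (size %d, expected %d)" %
                      (row["FileName"], size, KNOWN_SIZES[name]))

# flag_size_mismatches("parsed_mft.csv")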

Findings

The tools I tested were AnalyzeMFT, log2timeline.pl, list-mft and MFTDump. Below is a summary of the findings. Further below, I explain the results in more detail, along with some sample data.

AnalyzeMFT
  1. Many files, both deleted and existing, show an incorrect file size of 0
  2. Deleted files were not designated as deleted in the output
  3. Deleted files were prepended with incorrect file paths
  4. Time to parse MFT: 11 minutes
 List-mft
  1.  File sizes were shown in the output
  2.  Deleted files were designated as deleted
  3.  No file paths were shown for deleted files
  4. Time to parse MFT: 1 hour, 49 minutes
Log2timeline.pl 
  1. No file sizes were shown in the output
  2. Deleted files were designated as deleted
  3. Deleted files were shown with correct file paths
  4. Time to parse MFT: 39 minutes
MFTDump 
  1.  File sizes were shown in the output
  2. Deleted files were designated as deleted
  3. Deleted files were enclosed  with ‘?’ to alert the examiner that file paths may be inaccurate
  4. Time to parse MFT: 7 minutes
Please note, I did not cross reference and verify every single file in the output. The observations made above were for the files that I reviewed.

What does this mean, or why are these results important?

No file size reported
The file size can help give context to a file. Having the file size can help determine if a file is suspect or not. If no file size is provided, this context is lost.

'0’ File size reported
The incorrect file size of ‘0’ can be misleading to an investigator. Take into consideration a RAM scraper output file. If an examiner is checking various systems and they see a file size of ‘0’, they might think the file is empty, when in fact, it could have thousands of credit card numbers written to it.

Files are not being reported/noted as deleted
Since there is no designation that the file is deleted, malware might appear to exist on a system, when in fact, it has been deleted. A suspect may have deleted a file and it is still showing as active in the output.

Deleted files are being associated with the wrong parent path
As noted above, due to issues with looking up the parent folder for deleted files, incorrect file paths were found to be prepended to deleted files. Even though a portion of the path may be correct, the prepended path could cause the examiner to draw an incorrect conclusion.

For example, many times a malware file will have a legitimate Windows system name, such as svchost.exe. What flags the file as suspicious is where it was/is located. If the parent path is reported incorrectly, a malicious file may be missed. Or, a file may be attributed to an incorrect user account because the path is listed incorrectly.

Conclusion

Based on my testing and criteria, MFTDump seems to be the best fit for my process. It contains the file sizes, and designates between an active file and a deleted file. In the event that it recovers a file path for a deleted file, it lets the examiner know that it might be inaccurate by making a notation in the output.  If any important files are found using any of these tools, it would be prudent for the examiner to verify with a full disk image.

Sample Test Data

Below, I show some examples from the output for each tool. Although I did some testing and verification, it is up to each examiner to test their tools – I accept no liability or responsibility for using these tools and relying on my results. For demonstrative purposes only. :)

I used FLS from the Sleuthkit and X-Ways to check a deleted file. I then compared how this deleted file was handled with the different tools. I also used Harlan Carvey’s tools (bodyfile.exe and parse.exe) to convert the bodyfile generated by the tool into TLN format for readability.

The deleted file I reviewed was “048002.jpg”.  The path was shown as C:/$OrphanFiles/Pornography/048002.jpg (deleted) in both FLS and X-Ways.

Each of the outputs were grepped for the file 048002.jpg, and the entries located are displayed below in TLN format. I omitted the "Type" (File), "Host" (Computer1) and "User" (blank) columns in order to better display the results.
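
For reference, here is a rough sketch of that bodyfile-to-TLN step - a stand-in for Harlan's bodyfile.exe/parse.exe, not a copy of them. It assumes the standard TSK 3.x bodyfile layout (MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, times in Unix epoch) and emits one TLN line (time|source|host|user|description) per unique timestamp.

import sys

def bodyfile_to_tln(line, host="Computer1"):
    # Bodyfile: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
    f = line.rstrip("\n").split("|")
    name, size = f[1], f[6]
    atime, mtime, ctime, crtime = (int(x) for x in f[7:11])

    # Group the MACB letters by shared timestamp
    times = {}
    for letter, ts in zip("AMCB", (atime, mtime, ctime, crtime)):
        times.setdefault(ts, []).append(letter)

    for ts, letters in sorted(times.items()):
        macb = "".join(l if l in letters else "." for l in "MACB")
        # TLN: time|source|host|user|description (time as Unix epoch)
        yield "%d|FILE|%s||%s [%s] %s" % (ts, host, macb, size, name)

if __name__ == "__main__":
    for line in open(sys.argv[1]):
        for tln in bodyfile_to_tln(line):
            print(tln)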

I have also included how long each process took. The system I used was Windows 7 with an Intel i7 and 16GB of RAM. The size of the MFT was about 1.8GB (which is much larger than most systems I process).
  
FLS Output
fls -m C: -f ntfs -r \\.\[Mounted Drive] >> C:\path\to\bodyfile

Date                 Description
2076-11-29 08:54:34  MA.B [4995] C:/$OrphanFiles/Pornography/048002.jpg (deleted)
2014-01-11 01:25:45  ..C. [4995] C:/$OrphanFiles/Pornography/048002.jpg (deleted)
2013-10-28 20:38:37  MACB [124] C:/$OrphanFiles/Pornography/048002.jpg ($FILE_NAME) (deleted)

FLS was used as the baseline for the test, and the output was verified with X-Ways. It shows the file as a deleted Orphan file, with a partially recovered directory listing of "/Pornography/048002.jpg". According to The Sleuth Kit documentation on orphan files:
"Orphan files are deleted files that still have file metadata in the file system, but that cannot be accessed from the root directory."
Fls took about 20 minutes to run across the mounted image.

AnalyzeMFT Output
analyzeMFT.py -f"C:\path\to\$MFT" -b "C:\path\to\output\bodyfile.txt" --bodyfull -p

Date                 Description
2013-10-28 20:38:37  MACB [0] /Users/SpeedRacer/AppData/Roaming/Scooter Software/Beyond Compare 3/BCState.xml/Pornography/048002.jpg

AnalyzeMFT showed 0 for the file size. It had no designation in the output to flag whether the file is deleted or active. Although it was able to recover the deleted file path "/Pornography/", it prepended the file path with a folder that currently exists on the system rather than identifying it as an Orphan file.

This makes it appear to the examiner that this is an active file, under the location "Users/SpeedRacer/AppData/Roaming/Scooter Software/Beyond Compare 3/BCState.xml/Pornography",  when in fact, it is a deleted Orphan file.

During my review of the outputs, I noticed quite a few files were showing an incorrect file size of '0', including active files. In a review of the open issues on GitHub, these issues appear to have been noted.

I also ran AnalyzeMFT with the default output, a csv file. In this output, the file did have a flag designating it as deleted, however, the bodyfile format does not.

Log2Timeline.pl Output
log2timeline -z local -f mft -o tln -w /path/to/bodyfile.txt 


Date                 Description
2014-01-11 01:25:45  FILE,-,-,[$SI ..C.] /Pornography/048002.jpg (deleted)|UTC|  inode:781789

The “old” version of log2timeline has an -f mft option that parses an MFT file into bodyfile format. The “new” version of log2timeline with Plaso does not have the option to parse the MFT separately (at least I couldn't find it). log2timeline.pl was run from a SIFT virtual machine. I gave the VM about 11GB of RAM and 6 CPUs. With this setup, it took about 39 minutes to parse the MFT.

No file size was provided in the log2timeline output for any files. The file is flagged as deleted, and includes the correct partial recovered path "/Pornography/". Out of all the MFT tools I tested, this one most accurately depicts the deleted file path. However, it's interesting to note that it did not include the FileName attribute.

list-mft Output
list-mft.py "C:\path\to\$MFT" >> "C:\path\to\output\bodyfile.txt"

Date                 Description
2014-01-11 01:25:45  ..C. [4995] \\$ORPHAN\048002.jpg (inactive)
2013-10-28 20:38:37  MACB [4995] \\$ORPHAN\048002.jpg (filename, inactive)

list-mft provided the file size, and a designation that the file was deleted (inactive). It also identified the file as an Orphan, however, it did not recover the partial path of /Pornography/. This may be important as the partial path can help provide context for the deleted file.

This program took the longest to run at 1 hour and 49 minutes. There is a -c, cache option that can be configured. This can be increased for better performance, however, I just used the default settings.

MFTDump Output
mftdump.exe "C:\path\to\$MFT" /o "C:\path\to\output\mftdump-output.txt"

Date                 Description
2076-11-29 08:54:34  MA.B [4995] ?\Users\SpeedRacer\AppData\Roaming\Scooter Software\Beyond Compare 3\BCState.xml\Pornography\048002.jpg? (DELETED)
2014-01-11 01:25:45  ..C. [4995] ?\Users\SpeedRacer\AppData\Roaming\Scooter Software\Beyond Compare 3\BCState.xml\Pornography\048002.jpg? (DELETED)
2013-10-28 20:38:37  MACB [4995] ?\Users\SpeedRacer\AppData\Roaming\Scooter Software\Beyond Compare 3\BCState.xml\Pornography\048002.jpg? (DELETED)(FILENAME)

The file sizes are displayed, and a designation is included showing that the file has been deleted. Deleted files were enclosed with '?' to alert the examiner that file paths may be incorrect. This tool ran the fastest, clocking in at 7 minutes for a 1.8 GB MFT file. The output from this tool is a TSV file; I wrote a python script to parse it into bodyfile format.
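
That script is essentially a column shuffle; the minimal sketch below shows the idea, not the published script. The TSV column names and the timestamp format ("YYYY-MM-DD HH:MM:SS") used here are hypothetical placeholders - adjust them to match MFTDump's actual header row.

import csv
import calendar
from datetime import datetime

def to_epoch(ts):
    # Hypothetical timestamp format; adjust to whatever MFTDump actually emits
    return calendar.timegm(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").timetuple())

def tsv_to_bodyfile(tsv_path, body_path):
    # Bodyfile: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
    with open(tsv_path, "r") as fin, open(body_path, "w") as fout:
        for row in csv.DictReader(fin, delimiter="\t"):
            name = row["FilePath"]                         # hypothetical column names
            if row.get("Deleted", "").lower() in ("yes", "true", "deleted"):
                name = "?%s? (DELETED)" % name             # keep MFTDump's '?' convention
            fields = ["0", name, row.get("RecordNumber", "0"), "", "0", "0",
                      row.get("FileSize", "0"),
                      str(to_epoch(row["AccessTime"])),
                      str(to_epoch(row["ModifyTime"])),
                      str(to_epoch(row["MFTChangeTime"])),
                      str(to_epoch(row["CreateTime"]))]
            fout.write("|".join(fields) + "\n")

# tsv_to_bodyfile("mftdump-output.txt", "bodyfile.txt")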

To keep this post relatively short, I just demonstrated the output for one file; however, I used the same process on several files and the results were consistent. Whatever tool an examiner chooses to use will depend on their particular needs. For example, an examiner may not be interested in file sizes, and in that case they may choose to use log2timeline. However, if speed is an issue, MFTDump might make more sense. As long as the examiner knows what information the output is portraying, and can verify the results independently, any of these tools can get the job done.

Carrier, B. (2005). File System Forensic Analysis. Upper Saddle River, NJ: Pearson Education

More on Trust Records, Macros and Security, Oh My!


There is a registry key that keeps track of which documents a user has enabled editing and macros for from untrusted locations. This happens when the user clicks the "Enable Editing" button on the Microsoft Office Protected View warning:



These can include documents that are downloaded from the Internet, or sent via email. This registry key is affectionately known as "Trust Records".  When a user clicks this warning, an entry is made under HKCU\Software\Microsoft\Office\15\Word\Security\Trusted Documents\TrustRecords that contains the file path to the document (the version number may vary - I've tested 14 and 15).
 
This is by no means a new artifact. There are several blog posts that discuss this artifact, including one by Andrew Case and Harlan Carvey - however, I believe I may have some new light to shed on this artifact. Well, I couldn't find the information by using Google, so it's new to me.

What I found was that an entry can exist under this key, but that does not necessarily mean that macros were enabled. In order to determine if macros were enabled, a flag/value needs to be checked in the binary data. Additionally, the Trust Center macro settings may need to be checked as well. The user can turn off this security prompt in the Trust Center and trust all documents by default. If this happens, no entry will be made under the Trust Records key because all documents are trusted.

Why all the fuss over macros? Who uses them anyways??? Take, for example, the latest ransomware variant, Locky. Locky utilizes macros in a Word document to pull down its payload. After a company gets hit with something like this, they may want to know "How did this happen?" and "How can we prevent it in the future?"

The Trusted Records registry key can help answer these questions. Did the user take affirmative steps by enabling editing in the document? Did they take another step and enable the macros? If so, the company may need to spend more time training employees on better security practices. Was the system setup to trust all documents by default? If so, they may need to reconfigure their GPO.

The Trusted Records key can also contain references to artifacts that may no longer exist on the system, add context to your timeline, and demonstrate that a user explicitly interacted with the file.

Trusted Records Registry Key

In Word 2010 (v.14)  and 2013 (v.15) there are actually two yellow banners presented to the user when macros are in a Word document. The first asks the user to "Enable Editing":


After this button is clicked, an entry is created in the registry with the document name, path and time stamp. According to some testing that Harlan did (and the testing I did confirmed this as well), the time stamp is the create date of the document, NOT the time the user enabled editing:


The output from the Regripper plugin trustrecords  is displayed below:

trustrecords v.20120716
Word
LastPurgeTime = Thu Oct  8 20:38:08 1970
Sat Feb 20 14:25:53 2016 -> %USERPROFILE%/Downloads/test-document.doc

At this point in time, I have NOT clicked the second button to enable macros, yet an entry was made under this key.

After I enable editing, a second banner pops up asking me if I would like to "Enable Content", which will enable the macros:


After I clicked this (based on my testing), the last four bytes in the binary data change to FF FF FF 7F:



This means that in order to determine if the user enabled macros, these last four bytes need to be checked.
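
A minimal sketch of that check, assuming the user's NTUSER.DAT has been exported and the python-registry library is installed; the Office version number in the key path varies, and the FF FF FF 7F comparison is based on the testing described above.

from Registry import Registry   # python-registry

ntuser = Registry.Registry("NTUSER.DAT")   # exported from the user profile

key_path = ("Software\\Microsoft\\Office\\15.0\\Word\\Security\\"
            "Trusted Documents\\TrustRecords")   # version number may vary (14.0, 15.0, ...)

for value in ntuser.open(key_path).values():
    data = value.value()   # REG_BINARY blob stored for each trusted document
    enabled = data[-4:] == b"\xff\xff\xff\x7f"
    print("%s -> macros enabled: %s" % (value.name(), enabled))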

Security Registry Key

The user can completely bypass this yellow banner by disabling the macro notifications. This means that an entry will not be recorded under the Trusted Documents key even though the user ran a malicious document containing macros downloaded from the Internet. These settings are controlled by the Trust Center under Options > Trust Center > Macro Settings. There are four security levels to choose from:



These settings are stored under the registry key HKCU\Software\Microsoft\Office\15.0\Word\Security\. Based on my testing, if the user has not altered the default settings, this key does not contain the value "VBAWarnings". However, if changes are made to the default settings, an entry for VBAWarnings will appear, and will have a DWORD value:


Based on my testing with Word 2015, these are the Macro Settings and corresponding values for the registry flag:

  • Disable all macros without notification : 4
  • Disable all macros with notification: 2
  • Disable all macros except digitally signed macros: 3
  • Enable all macros: 1

I believe these settings are also affected by GPO, but I have not been able to confirm this yet through testing.

My testing was done using Office 2015 on Windows 7 and Office 2010 on Windows 10. These settings may also apply to Excel, Access and PowerPoint, but I have not tested those.


So, to summarize:

1) These artifacts may remain after the malicious document has been removed. They may also be shown in your timeline if you are using a tool like regtime.exe to add registry keys into your timeline.

2) If there is an entry for a document under Trusted Records, this does not necessarily mean that macros were enabled. The flag needs to be checked to make that determination.

3) If a document does not appear under this key, this does not mean that macros were not able to run. They could still have run if the default setting was altered to enable all macros by default.




Additional Resources:

NTUSER Trust Records

Plan and configure Trusted Locations settings for Office 2013

HowTo: Determine User Access To Files



QuickLook Python Parser - all your BLOBs belong to us

I've always mentioned in my presentations and blog posts that if anyone needs any help parsing an artifact, to hit me up - I love working on these types of projects in my spare time. Matthew Feilen (@mattevps) did just that. The artifact in need of parsing was the index.sqlite file, which is part of the OS X QuickLook feature. While an SQL query can pull most of the data, there is a plist file stored as a BLOB (Binary Large Object) that needs to be parsed. This BLOB has additional data that can be useful to an examiner. Read on for more details.

QuickLook Background

The QuickLook database stores information about thumbnails that have been generated on a Mac.
This information includes things like the file path to the original file, a hit count, the last date and time the thumbnail was accessed, the original file size and last modified date of the original file.

The cool thing is that this database can contain entries after a file has been deleted as well as entries for externally mounted volumes, like thumb drives. This database can also persist after a user account has been deleted since it's not located in a user directory. Sara Newcomer wrote an excellent white paper that details this artifact. I suggest reading her white paper for the finer points since my focus will be mainly on parsing the data out.

There is an index.sqlite file for each user on the system. These files are located under /private/var/folders/<random>/<random>/C/com.apple.QuickLook.thumbnailcache. The <random>/<random> will be different for each user. Since this database is not stored under a user's folder, you will need to tie the index.sqlite to a user by checking the permissions on the file. If you're on a live system, it's pretty easy to do with the ls -l command. However, if you have an image, it may be a little more involved. One way I found to do this is to check the owner properties on the file, then cross-reference this to the user's plist file. In the example below, I've used FTK Imager to view the UID of the index.sqlite file, which is 501:
UID of the index.sqlite file
Next, I exported the user's plist file located under /private/var/db/dslocal/nodes/Default/users and used a plist editor to locate the UID for that user:


UID in user plist file


Getting the Data Out

There are two tables of interest in the QuickLook index.sqlite file: the "files" table and the "thumbnails" table. Most of the information contained in these two tables can be pulled with an SQL query. In fact, a blog post by "Dave" details how to write an SQL query to join these two tables and decode the Mac absolute timestamps. However, the "files" table contains a field named "version" that contains a BLOB. This BLOB, aka binary data, is a plist file:


This embedded plist file contains the last modified date of the original file in Mac absolute time, the original file size, and the plugin used to generate the thumbnail:





While the SQL statement works on most of the data, an additional step is needed to parse the embedded plist file from the BLOB. The data in this plist file could be helpful to an examiner, especially if it contains information about a file that no longer exists on a system.
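
The core of that extra step looks roughly like the sketch below. The join and column names are assumptions based on the tables and fields described above (and in Dave's post), the plist key names are placeholders, and biplist must be installed; Mac absolute time is seconds since 2001-01-01 UTC.

import sqlite3
from datetime import datetime, timedelta
import biplist   # pip install biplist

MAC_EPOCH = datetime(2001, 1, 1)   # Mac absolute time counts seconds from here

def mac_time(seconds):
    return MAC_EPOCH + timedelta(seconds=seconds) if seconds else None

conn = sqlite3.connect("index.sqlite")
# Column and join names below are assumptions - verify against your copy of the database
query = """SELECT f.folder, f.file_name, f.version, t.hit_count, t.last_hit_date
           FROM files f LEFT JOIN thumbnails t ON t.file_id = f.rowid"""

for folder, name, version_blob, hits, last_hit in conn.execute(query):
    info = biplist.readPlistFromString(version_blob) if version_blob else {}
    print("\t".join(str(x) for x in (
        folder, name, hits, mac_time(last_hit),
        info.get("date"),   # placeholder key: last modified date of the original file
        info.get("size"),   # placeholder key: original file size
        info.get("gen"),    # placeholder key: plugin used to generate the thumbnail
    )))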

Python to the Rescue

There is a python library called biplist that does a great job of parsing binary plists. Using this library and the SQL syntax provided in the blog post by Dave, I was able to create a python parser in pretty short order for this artifact.


The syntax is pretty simple, just point it to the index.sqlite file:

quicklook.py -f index.sqlite >> output.tsv

If you don't already have biplist installed, it can be installed by running:

sudo easy_install biplist

I've also included a compiled executable for Windows on my github

The output looks like this with the parsed BLOB information in the Version column. As you can see in the example below, there is information for files on my system, as well as for files on a Lexar thumbdrive:


A huge thanks to Matt for contacting me about this artifact and supplying me with several test index.sqlite files. The Quicklook index.sqlite parser (both python and executable) can be downloaded from my github.


Resources:

Sara Newcomer's detailed article on the artifact: http://iacis.org/iis/2014/10_iis_2014_421-430.pdf
Dave's blog post covering the SQL query: http://www.easymetadata.com/2015/01/sqlite-analysing-the-quicklook-database-in-macos/


How to image a Mac with Live Linux bootable USB

One thing I've learned when it comes to imaging Macs is that it's good to have options. When encountering Macs, it seems like there is always a challenge: no FireWire ports for target disk mode, no easy way to remove the hard drive, or, if the hard drive is removed, you don't have the specific adapter needed to connect your write blocker to the drive... and of course, encryption. I am planning on doing several blog posts about different ways to image a Mac. Depending on the situation, some may work, some may not, but I just wanted to throw some options out into the Google soup mix.

The first option I am going to walk through is imaging a Mac with a live Linux bootable USB. Many times, cracking open something like a MacBook Air to grab a hard drive requires special tools and adapters which may not be readily available. If the Mac is already powered off, booting the Mac with a live Linux distro may be a good option. Once booted into Linux, an imaging tool with a GUI, like Guymager, can be used to create an image in E01 or dd format.

For this post, I have selected the CAINE distro. CAINE stands for Computer Aided INvestigative Environment. This distro was made specifically for computer forensics. Upon boot, CAINE "blocks all the block devices (e.g. /dev/sda), in Read-Only mode." The examiner must take active steps, which include nice big warnings, to turn off this feature.

While I did get Kali to work, it did not seem to offer the extra protection that CAINE does to keep the examiner from inadvertently mounting the wrong drive. If you are interested in making a Kali bootable USB drive for the Mac, I have included some brief instructions at the bottom of the post.

This method was tested with CAINE 7.0, Rufus 2.9, and a MacBook Air Early 2015 model

Create the Bootable USB

The first step is to create a bootable USB drive on a Windows machine. Download the CAINE iso and Rufus. Rufus is the Windows program that will create a bootable USB drive from the iso. Simply launch Rufus and select the CAINE iso as well as a blank USB drive bigger than 4GB. (NOTE - I tried various other tools to create the bootable USB drive, and not all of them worked when it came time to boot the Mac. Thanks to @APFMarc for the tip on Rufus). Below is a screen shot with the settings I used:


There was a pop up dialog box when I clicked start asking me to choose to write in ISO image mode or DD Image mode. I used the default, which was ISO mode.

Boot into Linux

Once completed, this USB drive can be used to boot the Mac. In order to boot a Mac from a USB device, it must be put into Startup Mode. This is done by holding down the Alt/Option button when the system boots. Once in Startup Mode, the boot device can be selected. The CAINE USB should show up as the EFI Boot choice:



After CAINE boots, choose the "Boot Live system". If all goes well, the following desktop should appear:




CAINE has a utility called Mounter, which is located in the task bar. It's the tiny icon circled above. Double clicking this icon brings up a dialog box that shows which block devices are currently mounted:




 As demonstrated in the screen shot above, the only device that is currently mounted is the USB containing the CAINE distro (Shown as CAINLIVE). Running the df command also confirmed this:



The reason I like CAINE is that it does not let the examiner inadvertently mount a drive by accidentally clicking on something. For example, when I double clicked the Macintosh HD it gave me an error:




From the CAINE website documentation:
This new write-blocking method assures all disks are really preserved from accidentally writing operations, because they are locked in Read-Only mode.
If you need to write a disk, you can unlock it with BlockOn/Off or using "Mounter" changing the policy in writable mode. 
I personally prefer this extra layer of protection.

Mount the USB drive that will hold the image

Next, an external USB drive is needed to dump the image on. This external device needs to be mounted writable so the image can be placed on it. To do this, Read only mode needs to be turned off for any newly attached devices by using the Mounter program. Right clicking the Mounter icon in the task bar brings up the following dialog box:



I know this looks scary, all in red and what not, but clicking it brings up another dialog box confirming that this action will only make newly mounted devices writable, which is what is needed so the image can be dumped to the external drive:



After selecting Yes, a brief confirmation message pops up and the icon in the tray also turns red indicating the current status:


The next step is to plug in the USB drive that will contain the image. I named my external USB drive "Images" and formatted it with NTFS on a Windows system before beginning this process (FAT32 will work as well, just be aware of the 4GB file limit). Once the drive is plugged in, it can be mounted by opening up the "Caine's Home" folder on the desktop and double clicking the drive. This will mount the drive to /media/CAINE/YourDriveName:



Now the drive can be accessed to create folders, dump the image to etc. Note - if I try and do the same with the other devices on the host drive (e.g. BOOTCAMP and Macintosh HD), it will give me an error, thereby preventing me from accidentally mounting them.

Use Guymager to create the image

Now that the external USB drive is mounted, Guymager can be started to create the image. Guymager is found on the Desktop, or under Menu>Forensic Tools>Guymager.

Once launched, select the device that needs to be imaged by right clicking it. In this example, the drive I want is the "ATA Apple SSD SM0256G":



The next step is to fill out all the requisite image data:



Once started, the previous table will show a status on the imaging process. When the image is complete Guymager will create a log file in the same directory as the image. An interesting tidbit - a while back, Eric Zimmerman did some testing on various imaging tools, and Guymager was one of the fastest :)


Kali live Linux bootable USB for Mac

So far, I've only found one method that works consistently to boot into Kali Linux on a Mac (at least on my test Mac). That method is to use the Mac Linux USB Loader on a Mac to create the bootable USB.

There is a video here that has step-by-step instructions for the Mac Linux USB Loader, but it's pretty straightforward to use. The basic steps are: 1) Download Kali Linux; 2) Use Disk Utility on the Mac to format a USB drive with FAT32 and MBR; 3) Run Mac Linux USB Loader and select the Kali ISO; 4) Choose Kali from the distro type.

You can now boot into Kali and use Guymager on a Mac using the same steps I detailed in the sections above. One very important thing to note - using this method will automatically boot you into the Kali Live environment and you will not be given the choice for the Kali Linux Forensics Mode.

In my limited testing it does not appear to mount the host drive, or make any changes to the drive. It also does not have the additional steps and warnings when it comes to inadvertently mounting drives that CAINE does. The Live version will also auto-mount plugged in USB devices. Proceed at your own risk, and as they state on the Kali website:
If you plan on using Kali for real world forensics of any type, we recommend that you don’t just take our word for any of this. All forensic tools should always be validated to ensure that you know how they will behave in any circumstance in which you are going to be using them
Echoing these same sentiments: although I have walked through a method of imaging a Mac from a live Linux distro, please test and validate before using either of these methods in the real world.

Happy Mac-ing!


How to image a Mac using Single User Mode

This is the second post in my series on different ways to image a Mac. My first post was on how to image a Mac with a bootable Linux distro. This post will cover another option, creating an image by booting a Mac into single-user mode. I plan on following up this post with posts on creating a live image and how to mount and work with FileVault encryption after an image is complete.

Single-user mode is a limited shell that a Mac can boot into before fully loading the operating system. In single-user mode, the internal hard drive is mounted read only and a limited set of commands are available. Once in single-user mode, a USB drive can be attached and dd can be used to create an image.

In order to mount the USB drive, the internal drive needs to be changed to read/write to create a mount point. While not as forensically sound as using a write blocker or booting into a Linux distro, fewer changes are made than by fully booting the operating system to take a live image. This may be a good option where it is acceptable to get a live image, but the examiner wishes to minimize changes to the hard drive. Another benefit is that if there is FileVault encryption, the encrypted drive is decrypted after a username and password are supplied.

The system I used for testing was a Mac Mini, OS X Version 10.8.5 with one hard drive. Three partitions were created by default during the initial setup: an EFI partition, a MacOSX partition, and a recovery partition.

I tested two scenarios, one without encryption and one with encryption (FileVault 2). For each step I will cover both scenarios. The high level steps are:

1) Boot into single-user mode
2) Determine the disk to image
3) Mount the USB drive that will hold the image
4) Run the dd command to create the image

Step 1 - Boot into single-user mode
The first step is to boot into single-user mode. While the system is booting, hold down COMMAND-S to enter single-user mode. I usually hold down this key combo before I even power on the system so I don't accidentally "miss" it. At this time, I do not have the USB drive that will hold the image plugged in.

Unencrypted
If the system is not encrypted a bunch of white text will scroll and finally present a shell with root:



Encrypted
If the system is encrypted, some text will fly by that says efiboot, and then a GUI window will pop up asking for the username and password:





After the username and password are entered, the single-user boot process continues and drops into a shell similar to the unencrypted system.

Step 2 - Determine what to image
The next step is to determine which block device to copy with the dd command. In order to determine this, use the ls command to get a list of the available disks under the /dev directory. As I mentioned before, I prefer to do this before I plug the USB drive in so I don't have to try and guess which is the internal hard drive and which is the USB drive. (OS X has a disk utility called diskutil that presents more verbose information about the disks; however, it is not available in single-user mode.)

ls /dev/disk*


The output is slightly different between the encrypted and unencrypted drive, which I discuss below.

Unencrypted Drive
On the test unencrypted system there is one disk, disk0, with three partitions: disk0s1, disk0s2, and disk0s3. For this particular system, the image should be of /dev/disk0:





Encrypted Drive
Note the addition of the /dev/disk1 on the encrypted system:



What is this /dev/disk1? Using file -sL on each partition can give a little bit more insight into what is going on. (Note - I ran these commands while in a terminal because there was no good way for me to get a screen shot in single-user mode...the text went all the way across the screen. However, the commands and outputs are similar while in single-user mode)


From these results I can tell that disk0s1 is the EFI partition, and disk0s3 is an HFS partition. disk0s2 is showing as "data". This happens when the file command can't tell what the file is; it just gives a generic "data" response - which makes sense if the partition is encrypted.

Some quick math gives us the partition sizes:

EFI disk0s1 size = 409600 sectors X 512 bytes per sector =  209715200 bytes = ~210 MB
HFS disk0s3 size = 4096 bytes per block X 158692 blocks = 650002432 bytes = ~650 MB

Next, I want to see what size disk0s2 is. I can use fdisk /dev/disk0s2 for this:



disk0s2 size = 1951845952 sectors X 512 bytes per sector = 999345127424 bytes =~999.3 GB. Definitely the biggest of them all!

Now I want to see how big /dev/disk1 is to compare it to the other partitions. Here I will use /dev/rdisk1 because /dev/disk1 is busy. /dev/rdisk is the raw disk of /dev/disk1:

rdisk1 size = 4096 bytes per block X 243898823 blocks = 999009579008 bytes =~ 999 GB

/dev/disk0s2 and /dev/disk1 are about the same size, 999GB, and /dev/disk1 is a readable HFS partition. Based on my experience and the outputs above, it appears /dev/disk1 is the OS X partition (disk0s2) in a decrypted state.

For imaging, either /dev/disk0 or /dev/disk1 can be used. If /dev/disk0 is used, all three partitions will be captured, but the data in the MacOSX partition - /dev/disk0s2 - will remain in an encrypted state. If /dev/disk1 is imaged, it will have the MacOSX data in a decrypted state, but will not have partition 1 (EFI partition) or partition 3 (Recovery partition). I like to grab both /dev/disk0 and /dev/disk1.


Step 3 - Mount the external USB Drive 
The next step is to mount the external USB drive so the image can be saved onto it. The USB drive can be formatted in FAT32 or HFS. FAT32 has the benefit of both Windows and Mac being able to access it, but it has a 4GB file size limit. While HFS does not have the 4GB limit, Windows is not able to see it by default (if you have a Mac with bootcamp your Windows OS should be able to read HFS if the bootcamp drivers are installed).

For my tests I used a FAT32 USB drive for the unencrypted system, and an HFS USB drive for the encrypted system so I could demonstrate the syntax for both.

After plugging in the USB drive, run ls /dev/disk* again. Compare the outputs to determine which /dev device belongs to the USB drive

ls /dev/disk*


Unencrypted
For this system the FAT32 USB drive has been inserted, which shows up as /dev/disk1. The partition that needs to be mounted is /dev/disk1s1:




Encrypted
For this system the HFS USB drive has been inserted, which shows up as /dev/disk2. This drive has two partitions. The partition that needs to be mounted is /dev/disk2s2:



(If there are multiple partitions showing on the USB drive the file -sL command can be used to get more information if you're not sure which one to mount.)

Once you've determined the USB device keep this handy for the mount command. The next few commands and outputs are the same for the unencrypted and encrypted system.

In order to mount the USB drive, the system drive will need to be changed to read/write by using mount -uw:
mount -uw /


Next, a mount point will need to be created for the USB drive. For this example, the mount point will be created under /tmp/usb:

ls /tmp
mkdir /tmp/usb


Now it's time to mount the USB drive. The mount command will need the partition type (FAT32 or HFS), the disk to mount, and a mount point.

 
Mount the FAT32 USB drive on the unencrypted system
To mount the FAT32 drive on the unencrypted system the following syntax was used:

mount -t msdos /dev/disk1s1 /tmp/usb





Mount the HFS Drive on the encrypted system
To mount the hfs drive on the encrypted system, the following syntax was used:
mount -t hfs /dev/disk2s2 /tmp/usb





I always create a subfolder on the USB drive to hold the image. This way I can list the contents of the mount point as a sanity check to ensure that it mounted ok:

ls /tmp/usb



Here I can see "MacEncryptedImage" and "MacImage", the folders I created on the USB drive. Everything looks good to go.

Step 4 - Create the image

To create the image, the dd command can be used. For dd, I use the options recommended on the Forensic Wiki page. The syntax looks something like this:

dd if=/dev/disk0 bs=4k conv=sync,noerror of=/tmp/usb/mac_image.dd
 
Let's break down this command:

  • if=/dev/disk0: this stands for input file. This will be the disk that requires imaging
  • bs=4K : this is the block size used when creating an image. The Forensic Wiki recommends 4K
  • conv=sync,noerror: if there is an error, null fill the rest of the block; do not error out


If /dev/rdisk is available, it can be used instead of /dev/disk. rdisk provides access to the raw disk, which is supposed to be faster than /dev/disk, which uses buffering.

Unencrypted system
For the unencrypted system the image will be of /dev/disk0 to a FAT32 USB mounted drive. Since FAT32 has a 4GB file size limit, dd will need to be piped through the split command to keep the file size under 4GB:

dd if=/dev/disk0 bs=4k conv=sync,noerror | split -b 2000m - /tmp/usb/Images/disk0.split.




Encrypted system
For the encrypted drive, this example will be of /dev/rdisk1. Since the image will be saved to an HFS USB drive there is no need to split the image:

dd if=/dev/rdisk1 bs=4k conv=noerror,sync of=/tmp/usb/MacEncryptedImage/Mac_rdisk1.dd



Unfortunately, dd does not have a progress bar so patience is a virtue. Once it's complete, a message similar to below should appear:



View Image
As a last step, I just wanted to show how each image looked when opened in FTK Imager.


Unencrypted
The unencrypted image looks as expected, three partitions in an unencrypted state:




Encrypted
During my testing, I imaged both /dev/rdisk0 and /dev/rdisk1. /dev/rdisk0 was the entire disk with all three partitions. Opening the rdisk0 image in FTK Imager confirms that all three partitions are present. As expected partition 2, MacOSX, is showing as an unrecognized file system because it is encrypted:




The image of /dev/rdisk1 was an image of just the second partition, which is the MacOSX partition. Opening it up in FTK Imager confirms that /dev/rdisk1 is in a decrypted state:




So, in summary, here are the steps and commands covered above:

  • Use Command-S to boot into single user mode
  • Use ls /dev/disk* to determine the disk(s) to image
  • Plug in the USB Drive
  • Use ls /dev/disk* to determine USB drive device
  • Use mount -uw / to change internal drive to read/write
  • Use mkdir /tmp/usb to create a mount point
  • Use mount to mount the USB Drive
    • mount -t msdos /dev/disk1s1 /tmp/usb (for FAT32)
    • mount -t hfs /dev/disk2s2 /tmp/usb (for HFS)
  • Create disk image using dd 
    • dd if=/dev/disk0 bs=4k conv=sync,noerror | split -b 2000m - /tmp/usb/disk0.split. (FAT32 USB)
    • dd if=/dev/rdisk0 bs=4k conv=noerror,sync of=/tmp/usb/rdisk0.dd (HFS USB)

While these steps worked on my test Mac, examiners should always test and research the model they are encountering. I was limited to one test system, one hard drive and FileVault2 encryption. I also recommend trying this on a test Mac before running these steps on actual evidence. Single-user mode logs in as root, and this can be very dangerous. Remember - Trust but Verify! :)

Mounting and Reimaging an Encrypted FileVault2 Mac Image in Linux

Before I continue my series on how to image Mac systems, I wanted to cover how to mount and work with FileVault2 encrypted Mac images. By "work with", I mean decrypt it and create an image of the decrypted volume in either raw (dd) or E01 format to pull into X-Ways, EnCase etc. To do this three things are needed:

1) A full disk image of the encrypted system in raw format (dd)
2) The SIFT Workstation  -  it has all the (free!) tools needed already installed
3) The password or recovery key for the volume.



For this example, I am going to use the encrypted disk image of a Mac I created in this previous tutorial. Below is what the encrypted image looks like in FTK Imager. Note that the second partition, MacOSX, is showing as an Unrecognized file system. This is because it is encrypted with FileVault2:


Another way to verify that the partition is encrypted is to look for the EncryptedRoot.plist.wipekey on the Recovery partition. In fact, we are going to need this to decrypt the drive, so I am just going to export out this file while I have it opened in FTK Imager. Mine was located under Recovery HD [HFS+]\Recovery HD\com.apple.boot.P\System\Library\Caches\com.apple.corestorage\EncryptedRoot.plist.wipekey:





If you're using SIFT in a VM, the first step is to create shared folder(s) for where the image is located and where you want your decrypted dd/E01 image to go. Here I have two USB drives shared as E: and G:. The E: drive contains my encrypted image and my EncryptedRoot.plist.wipekey file. The G: drive is where I am going to dump the unencrypted image. In VirtualBox, these settings are located under Settings > Shared Folders.



Next, I am going to make a mount point to mount the image:



Now I am going to change into the directory where I have my image and wipekey:


Joachim Metz has written a library, libfvde, to access FileVault encrypted volumes. I will be using fvdemount from this library to mount the encrypted partition. He has excellent documentation on his wiki  - I will pretty much be following that in the steps below.


I need to get the partition offset in bytes to pass to fvdemount. mmls from the Sleuth Kit can be used to get the offset from the image:


According to the output above, the MacOSX encrypted partition starts at sector 409640. To get the offset in bytes, multiply the offset (409640) by 512 bytes per sector. I will need to pass this offset (-o), the EncryptedRoot.plist.wipekey (-e), the password or recovery key (-p), my image and the mount point to fvdemount:





fvdemount will create a device file named "fvde1" under the mount point. A quick file command confirms it is the HFS volume:



To further verify everything is unencrypted, fvde1 can be mounted as a loopback device to show the file system:


As shown above, I can now see the unencrypted Mac partition.

If your preference is to work with an image under Windows with tools like X-Ways, EnCase etc, an image can be taken of the unencrypted device, /mnt/Mac/fvde1.

For E01 format, ewfacquire can be used:

ewfacquire /mnt/Mac/fvde1


For raw (dd) format, the following syntax can be used. I like to have a progress bar, so I am using the pv command with dd. For dd, I am using the recommended parameters from the Forensic Wiki.

dd if=/mnt/Mac/fvde1 bs=4K conv=noerror,sync | pv -s 999345127424 | dd of=/media/sf_G_Drive/Image/Mac_fvdemount_unencrypted.dd




-s is the size, which can be taken from the length of the partition: 1951845925 sectors * 512 bytes/sector = 999345113600 bytes (approx. 1TB)
 
After the image completes, it can be opened and viewed in all its unencrypted glory in the tool of your choice:


A Mac system can also be used to mount an encrypted volume. I may write a post about that at a later time. I know not all examiners have access to a Mac system, so I wanted to focus on this method first. Plus, I like good 'Ol Tux.


Cookie Cruncher Update, Timelines, Chrome Parser and more

I just wanted to pass on that I had a chance to update my Google Analytic Cookie Cruncher to support Firefox up to version 48. I can't believe it's been two years since I've updated the code!

I know I've said it before, but if you need me to update a tool to support a newer version of "X", please let me know - I'm happy to do so :) With everything else on my plate, I don't always have time to test each new browser for compatibility issues. Thanks to Heather Mahalik for reaching out to me with a student request to get it updated - sometimes I need that extra motivation.

I also updated my script that parses Google Analytics from Safari binary cookies. Mike O'Daniel reached out to me when the script crashed on him. Although he was unable to share the data due to privacy reasons, with a little back and forth troubleshooting we were able to determine what the issue was. He was parsing cookies from an iPad which contained URL encoded strings. None of my test data contained cookies formatted in this way and I did not have access to an iPad. Once the issue was fixed in the script he was off and running. Thanks to Mike for reaching out to me to let me know that there were issues, and taking the time to help troubleshoot it since I was not able to replicate the issue.

I also wanted to push out a simple little parser for Chrome Internet History and Downloads. I recently spoke at the HTCIA conference  about mini-timelines (and even micro timelines). While this concept is nothing new, I have found this process to be invaluable during the cases I work. Harlan has blogged many times about the process and advantages of it, so I won't go into detail here. For the lab I taught, I just needed to output some basic Chrome Internet History into TLN format so I wrote a Chrome parser in python.

Now this tool does not show every single thing that is available in the Chrome History. I just stuck to the basic information: Visit time, URL, Hit Count etc. Sometimes too much information can cloud the timeline, making it difficult to pick out patterns of activity, or create so much noise the next lead gets lost in all the output.

I like the data in my timeline to be concise and clear. It reminds me a little of keyword searching. If the term is vague, you may be casting a wider net, but relative results could get buried in a million hits. It's going to take a lot of sifting to find that golden nugget. However, if you use a carefully crafted keyword, you can focus in on what it is you are looking for. Timelines are the same. Carefully picking the artifacts you want to add in to the timeline can help you hone in on relevant data quickly.

The other thing I wanted to discuss was Volatility plugins. I recently had the chance to run through a demo at a Python Meetup group on what Memory forensics is, and how Volatility can be used to analyze memory. As part of this, I "wrote" my first volatility plugin. Now, I say "wrote" because it was really just modifying a couple of lines in someone else's code to do something a little different.

Volatility has provided a nice interface to grab various keys from the registry. In fact, it reminds me of the way plugins are handled in RegRipper. If there is a key that you want that is not currently supported, look for a plugin that is similar and see if you can tweak it. It's a great way to start out, and as you tweak more and add a little bit here and there, you begin to understand how things work.

I just started with something simple - pulling the computer name. This is just one key, with no binary data to convert:

HKLM\SYSTEM\ControlSet001\Control\ComputerName\ComputerName



I found another Volatility plugin that pulls a key from the system hive, shutdown.py - changed a few lines of code, and et voila! My first plugin. Ok - nothing earth shattering or difficult, but it's the first step in understanding how things work. That's often the way that I write many of my scripts - break it down into pieces, find code examples, and put it all together. Pretty soon I actually remember some of it, and my skill set advances.

The original code was written by Jamie Levy (@gleeda), and pulls the shutdown time from the registry. Below is an example of what I did. I just commented out what I didn't need, and modified what I did need.
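In rough outline, the result looks something like the sketch below. It is patterned on shutdown.py and the Volatility registry API rather than copied from my exact code, so treat the class name and key handling as a sketch:

# computername.py - a rough sketch of a Volatility 2.x plugin, patterned on shutdown.py
import volatility.plugins.common as common
import volatility.plugins.registry.registryapi as registryapi

class ComputerName(common.AbstractWindowsCommand):
    """Pull the computer name from the SYSTEM hive"""

    def calculate(self):
        regapi = registryapi.RegistryApi(self._config)
        regapi.set_current("SYSTEM")
        # resolve the active ControlSet, then read the ComputerName value
        currentcs = regapi.reg_get_currentcontrolset()
        key = currentcs + "\\Control\\ComputerName\\ComputerName"
        name = regapi.reg_get_value("SYSTEM", key, "ComputerName")
        yield key, name

    def render_text(self, outfd, data):
        for key, name in data:
            outfd.write("Key : {0}\nName: {1}\n".format(key, name))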

While it may not be complex, it gets the job done and I learned something new in the process.

Mac Live Imaging: Functionality Versus Speed

My series on imaging a Mac would not be complete without covering how to do a live acquisition of a Mac. Now that FileVault2 appears to be the default during installs with Sierra, a live image may be very useful moving forward:


If a hard drive is encrypted, a live image will allow you to create a logical image of the partition in an unencrypted state. In my previous posts I covered how to image a Mac using single user mode and a Linux USB boot disk. I've put off doing this blog post because there is a very detailed and well-written post by Matt at 505Forensics that covers this topic. In his blog post, Matt walks through, step by step, how to image a Mac using the FTK Imager command line tool for Mac OS X operating systems. As such, I wanted to cover how to do a live image using the dd command as another option.

Out in the field, I've found that it seems to take longer when using FTK Imager. I finally had a chance to do some testing and found that it took FTK Imager almost 2 hours to image a drive to a raw image (no compression). It took just 15 minutes using dd with an MD5. My test system was a MacBook Air, Early 2015, OS X El Capitan with a 75GB partition that was being imaged.

Using the FTK command line tool has some distinct advantages over dd. There are options to compress the image, choose E01 format and supply case information. However, if time and speed are an issue, dd may be a better option. For example, I've been onsite when 10 Macs needed to be imaged - dd was nice to use so we could finish up in time for dinner. If you can leave an image running overnight, it's probably not as critical. See below for the test data:

FTK Imager: Total image time 1 hour, 49 min and 04 sec:




dd image with md5: 15 minutes



Please note - this testing is not by any means extensive (unlike the recent testing by Eric Zimmerman on some forensic software). I created several images using both methods and the image times listed above were about the same.

The first step is to run diskutil to see what the disk layout looks like and to determine what to image. I like to do this before I plug in my external USB. This makes it easier to see what drive needs to be imaged.

diskutil list


No FileVault2/No Encryption


My system has both OS X and Windows (Bootcamp) installed. As you can see, /dev/disk0 is my physical drive. Partition 2 is the Macintosh HD and Partition 4 is the Windows aka Bootcamp partition. The logical, active device I want to image is /dev/disk1. As you can see in the screenshot above, it is listed as the logical, unencrypted volume and refers back to disk0s2. (If you do run across a system with Bootcamp you will probably want to grab that partition as well, but for the purpose of this blog post I am focusing on the Mac partition)

Below is a screen shot of what the same system looks like with FileVault2 turned on. Note that it says "Unlocked Encrypted". In this scenario, /dev/disk1 is the logical volume I want to image.





Each /dev/disk has a corresponding /dev/rdisk:


/dev/rdisk is the "raw" character device, which bypasses the buffer cache and is supposed to be faster than /dev/disk. As such, we are going to use /dev/rdisk1 instead of /dev/disk1 in the dd command.

Now would be a good time to plug in the external drive that will hold the image. On my system it auto mounted under /Volumes/<Device Name>

For dd, I am going to use the syntax suggested by the Forensic Wiki Page. The syntax looks something like this:

sudo dd if=/dev/rdisk1 bs=4k conv=sync,noerror of=/Volumes/MAC-Images/my_image.dd

 
Let's break down this command:

  • sudo: run as super user
  • if=/dev/rdisk1: this stands for input file. This will be the disk that requires imaging
  • bs=4k : this is the block size used when creating an image. The Forensic Wiki recommends 4k
  • conv=sync,noerror: if there is an error, null fill the rest of the block; do not error out

Better yet - let's add in an MD5 so we can have a hash of the image to make it more "forensicky". In order to do this:

dd if=/dev/rdisk1 bs=4k conv=sync,noerror | tee /Volumes/MAC-Images/my-image.dd | md5 > /Volumes/MAC-Images/my-image-md5.txt



According to the forensic wiki:
"The above alternate imaging command uses dd to read the hard-drive being imaged and outputs the data to tee. tee saves a copy of the data as your image file and also outputs a copy of the data to md5sum. md5sum calculates the hash which gets saved in mybgifile.md"
Try not to fat finger the password like I did though...
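If you want to double check things after the fact, you can also hash the image file itself and compare it against the hash saved in the text file:

md5 /Volumes/MAC-Images/my-image.dd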

That's it! Happy imaging whichever tool you use.








Quicklook thumbnails.data parser


Earlier this year at the request of a reader I wrote a tool to parse the Quicklook thumbnails index.sqlite file. This sqlite database stores information related to thumbnails that have been generated on a system. This information includes filename, paths, timestamps and more (see my previous blogpost for more details).  The file is located under /private/var/folders/<random>/<random>/C/com.apple.QuickLook.thumbnailcache.

Someone else recently reached out to me and asked about the thumbnails.data file in the same folder, which holds the actual thumbnails. They were having issues carving the images out of this file.

Research

With a hex viewer, I opened a thumbnails.data file from my Mac and scrolled though the file. I didn't notice any typical image file headers as I scrolled. Below is a screen shot of what I was seeing:



Normally I would expect to see something like the following in hex view:


Here the file header shown in red is for a PNG file. I tried looking for the file headers for various other images as well, such as jpg, gifs and bmps but no luck.

I placed the com.apple.QuickLook.thumbnailcache folder on a wiped SD card and used a couple of carving programs to try and carve the images out of the thumbnails.data file. While the carvers were able to carve out the sqlite database, they did not find any images.

Interesting. I started to do some research, and I found a reference on this blog post that the images are stored as "raw bitmaps". "Raw bitmaps" are not the same as .bmp files. Raw bitmaps do not have a file header or footer and can not be decoded without external information. According to this website the following are characteristics of a typical raw bitmap:

  • No header or footer
  • Uncompressed
  • Does not use a color palette
  • Cannot be decoded without external information, such as:
    • Color type and sample order (RGB, BGR, grayscale, etc.)
    • Image width in pixels
    • Row padding logic
    • Byte order and/or bit order
This explains why the file carvers I used were not able to carve out the images. File carvers need a file header in order to identify and carve out files. So if we don't have a file header, how do we carve out these images? Luckily, the Quicklook index.sqlite table stores the "external" information needed to carve the images in the thumbnails.data file.

This external information includes the bitmap location in the file, the length of the bitmap, width, and height.



Manually Carving

Below is an example of how this data looks from the database. For this example, the file file3251255366828.jpg actually has two thumbnails associated with it: one that is 64 x 64, and a larger one that is 164 x 164:


I am going to walk through how to manually carve out the 64 X 64 thumbnail  using the information from the Quicklook index.sqlite database. While I have written a parser to automate these steps (covered further below), I think it's nice to know how to manually do it so you can validate any tools you may use to do this, or if a tool doesn't work.


The first thing is to open up the thumbnails.data file with FTK Imager using the  File >  Add Evidence Item > Contents of a Folder. To get to the file offset, choose the file, right click in the hex area, then choose Go to offset...  The offset we want is the value in the thumbnails table bitmapdata_location field, 993284:


FTK will take us to the file offset. Once this has been done, we need to select the next 16384 bytes - the value from the thumbnails bitmapdata_length field. To do this, we can right click and choose "Set selection length":


And then fill in the value from the thumbnails bitmapdata_length field:



Once this has been done, we can save the selection out to a file named "image.data":


Now that we have saved the bitmap out to a file - what do we do with it? Remember, it doesn't have a file header, so just renaming it to .jpg or .png will not work. So how do we view it? Gimp, a free photo editing program, has the ability to open up a raw bitmap and gives you the option to supply the width, height and image type.

Use Gimp File > Open and select the Raw Image Data type. Opening up the image.data file presents the following dialog box:


Notice how the image looks all funky? That is because we have to specify the correct values to render the image. The image type is RGB Alpha (which I determined from monkeying around), and the width and height are 64 (which come from the thumbnails table width and height fields). Once these are entered, the image displays correctly:


The Script
Who wants to do this manually for each image? As usual, python to the rescue. For this particular script, I used a python library called Tkinter. This library let me build a GUI app, in python, that works on multiple platforms! How cool is that? The script works from the command line as well.
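At its heart, the script is just automating what we did by hand above. The core logic boils down to something like this simplified sketch - it pulls only the first row from the thumbnails table and skips the join back to the files table for file names, so adjust the query to your own index.sqlite:

import sqlite3
from PIL import Image   # Pillow

# grab the "external" info for one thumbnail from index.sqlite
conn = sqlite3.connect("index.sqlite")
offset, length, width, height = conn.execute(
    "SELECT bitmapdata_location, bitmapdata_length, width, height "
    "FROM thumbnails LIMIT 1").fetchone()

# read the raw bitmap out of thumbnails.data...
with open("thumbnails.data", "rb") as f:
    f.seek(offset)
    raw = f.read(length)

# ...and decode it as RGBA; this assumes no row padding, which holds for the
# 64 x 64 example above (64 * 64 * 4 bytes = 16384 = bitmapdata_length)
img = Image.frombytes("RGBA", (width, height), raw)
img.save("carved_thumbnail.png")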


To use the GUI, simply launch the python script with no commands:



Just to prove it works, here are screen shots of the same script working on Linux and Mac (tested on Ubuntu 14.04 and Mac OS X):






Using the GUI is pretty easy - select the folder containing the com.apple.QuickLook.thumbnailcache, and a folder that will hold the outputs created by the script. The script will generate a report of the files and create a subfolder containing the images. The Excel option will generate an Excel spreadsheet with the images embedded in it.

The command line syntax is as follows for tsv output:

python quicklook.py -d "C:\case\com.apple.QuickLook.thumbnailcache" -o "C:\case\quicklook_report"



The command line syntax is as follows for Excel output:

python quicklook.py -d "C:\case\com.apple.QuickLook.thumbnailcache" -o "C:\case\quicklook_report" -t excel



In order to use the script, the biplist and Pillow libraries need to be installed. biplist is a python library for binary plist files, and Pillow is used to work with images. Both libraries are easy to install.

To install biplist use easy_install:

Linux/Mac: sudo easy_install biplist
Windows: C:\<PYTHONDIR>\scripts\easy_install.exe biplist

To install Pillow:

Linux/Mac: sudo pip install Pillow
Windows: C:\<PYTHONDIR>\scripts\pip.exe install Pillow

The default output is TSV, however, if you would like an Excel report the xlsxwriter python library needs to be installed:

Linux/Mac: easy_install xlsxwriter
Windows: C:\<PYTHONDIR>\scripts\easy_install.exe xlsxwriter

I have also included a compiled Windows executable if you don't want to mess with installing all the libraries.

Download the quicklook parser

When Windows Lies

Wait, What? Windows lies? I believe so...

I worked a case where I checked the Windows Install date and it was a couple days before we received the system. GREAT....did the user reformat their drive and do a fresh install before handing over the laptop? Did they reinstall the OS? This would not have been the first time a laptop or system was rebuilt after an incident (either on purpose or by accident).

Checking basic information like the Operating System and installation date can help an examiner prioritize the systems they need to examine and check for evidence spoliation issues. If you have 20 systems to go through, and the Operating System has been installed AFTER the date of the incident, you may want to focus on some other systems first. In civil or criminal cases, an installation date right before you receive the evidence may raise some red flags.

So now that we have established why the Operating System installation date can be important, there is a registry key you can retrieve it from, HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion. RegRipper has a great plugin, winver, that pulls not only the OS version, but also the install date from this registry key. Running this against my test system I got the following output:

----------------------------------------
winver v.20081210
(Software) Get Windows version

ProductName = Windows 10 Home
InstallDate = Fri Feb  3 15:58:47 2017
----------------------------------------

What??? Install date of February 3rd, 2017?? 2 weeks ago??? Since this is my system, I know I did not install Windows 10 February 3rd. I have had Windows 10 Home since the roll out in 2015!

Just to be thorough, I verified the date was being parsed correctly and looked at the raw data in the registry:
 



Install date is  0x5894a8b7 which is Fri, 03 February 2017 15:58:47 UTC.

Ok, one last check... running the systeminfo command it clearly shows "Original Install Date" as  2/3/2017 8:58:47 AM. I am UTC -7, so this information matches my above results.



OK - Windows, I think you are lying to me. I'm hurt. But, to add insult to injury, with some more digging around I find out that YOU MESSED WITH MY LOGS during this supposed install.

I have a snapshot of my system previous to the supposed install date of February 3rd, 2017. Note the created dates on the Event Logs - 12/12/2015 and a size of 20MB:


Now I look at the current created dates and file sizes of my event logs. Note that the created date is the same as this supposed install date, 2/3/2017. Not only that, but my log files are much smaller, some about 2MB:


When I opened my logs, as expected, there are no entries before 2/3/2017, and the first entry matches with this supposed install date:


I was curious what may have caused this. Since Windows updates have caused issues with artifact timestamps before, such as USB devices, I checked the Windows Update history. Sure enough, there was a Windows update, "Feature update to Windows 10, version 1607", which ran on 2/3/2017. This date matches the supposed install date:



Since my update history contained more than one update that ran on 2/3/17, I wanted to check some other Windows 10 systems to see what I could find out. I knew approximately when both of these systems had their operating systems installed, and both had incorrect installation dates listed, as well as the Feature Update v. 1607 that ran on the same day.


System 1 (Windows 10 Home)
Registry Install Date: 9/27/2016 11:22:39
Event Log created Dates: 9/27/2016 11:11
Feature update to Windows 10, version 1607:  9/27/2016



System 2 (Window 10 Pro)
Registry Install Date: 10/1/2016 3:47
Log created Dates: 10/1/2016 3:42
Feature update to Windows 10, version 1607: 10/1/2016



So, my working hypothesis is that the Feature update to Windows 10, version 1607 is updating the Windows Installation time and deleting the logs.

This may just be a matter of semantics.. maybe "Operating Install Date" really means - latest major version update??? This artifact may be open to misinterpretation if that is the case.

Why is this important?

Possible Incorrect conclusion of evidence spoliation
Imagine you are working a civil case where the opposing side is supposed to produce and turn over a laptop. If you see the installation date was recent, you might incorrectly conclude that they installed the OS right before handing over the system. Or, in Incident Response you may incorrectly assume that the operating system was just installed and there may not be too many goodies to find.

 
You lose event logs
Event logs can make up a critical component of an exam. In the case I was working, this update happened right before I received the laptop. The event logs only had about a day in them. Yes, there may be backups, or a SIEM collecting them, but it just makes the exam more involved. 

Other Artifacts????
These are just the two inconsistencies I have found so far... there are probably  more...

I would like to test this more formally by setting up a virtual machine and tracking the updates to see what happens, however, based upon the systems I have looked at, I think Windows is lying.

As always, correlating findings against multiple artifacts could help determine if this install date is accurate.

Just a note and something else to be aware of - in many corporate environments the Operating System install date may be incorrect due to clones/images being used to  push out machines. However, I don't consider this as Windows lying because the date would reflect the install date of the original before it was cloned.


Onion Peeler: Batch Tor Lookup Program

Logs, Logs, Logs. I see, IPs. When reviewing log files for suspect activity it can be helpful to look up information related to IP addresses. There is a great utility for this by Nirsoft called IPNetinfo. You can import a whole list of IP addresses and it will give you "the owner of the IP address, the country/state name, IP addresses range, contact information (address, phone, fax, and email), and more."

When I am reviewing log files, an IP address associated with a foreign country may pique my interest. Another check I like to do is look for activity associated with Tor nodes. In a corporate environment, a user accessing a system from a Tor exit node may be a red flag.

When I am checking an IP address to see if it is associated with a Tor exit node I will use a website like ExoneraTor. It lets me put in an IP address and a date, and lets me know if the IP address is associated with a Tor relay. While this is a great tool, if I have a list of IP addresses to check, it's not very efficient. To that end, I wrote a little program to help automate the process of checking a list of IP addresses against Tor Relays and Bridges, Onion Peeler.

Onion Peeler is written in Python and uses OnionPy. OnionPy is a wrapper for the OnionOO Tor Api. Using OnionPy, Onion Peeler caches a local copy of the Tor exit nodes and performs a check for a list of supplied IP addresses. What's nice is that if you have a list of sensitive IPs, the information is not shared and is kept locally:
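Under the hood, the kind of lookup OnionPy makes possible looks roughly like the sketch below. This is based on the OnionPy documentation rather than Onion Peeler's actual code, so treat the query parameters and attribute names as assumptions:

from onion_py.manager import Manager
from onion_py.caching import OnionSimpleCache

manager = Manager(OnionSimpleCache())       # caches OnionOO responses locally
for line in open("ips.txt"):
    ip = line.strip()
    # 'details' query with a search term, per the OnionOO API
    details = manager.query('details', search=ip)
    if details and details.relays:          # assumption: matches come back in a 'relays' list
        print("%s matches a Tor relay" % ip)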


It will output a list of matches:




Since it's in Python, the program is cross-platform compatible. I've tested it on Windows, Linux and Mac. It just requires OnionPy, which can be installed using "pip install OnionPy". I also have a compiled Windows Executable if you don't have Python installed. It requires an Internet connection as the initial query grabs the latest Tor nodes from OnionOO. I am thinking about adding in a way to store an offline copy in the next version as well as add in additional details about the Tor nodes (first seen, last seen etc.)

It took about a minute to check 8,000 IP addresses. Of course, a bigger list will take longer, so be patient.

Code and program are available on my github.

Finding and Decoding Malicious PowerShell Scripts


PowerShell. It's everywhere. I've started coming across more and more malicious PowerShell scripts.
Why do attackers love using PowerShell? Because it's native to many versions of Windows, provides full access to WMI and the .Net Framework, and can execute malicious code in memory, thereby evading AV. Oh yeah - did I mention a lack of logging too?

During the course of my analysis on these types of cases, I have found several indications that PowerShell has been utilized by an attacker. These include installed services, registry entries and PowerShell scripts on disk. If logging is enabled, that can provide some nice artifacts as well. The perspective of my post is going to be from that of an analyst that may not be too familiar with PowerShell. I am going to discuss how I locate malicious PowerShell artifacts during my analysis, as well as some methods I use to decode obfuscated PowerShell scripts. This will be Part 1 in a 3 part series written over the next few weeks.

Part 1: PowerShell Scripts Installed as Services
First up to bat is my favorite - PowerShell scripts that I find as installed services in the System event log. To find these, one of the first things I do is look for Event ID 7045. This event occurs when a service is installed on a system.  An example of a PowerShell script installed as a service is shown below:



Of note are the following red flags:

1) Random Service Name
2) The Service File Name has "%COMSPEC%", which is the environment variable for cmd.exe
3) A reference to the powershell executable
4) Base 64 encoded data

So how might an entry like this make its way into an event log? While there are various ways to do this, one method would be to use the built in Windows Service Control Manager to create a service:

sc.exe create MyService binPath= "%COMSPEC% powershell.exe -nop -w hidden -encodedcommand <insertbase64>"

sc start MyService

The above commands create a service named "MyService" and use the binPath= option to launch cmd.exe, which in turn executes the PowerShell code.

An interesting thing to note - there may be some failed errors logged after the service is created in this manner. The errors do not mean that it was unsuccessful. Windows was just expecting a "real" service binary to be installed and "times out" waiting for the "service" to report back. How do I know this? In my testing I was able to set up a successful reverse shell using the above methodology, which generated a failed service error on the Windows machine. On the left is a Metasploit session I started on an attack virtual machine. On the right is a Windows 7 host virtual machine. Although the Windows 7 machine states "The service did not respond to the start or control request in a timely fashion," a reverse shell was still opened in the Metasploit session:



Below are the two corresponding event log entries, 7000 and 7009, made in the System event log. Although the 7009 message states "The FakeDriver service failed to start.." this does not mean that the command inside the binPath variable did not execute successfully. So beware, interpreting these as an indication that the PowerShell did not execute may be false:




The 7045 System event log PowerShell command is encoded in base64 and python can be used to decode it. Interesting note - this base64 code is in Unicode, so there will be an extra parameter specified when decoding it. (For display reasons I have truncated the base64 text - you would need to include the full base64 text to decode it):

import base64
code="JABjACAAPQAgAEAAIgAKAFsARABsAGwASQBtAHAA...."
base64.b64decode(code).decode('UTF16')

Here is what the decoded PowerShell command looks like. A quick sweep of the code reveals some telling signs - references to creating a Net Socket with the TCP protocol and an IP address:


This is similar to the type of code that Meterpreter uses to set up a reverse shell. The above PowerShell code was pretty easy to decode, however, it's usually more involved.

Next up is another example - this time its just "regular" base64. Note again the %COMSPEC% variable and reference to powershell.exe:


Again, Python can be used to decode the base64 encoded PowerShell:
 


This time, the decoded output is less than helpful. If we go back and take a look at the System event log entry more closely, we can see that there are references to "Gzip" and "Decompress":


Ahh.. so thinking in reverse, this data may have been compressed with Gzip then encoded using base64. Using python, I am going to write out the decoded base64 into a file so I can try unzipping it:

import base64
code="H4sICCSPh1kCADEAT..."
decoded=base64.b64decode(code)
f=open("decoded.gzip",'wb')
f.write(decoded)
f.close()
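(Side note: instead of writing the data out and reaching for 7zip, the zlib module can decompress Gzip data directly in Python - a quick sketch using the same truncated string:)

import base64, zlib
code = "H4sICCSPh1kCADEAT...."
# 16 + MAX_WBITS tells zlib to expect a gzip header
decompressed = zlib.decompress(base64.b64decode(code), 16 + zlib.MAX_WBITS)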

Using 7zip I am successfully able to unpack the gzip file! Since I did not get any errors, I may be on the right track:



Now if I open the unzipped file with a text editor, hopefully I will see some PowerShell code:



Ahh..what??? Ok - time to take a peek in a hex editor:


Not much help either. I am thinking this may be shellcode. As a next step, I am going to run it through PDF Stream Dumper's shellcode analysis tool, scdbg.exe:



Ta-Da! scdbg.exe was able to pull out some IOCs for me from the shellcode.

To summarize, here are the steps I took to decode this PowerShell entry:
  • Decoded the base64 PowerShell string
  • Wrote out the decoded base64 to a zip file
  • Decompressed the Gzip file using 7zip
  • Ran the binary output through scdbg.exe
As demonstrated above, there can be several layers to get through before you strike gold.

One final example:


This looks familiar. First step, decoding the Unicode base64 gives the following result - which contains more base64 code inside the base64 code! :


Obfuscated, then obfuscated again with compression. This is very typical of what I have seen in cases. This time, because there is no reference to "gzip" in the compression text, I am just going to save the second round of base64 to a regular zip file and try to open it again with 7zip:

decoded2="nVPvT9swEP2ev+IUR...."
f=open("decoded2.zip","wb")
f.write(base64.b64decode(decoded2))
f.close()

When trying to open up the zipped file with 7Zip I get an error:


And the same with the built-in Windows utility:


I also tried various python libraries to unzip the compressed file. After some research, I discovered that the compression used is related to some .Net libraries. Now, since I am a python gal, I wanted to figure out how to decompress this using Python so I could easily implement it into my scripting. Since Python is cross-compatible with Linux, Windows and Mac, .Net is not native to its core. As such, I used IronPython to do my bidding. (Now yes, you could absolutely use PowerShell to decode this, but what can I say - I wanted to do it in Python)

According to the IronPython website: "IronPython is an open-source implementation of the Python programming language which is tightly integrated with the .NET Framework. IronPython can use the .NET Framework and Python libraries, and other .NET languages can use Python code just as easily." Neat. Installing it on Windows is a breeze - just an MSI. Once installed, you simply run your scripts with ipy.exe (I'll show an example later).

Armed with this, I was able to write some python code (io_decompress.py) that decompressed the file using the .Net System.IO.Compression library:

#import required .Net libraries

from System.IO import BinaryReader, StreamReader, MemoryStream
from System.IO.Compression import CompressionMode, DeflateStream
from System import Array, Byte
from System.IO import FileStream, FileMode
from System.Text import Encoding
from System.IO import File

#functions to decompress the data
def decompress(data):
    io_zip = DeflateStream(MemoryStream(data), CompressionMode.Decompress)
    str = StreamReader(io_zip).ReadToEnd()
    io_zip.Close()
    return str

print "Decompressing stream..."
compressedBytes = File.ReadAllBytes("decoded2.zip")
decompressedString = decompress(compressedBytes)

f = open("decompressed.txt", "wb")
f.write(decompressedString)
f.close()

To run the script using IronPython was easy: ipy.exe io_decompress.py:


I was able to open the decompressed.txt file created by the script and was rewarded with the following plain text PowerShell script. Once again, note the IP address:
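(For what it's worth, plain Python can get there too: the .Net DeflateStream format is a raw Deflate stream, which zlib will handle if you pass it a negative window size. A quick sketch, if you don't want to install IronPython:)

import base64, zlib
decoded2 = "nVPvT9swEP2ev+IUR...."                # the embedded base64 from above (truncated)
raw = zlib.decompress(base64.b64decode(decoded2), -zlib.MAX_WBITS)  # negative wbits = raw Deflate, no header
open("decompressed.txt", "wb").write(raw)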


To summarize the steps taken for this event log entry:
  • Decoded Unicode base64
  • Decoded embedded base64 code
  • Decompressed resulting decoded base64 code
As we have seen from the three examples above, there are various techniques attackers may use to obfuscate their PowerShell entries. These may be used in various combinations, some of which I have demonstrated above. The steps taken vary for each case, and within each case itself. I usually see 2-3 variations in each case that are pushed out to hundreds of systems over the course of several months. Sometimes the steps might be: base64, base64, decompress, shellcode. It might also be: base64, decompress, base64, code, base64, shellcode. See how quickly this becomes like a Matryoshka doll? When I wrap up the series, I will talk about ways to automate the process. If you are using something like Harlan Carvey's timeline scripts to get text outputs, it becomes pretty easy.

So how to go about finding these and decoding them in your exams?

  • Look for event log ID 7045 entries containing "%COMSPEC%", "powershell.exe", "-encodedcommand", "-w hidden", "FromBase64String", etc.
  • Look for "Gzipstream" or "[IO.Compression.CompressionMode]::Decompress" for hints on what type of compression was used
  • Try running the resulting binary files through scdbg.exe, shellcode2exe or other malware analysis tools

Part 2 will be about PowerShell in the registry, followed by Part 3 on PowerShell logging and pulling information from memory.



How to mount Mac APFS images in Windows

APFS is the new file system for Mac OS, and so far, many forensic suites are playing catch up as far as support goes. As such, workarounds may need to be employed in order to conduct analysis on Mac OS APFS images. This short blog post will cover one of those workarounds -  mounting an APFS image in Windows.

Paragon has a free (preview) driver to mount APFS volumes in Windows!!!! Sweet!!!

APFS for Windows is going to look for a connected APFS drive. Since we have an image, we will need to mount the image as a SCSI device so the Windows APFS driver can see it. To do this, we will use Arsenal Image Mounter.


Mount the image using Arsenal Image Mounter. I had to select a sector size of 4096 for it to work, since that was the sector size in my image (if you need to know the sector size of your image, you can use a tool like mmls to check).



Download and install APFS for Windows from Paragon and launch it. It should automatically detect the APFS volume:



Now you can browse the APFS drive in Windows:



And add it to your favorite all in one tool, like X-Ways, as a logical drive:





Happy Hunting!

Mounting an APFS image in Linux

As a follow up to my post on how to mount APFS images on Windows, I wanted to post about how to mount an APFS image on a Linux system. If you are looking for how to mount an APFS image on a Mac, Sarah Edwards wrote an awesome blog post on how to do this. There is also another one over at BlackBag. If you are new to APFS, I would also recommend an informative video by Steve Whalen where he explains APFS in detail.

Options, options, options. It's always nice to have options in forensics. Sometimes one way may not work for you, or maybe you don't have access to a Mac at the moment. If you are on a Windows machine and need access to an APFS volume or image (E01 or raw), it's easy enough to spin up a Linux VM and get to work.

For my testing, I used an experimental Linux APFS driver by sgan81 - apfs-fuse. Note the word "experimental" - and read the disclaimers by the author. I would strongly recommend verifying any results with another tool or method, such as the one detailed by Sarah Edwards. However, this method works in a pinch, and at least you can start analysis until you get things working on a Mac. Oh - and according to the documentation, it will prompt you for a password if the volume is encrypted.

These instructions assume that you already have an image of the Mac, either in E01 or raw format (dd, dmg, etc). For my Linux distro, I used the free SIFT Workstation virtual machine, based on Ubuntu 16.04. If you are using another Linux distro, you may need to install additional dependencies, etc.

Preparing the SIFT Workstation

First things first, some dependencies need to be installed before apfs-fuse will work. As always, run sudo apt-get update before installing any dependencies:

sudo apt-get update
sudo apt-get install libattr1-dev

If you are running a version of SIFT prior to the one based on Ubuntu 16.04, a couple of additional dependencies may be needed. This includes a newer version of cmake, which can be installed by following the instructions on the cmake website. In addition to cmake, older versions of SIFT may also need the ICU library:

sudo apt-get install libicu-dev


Download and build apfs-fuse

Next, download the apfs-fuse driver from github:

git clone https://github.com/sgan81/apfs-fuse

Now compile it:

cd apfs-fuse
mkdir build
cd build
cmake ..
make

Mounting the E01 Image

Now that the SIFT workstation has been set up, we can mount the E01 image. If you have a dd/raw image, you can skip to the next step.

I like using the ewfmount tool in SIFT to mount E01s. Once mounted, there will be a "virtual"  raw image of the E01 file under the designated mount point. The syntax is simple, and works on split images as well (just specify the first segment for split images).

syntax:
ewfmount <image name> <mount point>
example:
ewfmount mac_image.E01 /mnt/ewf



If you have issues with ewfmount, check out this blog post for some alternative tools to mount ewf files.

Mounting the raw image to a loopback device

Now that we have a dd/raw image to work with  - either from mounting the E01, or because that is how the image was taken - we'll mount it to a loopback device. The Linux apfs-fuse driver needs the volume where the APFS container is. Because the disk image may contain additional partitions, we will need to figure out the offset where the APFS partition begins.

Below is a screen shot in X-Ways. Here we can see that X-Ways identified an APFS partition starting at sector 76,806, as well as 4096 bytes per sector (note, although X-Ways identified the partition as being APFS, it did not parse it out).





Alternatively, we can use the Sleuthkit tool mmls to list the partitions on the image. Here we can see that there is a "NoName" partition that starts right after the EFI System Partition. The offset is 76806, and it is the largest partition on the drive. The units are also displayed as 4096 bytes per sector:

To run mmls on the mounted EWF:
mmls /mnt/ewf/ewf1
To run mmls on a dd/raw image:
mmls mac_image.dd


To set up the loopback device, we will need to supply the APFS starting partition offset in bytes. Since the offset is given in sectors, we will need to convert from sectors to bytes by multiplying 4096 bytes per sector by the number of sectors:
4096 x 76806 = 314597376
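(A quick way to double check the math from the shell: echo $((76806 * 4096)) prints 314597376.)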

Armed with this information, we can mount the "NoName" partition, aka the APFS partition, to a loopback device:

For the mounted EWF file:
losetup -r -o 314597376 /dev/loop0 /mnt/ewf/ewf1
for the dd/raw image:
losetup -r -o 314597376 /dev/loop0 mac_image.dd


In the syntax above, -r is read only, and -o is the offset in bytes to the start of the APFS partition.

Mount up the APFS filesystem

Ok! Finally! Now we are ready to mount up the APFS partition to the filesystem. The apfs-fuse binary will be in a folder named "bin" within the build folder created earlier. Change into that directory and run apfs-fuse, pointing it at the loopback device and a mount point:

mkdir /mnt/apfs
./apfs-fuse /dev/loop0 /mnt/apfs




In my testing, the cursor just blinks and does not give a status message. I opened another terminal  and did an ls command on the mount point to see if it mounted ok:


Success! Now I can run AV Scans, view files, and export out any files as needed.

As I mentioned before - this is an experimental driver and all results should be verified. Hopefully as time passes we will have more ways to mount and access APFS images in Linux, and our mainstream tools.

Malicious PowerShell in the Registry: Persistence

This is the second part in my series on Finding and Decoding Malicious PowerShell Scripts. My first blog post walked through how to find malicious PowerShell scripts in the System event log, and the various steps to decode them. In this post, I wanted to discuss another location where malicious PowerShell scripts might be hiding - the Registry.

The Registry is a great place for an attacker to establish persistence. Popular locations for this are the Run keys located in either the Software Hive, or in a User's ntuser.dat hive. For a list of run keys, check out the Forensic Wiki.

A technique I've seen in some cases I've worked is an attacker using PowerShell in the Run key to call another registry key that holds the base64-encoded payload.

Let's see what an example of this looks like. Using Eric Zimmerman's Registry Explorer I've navigated to the following registry key: HKLM\Software\Microsoft\Windows\CurrentVersion\Run. Underneath the value "hztGpoWa" the following entry is made:

 

You can also use the soft_run plugin from Harlan's RegRipper to pull this information:

rip.exe -r SOFTWARE -p soft_run

 Output:


(for the NTUSER.DAT hive, use the user_run plugin)

So what does this command do? %COMSPEC% is the system variable for cmd.exe. This uses cmd.exe to launch PowerShell in a hidden window. It then uses the PowerShell command  "Get-Item" to get another registry key - HKLM:Software\4MX64uqR, and the value Dp8m09KD under that key.

Browsing to the HKLM:Software\4MX64uqR key in Registry Explorer reveals a whole mess of base64:


Another way to pull base64 like this from the registry is to use the "sizes" plugin from RegRipper. This will search the registry hive for values over a certain threshold and dump them out:

 rip.exe -r SOFTWARE -p sizes

(A thanks to Harlan for updating this plugin! Make sure to update it if you haven't recently.)
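If you want to script pulling the blob out for offline decoding, the python-registry library can do it in a few lines. A quick sketch, using the key and value names from this example (adjust them to whatever names you find in your case):

# dump the suspicious value from an exported SOFTWARE hive with python-registry
from Registry import Registry

reg = Registry.Registry("SOFTWARE")        # path to the exported hive file
key = reg.open("4MX64uqR")                 # the key referenced by the Run entry
for value in key.values():
    print(value.name())
    print(value.value()[:60])              # the base64 payload, ready for decoding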

To see the detailed steps of how to decode this base64, take a look at my earlier blog post on decoding malicious PowerShell scripts.

Here are the high-level steps to decode it:
  • Decode unicode base64 in registry key
  • Decode and decompress (gzip) embedded base64 
  • Decode another round of embedded base64
  • payload = shellcode
  • Try running scdbg.exe or strings over the shellcode for the resulting IP address and port
The resulting code more often than not is a way to establish a Meterpreter reverse shell.

Another way to find instances of malicious PowerShell in the registry is to search the registry for "%COMSPEC%".

I used Registry Explorer and its handy Find command to do this. Make sure to have the right "Search in" boxes selected:


While this example showed registry keys and values with random names - this is not always the case. These names can be whatever the attacker wants and they will not always be an obvious tip off like a random name.

For my example, I used Metasploit to install this persistence mechanism in the registry. Check out all the options available. As mentioned above, the registry key/value names may be set to anything:


My next post on malicious PowerShell scripts will cover PowerShell logging and pulling information from memory. Happy Hunting!

Triage Collection and Timeline Generation with KAPE

As a follow up to my SANS webcast, I wanted to post detailed instructions on how to use KAPE to collect triage data and generate a mini-timeline from the data collected. As much as I hate to say "push button forensics", once you get KAPE up and running, it really is only a matter of a couple of clicks and you are off to the races.

I won't go into detail here on the benefits of collecting triage data or timelining (of which there are many!), but instead focus on how to set up KAPE to do it. If you would like more details on the above, please watch my webcast.

To get the timelining to work in KAPE you will need to do three things to get it set up. These will each be detailed in this post:

1) Download/Upgrade KAPE
2) Grab the timeline Targets and Modules
3) "Install" the executables called by the KAPE modules I wrote

As KAPE gets updated, I expect step #2 to drop off as it will be rolled out with the newer versions.

KAPE Basics


KAPE (Kroll Artifact Parser and Extractor) is a free tool written by Eric Zimmerman, and available for download on the Kroll website. From the website: 
"KAPE is a multi-function program that primarily: 1) collects files and 2) processes collected files with one or more programs. KAPE reads configuration files on the fly and based on their contents, collects and processes relevant files. This makes KAPE very extensible in that the program’s author does not need to be involved to add or expand functionality"
To this end, I have written a target that defines what files to collect to create a timeline, and about 20 modules that tell KAPE how to process the data - AKA - make a timeline.

In order to do this, you will need to grab the new target and timeline modules and the binary files that the modules call.

Step 1 - Download/Update KAPE

If you don't have KAPE, download KAPE from here.

If you already have KAPE, you will need to have version 0.8.6.3 or greater. To update KAPE, run the Get-KAPEUpdate.ps1 PowerShell script in the root of the KAPE directory.

Step 2 - Grab the Timeline Modules and Targets

The Targets in KAPE define what files will be collected. The Modules define what executables will be run against the files that are collected.

To grab the latest Targets and Modules from GitHub, run gkape.exe and click the "Synch with GitHub" button at the very bottom of KAPE. This will get you the latest Targets and Modules.


The timeline modules I created should be in the \Modules\timelining subfolder. If you do not have this folder after syncing (Eric was working on implementing the syncing of module subfolders at the time of this blog post), you will need to grab the timelining folder directly from github:

Step 3 - Grab the executables

The timelining modules call specific executables to run against the targets. For example, if we want to parse out the event logs, the program EvtxECmd.exe is called by KAPE to parse the artifact. The executables are placed in the KAPE bin folder:



For the timeline modules, here are the executables you will need to download, and the locations where they go under the bin folder:

MFTeCmd
Purpose: Parse $MFT file
Instructions: should already be in \modules\bin. Tested with version 0.4.4.4

EvtxECmd.exe
Purpose: Parse out *.evtx event logs
Instructions: should already be in \modules\bin\EvtxECmd folder. Make sure you have version 0.5.1.0 or newer as the older version will not work for timelining.


Regripper v 2.8
Purpose: Run regripper plugins against various registry hives
Instructions: Make the folder \modules\bin\regripper. Place rip.exe, p2x5124.dll and the plugins folder in the regripper folder:




Harlan Carvey Timeline Tools
Purpose: parse event logs, timeline registry hives, convert timeline formats, etc.
Instructions: Create the folder \modules\bin\tln_tools. Place bodyfile.exe, evtparse.exe, parse.exe, regtime.exe and p2x5124.dll in the \tln_tools folder.

Mari DeGrazia Timeline Tools
Purpose: Convert between file formats
Instructions: Place in the tln_tools folder created previously



Step 4 Generate that timeline!
Once you have the targets, modules and executables set up, you can generate a timeline.
First, run the Target options in KAPE to grab the triage data. This can be done against a mounted image, or live with an external USB drive attached to the system of interest. You could even use something like F-Response to run KAPE to do a remote collection.

Check "Use Target Options"
Target Source: The drive letter you want to collect the files from. On a mounted image, this would be the drive letter that the image is mounted as. For a live collection, this would most likely be C:
Target Destination: This would be where you want the files copied to. Most likely an external drive for a live collection, or a folder on your analysis computer for a mounted image.
Targets: Select MiniTimelineCollection
Hit execute when ready:



KAPE will now copy all the required files and place them into the Target Destination folder.

Now that the targets are collected, the modules will need to be run against the collected files. 

Select "Use Module Options"
Module Source: This will be where the files were copied to by KAPE 
Module Destination: This will be where the resulting timeline will be created
Modules: Select Mini_Timeline and if desired, Mini_Timeline_Slice_by_Daterange. The daterange will give you a smaller timeline with a specified date range.
Variables: Add two variables using the Key and Value fields: computerName and dateRange. computerName will be the name of the system you are analyzing. dateRange is the date range you want the smaller timeline to cover; it has to be in the format mm/dd/yyyy-mm/dd/yyyy.
Once completed, select execute:




After running, you will have two timelines created in CSV format. The other files created are temporary working files created during the process. These CSV files can be opened using a text editor, TimelineExplorer, Excel, or any other CSV tool of your choice. If you want some more details on how to do the timeline analysis, and where to get started with analysis of these files, watch my webcast towards the end.
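As an aside, the same two steps can also be scripted with kape.exe from the command line - gkape displays the command line it is building, which is the easiest way to confirm the exact syntax. Roughly, the collection and module runs look something like this (the paths and computer name are just examples):

kape.exe --tsource C: --tdest C:\kape\triage --target MiniTimelineCollection
kape.exe --msource C:\kape\triage --mdest C:\kape\timeline --module Mini_Timeline --mvars computerName:HOSTNAME^dateRange:01/01/2020-02/01/2020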



I highly recommend reading Harlan Carvey's blog posts on timelining, as KAPE is just a way to automate this process.

Here is a detailed breakdown of what the timeline targets and modules I created are doing:

Targets collect the following files:
$MFT
Registry hives (SAM, SECURITY, SOFTWARE, SYSTEM, NTUSER.DAT, UsrClass.dat)
Event logs (*.evt, *.evtx)

Timeline Modules will include the following in the timeline:
File MACB timestamps
Last write times of the above registry hive's keys
RegRipper plugins run: muicache, userassist, AppCompatCache, Services
Event Logs with Event ID and descriptions


For more detailed information on KAPE, including how to write modules and targets, check out the KAPE documentation.

Detecting Lateral Movement with WinSCP

RDP is a common way for an attacker to move laterally within an environment. Forensically, when an attacker uses RDP we can use artifacts such as shellbags, link files and jumplists on the remote system to see what was accessed while the attacker was RDPed into the system.

Another way an attacker can access a system remotely is to use a program called WinSCP. Using WinSCP, they can browse folders and files on a remote system, copy folders and files back to the system they are currently on, and even search the remote system for files!



The scenario I am going to focus on here is one where the attacker has already compromised a system on the network, and is using WinSCP to browse to other computers on the same network. In this case, they could browse to HR systems looking for tax information, servers looking for databases, or workstations looking for intellectual property (IP) data. (Note - TLDR at the bottom)

Because they are not using the Windows Explorer shell, this leaves very few artifacts showing what they were doing on the remote system in comparison to RDP. Basically, they get a browse-for-free card. They can even open up remote documents from within a WinSCP text editor.

An argument may be made that RDP is available by default on Windows systems while FTP/SSH is not. Well, guess what. Starting with Windows 10 1809 and Server 2019 it is part of the "optional features" that can be easily installed on Windows. In fact, a simple PowerShell command can be used to install it. And, on top of that, it automatically creates a firewall rule and adds an SSH user. How thoughtful!

powershell Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
powershell Start-Service sshd

And as a bonus, add the command to have the service start up automatically:
powershell Set-Service -Name sshd -StartupType 'Automatic'

It is not uncommon for an attacker to follow the below steps once they have breached a network:

1) Dump admin credentials
2) Enumerate systems to get IP addresses/Hostnames
3) Push out PowerShell scripts to all systems en masse that do things like disable firewalls, install backdoors and disable antivirus.

It's a simple task to add one more command to install SSH, and boom - all of these systems are now accessible to connect to using WinSCP.

Oh - and did I mention that WinSCP comes with a portable version? The portable version makes it easy for an attacker to download and use. Many blog posts reference a registry key that contains settings for WinSCP. However, the portable version does not store settings there.

So - now that we know WinSCP can be used in this manner, what artifacts can we find forensically to help determine what was done on both the "staging" system and the remote systems?  I did some testing on some Windows 10 1909 machines to see what artifacts were left behind using the Portable version of WinSCP, v.5.17.

WinSCP Client System Artifacts


Most of the artifacts related to WinSCP are going to be on the host where it was run. Running WinSCP generates many of the common artifacts seen with file execution: Prefetch, shimcache, amcache, userassist etc. However, the artifacts "for the win" will be the WinSCP.ini file and the SRUM database.

WinSCP.ini file

WinSCP.ini is a text file that contains configuration settings. It will be located in the same directory as the WinSCP.exe file. At the end of a WinSCP session, the user is prompted to save their workspace:





Even without saving the workspace, WinSCP saves valuable information in the WinSCP.ini file that can be useful to the investigation. This includes systems connected to, usernames, places on the local system where files were saved to from the remote system, and the last path that was accessed on the local system. Examples of each of these configuration sections are below:

Systems connected to: 

[Configuration\CDCache]
ItSupport@169.254.249.229=412F433A2F55736572732F<SNIP>
mdegrazia@169.254.171.129=412F433A2F55736572732F<SNIP>
ITSupport@DESKTOP-PV2TN0G=412F433A2F55736572733D<SNIP>


Folders where files have been saved:

[Configuration\History\LocalTarget]
0=C:%5CUsers%5CCrashOveride%5CDocuments%5CExfil%5C*.*
1=C:%5CUsers%5CCrashOveride%5CDocuments%5CSystem3%5C*.*

Last folder opened on the local system:

[Configuration\Interface\Commander\LocalPanel]
DirViewParams=0;1|150,1;70,1;120,1;150,1;55,0;55,0;@96|5;4;0;1;2;3
StatusBar=1
DriveView=0
DriveViewHeight=100
DriveViewHeightPixelsPerInch=96
DriveViewWidth=100
DriveViewWidthPixelsPerInch=96
LastPath=C:%5CUsers%5CCrashOveride%5CDocuments%5CExfil 


If the session settings are saved, you get a bonus section called Sessions, with the saved session name. The default is "My Workspace". This saves the last local directory and remote directory, along with a password. Check out https://github.com/winscp/winscp/blob/master/source/core/Security.cpp for information on the password encryption.

[Sessions\My%20Workspace/0000]
HostName=169.254.44.249
UserName=ITSupport
LocalDirectory=C:%5CUsers%5Cmdegrazia%5CDocuments%5CSystem3%5CW2s
RemoteDirectory=/C:/Users/Acid%20Burn/Documents/W2s
IsWorkspace=1
Password=A35C435B9556B1237C2DFE15080F2<TRUNCATED>

The WinSCP.ini file appears to be updated when the session closes. As such, using the last modified date of the WinSCP.ini file with a prefetch timestamp could give you an idea of how long the last session was.

As you can see by the information above, looking at this .ini file can help an examiner determine what an attacker may have been browsing to on a remote system, and what they may have saved on the local system, even if it was subsequently deleted.
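One small tip: the paths in the .ini file are URL-encoded, and the CDCache data looks to be a hex-encoded string, so a couple of lines of Python make the entries above readable:

from urllib.parse import unquote   # Python 3

print(unquote("C:%5CUsers%5CCrashOveride%5CDocuments%5CExfil%5C*.*"))
# C:\Users\CrashOveride\Documents\Exfil\*.*

print(bytes.fromhex("412F433A2F55736572732F").decode())
# A/C:/Users/ ... (the visible prefix of the first CDCache entry)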

SRUM Database  

The SRUM database collects information every hour on network usage on a per-application basis. To get an idea of how much data may have been copied/downloaded using WinSCP, it can be an excellent resource. Parsing the SRUM database with SRUM Dump by Mark Baggett shows that a high amount of data was transferred using WinSCP:


As demonstrated above, if you suspect WinSCP was used, parsing out the database can provide some details on how much data was transferred, what user account was associated with it, and the time frames that it occurred. Beautiful!


WinSCP Remote System Artifacts

There are several things you can look for on a remote system to determine if WinSCP was used to browse it: Event log entries, evidence of OpenSSH being installed and file system timestamps. Note - in my example and for my testing I installed OpenSSH which is part of Windows. WinSCP can use other FTP/SSH servers to connect to. Keep that in mind if you suspect WinSCP may have been used - your artifacts may vary.

OpenSSH artifacts

As mentioned previously, in order for WinSCP to connect to a system, an FTP or SSH server must be running to accept the connection. Look for artifacts indicating these services exist. For OpenSSH, look for c:/Windows/System32/OpenSSH/sshd.exe, sshd.exe prefetch files, and the sshd.exe service. Timestamps associated with these entries may help determine the first time the attacker used it to connect. When I installed OpenSSH, it also created a user account, which can be located in the SAM hive (shown here parsed with RegRipper):

Username        : sshd [1003]
SID             : S-1-5-21-1445295406-4253784506-242647837-1003
Full Name       : sshd
User Comment    :
Account Type    :
Account Created : Sun Feb 23 06:48:08 2020 Z
Name            : 
Last Login Date : Never
Pwd Reset Date  : Sun Feb 23 06:48:08 2020 Z
Pwd Fail Date   : Never
Login Count     : 0
  --> Password does not expire
  --> Normal user account

 

Event Log Entries

As expected there is an Event ID 4624 associated with the WinSCP client login. The login is a type 5 with the account name sshd_1860 and the domain of  VIRTUAL USERS, and the process of sshd.exe:




This is followed by an entry in the OpenSSH Operational event log that records the connecting IP address and account used by WinSCP to connect:


File Timestamps

Once logged in, the attacker can use WinSCP to effectively browse through folders, and even open up files, leaving very little trace on the remote system. During testing, I noticed that an indication this was occurring was that accessed dates were changed on folders and files clicked on or copied. However, access dates are NOT a reliable artifact to use when drawing conclusions and must be used with other corroborating artifacts.

Below is an example of files and folders that were copied. The "Teslacam" folder was copied, which results in the access dates of all the copied files being updated on the remote system:


SRUM Database
Once again, the SRUM database really shines here to know if something is amiss. Looking at the SRUM database and sorting by "Bytes Sent" shows a large amount of data being sent during this time frame by the application sshd:




<TLDR>

So, in summary: WinSCP can be used by attackers as an alternative to RDP. The use of WinSCP to access systems in an environment appears to leave a smaller footprint than using RDP. Now that SSH can be easily installed into Windows 10 and Windows Server 2019, I anticipate we may see WinSCP being used more in network breach cases to move around laterally within the environment.

Look for the WinSCP.ini file on the host system and the SRUM database. For the remote system, look for Event ID 4624 entries related to SSH clients/servers and Application logs for FTP/SSH servers. Check the SRUM database for data transfers related to SSH clients. Once a timeframe is known, check for large numbers of files that have last access timestamps in the same timeframe (but you know the drill with last access dates - be very careful using these).

Sources/References:

WinSCP: https://winscp.net/eng/index.php
Installing OpenSSH on Windows: https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse
SRUM dump : https://github.com/MarkBaggett/srum-dump