A while ago I wrote a script to perform what I called poor man’s forensics. The script was meant as a way to utilize the native operating system to extract some minimal data from exotic filesystems to be able to create a timeline and identify possible abnormalities. As a reminder to myself here are some additional raw notes, commands and resources on performing (forensic || incident response || compromise assessments) investigations on ZFS / Solaris environments. I encountered ZFS / Solaris during some of the FoxCert investigations I participated in.
These raw notes are by no means complete; do not follow them blindly, and always make sure you are working on a copy of a copy of a copy of the real evidence.
Creating disk images
There are several methods; not all of them have been properly tested yet. There are two types of devices under Solaris, block and raw (character) devices; they can be distinguished by the ‘r’ prefix:
/dev/dsk = block device
/dev/rdsk = character device
You can use all the known methods, like hardware imagers, or a write blocker in combination with dd.
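For example, a minimal dd image of an entire disk taken through a write blocker could look like this (the device name c0t0d0s2 and the output path are hypothetical and depend on your setup; on SPARC slice 2 conventionally covers the whole disk):
dd if=/dev/rdsk/c0t0d0s2 of=/mnt/evidence/disk.img bs=128k conv=noerror,sync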
Analyzing the disk image
Let’s assume that you have created or received a raw dd image from a disk containing the ZFS filesystem. You usually want to find the partitions first; under Linux you can do that using partx for example, which should also exist under Solaris. This is only important if you are dealing with an image taken from an x86 system. Images from a SPARC machine can be mounted without the need to figure out the partition/slice layout. The following commands are all performed on a SOLARIS machine, because the open source implementations only support zpool version 28 or lower (at the time of writing).
Let’s mount the image first:
lofiadm -a /path/to/dd/image
The above command will probably return something like ‘/dev/lofi/1’ which is the loopback on which the image has been mounted.
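When you are done with an image you can detach it from the loopback again; the device argument is the one that lofiadm reported:
lofiadm -d /dev/lofi/1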
Optional: Should only be needed for x86 based systems
With the image mounted and with no partx available you could use the following commands to view some rudimentary info:
fdisk -R -G /dev/rlofi/1
fdisk -R -v -W -G /dev/rlofi/1
The most important switch is ‘-R’ since it indicates read only. Unfortunately most tools under Solaris will not know how to interpret the dd image, so you will need to extract the partition first once you know the sector offset at which it starts (remember: byte offset = sector offset * sector size). After you’ve extracted the partition you can mount it on the loopback again:
lofiadm -a /path/to/extracted/partition
Which should now return something like ‘/dev/lofi/2’, it also automatically creates a ‘/dev/rlofi/2’ in case you need the raw character device.
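As an illustration of the extraction step above, assuming (hypothetically) a sector size of 512 bytes and a partition that starts at sector 2048 and is 1048576 sectors long:
dd if=/path/to/dd/image of=/path/to/extracted/partition bs=512 skip=2048 count=1048576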
You now have a ZFS partition mounted on a loopback device, so you can start to investigate it. For a partial ‘low level’ investigation of the ZFS structure you can use the ‘ZFS debugger’ zdb:
zdb -l /path/to/dd/image
zdb -l /dev/rlofi/1
zdb -uuuu -l /dev/rlofi/1
The ‘-l’ commands will only display the label, while the ‘-uuuu’ variant will also display the uberblocks. Besides that, you also want to be able to actually look at the filesystem itself:
View the information about the ZFS pool and the state:
zpool import -o readonly=on -d /dev/lofi/2
The following command actually imports the pool under an alternate root instead of ‘/’. Additionally, make sure that the mount points are not automatically mounted (-N). When you are dealing with a zpool that consists of multiple devices, all of those devices should be attached to the local loopback. Also make sure that the alternate root directory doesn’t already exist; zpool will take care of creating it by itself.
zpool import -o altroot=/<name> -N -o readonly=on -d /dev/lofi <pool_name>
If errors occur, like ‘I/O error’, or it states that the zpool is corrupted, you can add the ‘-F’ or ‘-FX’ flag to automatically attempt to fix it. Be aware however that the last couple of transactions will be permanently lost, because zpool discards transactions until it finds a state that is not corrupted.
Now that the zpool is imported you can view the mount points:
zfs list
Now you can mount all the mount points manually; this is important because it’s the only way to see which mount points will be mounted and which ones will give errors. The following commands illustrate an example mounting session; the idea is that you start with the ROOT dataset and work your way up from there. An important thing to note is that these commands use ‘zfs mount -O’. This is an overlay mount, which makes the data that was already mounted underneath invisible. So always make sure that you first list the data on a mount point BEFORE you overlay another mount point on top of it:
zpool import -o readonly=on -o altroot=/<name> -d /dev/lofi/1 -N -f <pool_name>
zfs mount <pool_name>/<mountpoint_name>
zfs mount -a
zfs mount -O <pool_name>/<mountpoint_name>
zfs mount -a
Destroy (remove/unmount from the system) the pool after you are done:
zpool destroy <poolname>
The important flag in this case is ‘-o readonly=on’ to prevent any writes to the ZFS partition, although it has not yet been verified that this blocks writes 100%.
Memory images
One of the ways to create a memory image is by using the savecore command, but you need to ensure a couple of things:
- dumpadm
  - The output shows whether ‘all’, ‘kernel’ or ‘process’ memory is dumped
- dumpadm -c ‘all’
  - Changes the settings to make sure all memory is dumped (the default is kernel-only on Solaris 10)
- dumpadm -z off (optional)
  - Disables compression of the memory dump (the default is compressed)
- savecore -v -L
  - Makes a memory dump and saves it to the default location
  - ‘-v’ shows any errors that occur during the creation of the memory dump
  - ‘-L’ forces Solaris to create a memory snapshot of the running system
- savecore -v -L [new location]
  - ‘[new location]’ if you want to save the output to a different location
  - Recommended if you don’t want to overwrite data on the current disk
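Putting the above together, a minimal capture session could look like this; the destination /mnt/evidence is a hypothetical path to a separate evidence disk, and the resulting file names depend on the dump sequence number:
dumpadm -c all
dumpadm -z off
savecore -v -L /mnt/evidence
ls /mnt/evidence   # expect something like unix.0 and vmcore.0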
Importing ZFS snapshots read only
- Everything must be performed as root
- Make sure that a pool is available to work with and that it has enough space
  - zpool list -> check sizes
  - zpool status -> check if online
  - zfs list -> check which file systems are already available
- Create a ZFS file system on the data pool
  - zfs create [datapoolname/hostname]
- Enable read only
  - zfs set readonly=on [pool/filesystem]
- Check if the change is committed
  - zfs get all [poolname]
- Start to receive the snapshot
  - zfs receive -Fv [poolname/filesystem] < snapshot_file
- Check what has been imported; not everything will be mounted (normally just the top level)
  - zfs list
- Mount the missing dirs, starting with the top level first (don’t forget to mount the root dir)
  - zfs set mountpoint=[unmounted dir] [datapool/filesystem/mountpoint]
  - zfs get all [poolname/filesystem]
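As a worked example of the sequence above, assuming a local pool called datapool and a snapshot file received from a host named victim01 (both hypothetical names):
zfs create datapool/victim01
zfs set readonly=on datapool/victim01
zfs receive -Fv datapool/victim01 < victim01.zfs
zfs list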
Importing problematic ZFS snapshots
These are snapshots that give mount errors when importing, for example because the first mount point creates a read-only mount.
- mkdir /path/to_desired_mountpoint
- mkdir /zones/tempsnap
- zfs receive -o mountpoint=legacy -Fv datapool/tempsnap < tempsnap.zfs
- Now the important part is to figure out where the root is, usually it’s tempsnap/ROOT/actual_root_name
  - mount -F zfs datapool/tempsnap/ROOT/actual_root_name /zones/tempsnap
- Now also mount the rpool part by doing
  - mount -F zfs datapool/tempsnap /zones/tempsnap/rpool
- From now on you can mount the rest of the mount points on the correct dirs (see the example after this list)
- If you are unsure what the root is, compare it to the actual rpool
  - zfs list | grep rpool
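For the remaining datasets you repeat the same legacy-mount pattern on the corresponding directories; for example, assuming the snapshot contains a separate var dataset under the root (hypothetical):
mount -F zfs datapool/tempsnap/ROOT/actual_root_name/var /zones/tempsnap/var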
Export mounted ZFS filesystems over NFS
Since for some odd reason the commands found online don’t work with our imported snapshots, the following does work (Solaris 11 on x86):
- First share on ZFS level
  - zfs set share.nfs.sec.sys.root=* datapool/tempsnap
  - zfs set share.nfs=on datapool/tempsnap
- Now share on NFS level
  - share -F nfs -o root /zones/tempsnap
- Now mount from a remote machine
  - sudo mkdir /mnt/zfs_mount
  - sudo mount <ip>:/zones/tempsnap /mnt/zfs_mount
- Verify everything works (permission errors go to stderr, hence the redirect)
  - sudo find /mnt/zfs_mount 2>&1 | grep denied
  - Should not have any output
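If the mount fails, you can first check from the remote machine whether the export is actually visible; this assumes showmount is available on the client:
showmount -e <ip>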
Broken zpool stuff
- Display non-imported zpools
  - zpool import
- Force import of the zpools (be careful to preserve the original devices)
  - zpool import -f [zpool name]
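If the pool lives on loopback-attached image files rather than physical disks, you can point these commands at the lofi directory as shown earlier; the pool name evidencepool is hypothetical:
zpool import -d /dev/lofi
zpool import -f -o readonly=on -d /dev/lofi evidencepool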
Verifying ELF signatures
In principle all the executables shipped by Sun/Oracle should be signed. During the investigation you can leverage this to filter out all the executables of which the signature is still valid, using the following commands:
find /<pool_mount_dir>/ -type f -exec file {} \; | grep 'ELF ' | cut -d: -f1 > /output.txt
for i in $(cat /output.txt); do elfsign verify $i 2>&1; done >> /elfverify_output.txt
You can then further filter the list into a list with “good” files and a list with “bad” files.
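A minimal sketch of that filtering step, assuming elfsign mentions ‘passed’ somewhere in its output for valid signatures (check the exact wording on your version; the output file names are arbitrary):
grep -i 'passed' /elfverify_output.txt > /elf_good.txt
grep -iv 'passed' /elfverify_output.txt > /elf_bad.txt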
Tools
- EnCase
- zdb
- mdb
- https://github.com/max123/mdbzfs/blob/master/rawzfs/common/rawzfs.c (modified version)
- zfs-fuse (Linux, does not support all features)
Resources
- www.giis.co.in/Zfs_ondiskformat.pdf
- zfs on disk format
- www.osdevcon.org/2008/files/osdevcon2008-max.pdf
- where is my data, zfs internals
- www.osdevcon.org/2009/slides/zfs_internals_uli_graef.pdf
- more zfs internals
- http://www.eall.com.br/blog/?page_id=588
- ZFS internals series
- www.dfrws.org/2009/proceedings/p99-beebe.pdf
- Digital Forensics implications of ZFS
- www.csnc.ch/misc/files/publications/solaris_evidence_gathering_v1.2.pdf
- live solaris evidence gathering
- http://www.joyent.com/blog/zfs-forensics-recovering-files-from-a-destroyed-zpool
- http://mbruning.blogspot.nl/2008/08/recovering-removed-file-on-zfs-disk.html
- recovering deleted files
- https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
- Nice overview of several ZFS aspects
- https://blogs.oracle.com/mmusante/entry/seven_years_of_good_luck
- In depth explanation of ‘zpool split’
Some offline files, just in case the source goes down:
Addendum from a reader comment: to get a mountable disk image, one should dd-copy a slice, e.g.:
dd if=/dev/rdsk/c0t0d0s0 of=disk.img bs=128k
The last part of the name (s0) indicates a slice.