
Browser-Based Threat Alert: Iranian Government Actors Mimic Think Tank for Targeted Phishing Attacks

Secureworks Counter Threat Unit researchers published results from an investigation into suspected Iranian government-linked actors targeting researchers who document the suppression of Iranian women and minority groups. According to the report, the actors appear to be associated with APT35, a group suspected of operating at the behest of Iran’s Islamic Revolutionary Guard Corps (IRGC).

As with most Advanced Persistent Threat (APT) activity, the techniques used in these operations were meticulous and highly targeted, relying on extensive knowledge of the targets and personalized, persistent social engineering. The attackers established credible social media accounts purporting to belong to members of the Atlantic Council, an American international affairs think tank.

Specifically, Secureworks researchers investigated one of the Twitter accounts used in the operation, purportedly belonging to a “Sara Shokouhi”. When the researchers reached out to the account, the actors offered legitimate-seeming information as bona fides, claiming to be a colleague of a named Atlantic Council Senior Fellow. However, the supposed colleague publicly denied working with “Shokouhi”, and the photos used on the Twitter profile were taken from the Instagram account of a Russia-based tarot card reader.

This profile fits an established pattern: APT35 actors routinely impersonate actual Atlantic Council employees to gain the trust of their targets, then abuse that trust for further attacks or intelligence collection.

The Sara Shokouhi persona was used to contact multiple targets, all consistent with typical targets of the IRGC. The interactions were informed, intentional, and well-choreographed, designed to gradually build the victim’s trust. In many cases, APT35 actors initiated a series of benign interactions over time using email, social media, and other online forums. These benign interactions included sending legitimate links so that targets became accustomed to clicking links provided by the actors. Eventually, however, the actors would send a malicious link that led the target to download malware or enter credentials into a phishing site.

Extreme Lengths to Abuse Trust

Criminal actors normally rely on high-volume attacks with generic phishing messages: even if only a small subset of messages gets through, and an even smaller number fools victims, the sheer volume ensures enough successful attacks to make the effort profitable. APT actors take the opposite approach. Because only a small number of people hold the valuable information they are after, and because government backing gives them vast resources, they can afford to play the long game and sink substantial resources into attacks on specific individuals.

This means that, unlike typical generic mass phishing attacks, the social engineering is personalized and can take advantage of specific characteristics of the target. These attacks don’t bear the hallmarks most security training teaches people to look for, like poor grammar or typosquatted domains.
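To illustrate why such training falls short here, consider the kind of typosquat check that awareness programs and simple filters rely on: flagging domains that sit within a small edit distance of a trusted name. This is a minimal sketch, not any vendor’s actual detector; the trusted-domain list and distance threshold are assumptions for illustration. A well-resourced APT simply avoids tripping this heuristic by using legitimate platforms and clean infrastructure.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Assumed example list; a real filter would use a much larger inventory.
TRUSTED_DOMAINS = ["atlanticcouncil.org", "twitter.com"]

def looks_typosquatted(domain: str, max_dist: int = 2) -> bool:
    """Flag domains near (but not identical to) a trusted domain."""
    return any(0 < edit_distance(domain, t) <= max_dist
               for t in TRUSTED_DOMAINS)
```

A lookalike such as `atlanticc0uncil.org` is caught by this check, but the APT35 activity described above used real social media accounts and legitimate links for most of the engagement, so there was nothing for a lexical heuristic like this to flag.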

Your Browser Shouldn’t Trust Anyone

No matter how well people are trained, and no matter how vigilant they are against attempts to phish them, security ultimately comes down to fallible human judgment and trust. That’s why organizations must adopt zero-trust security models that inherently distrust that which users trust, and it’s why we developed ConcealBrowse. Regardless of how trustworthy your users think a URL might be, ConcealBrowse scans each one using state-of-the-art intelligence, our proprietary threat model, and computer vision to identify and block phishing attempts and malware downloads.
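In the abstract, a zero-trust stance on URLs means every navigation gets a verdict, and anything that cannot be positively vouched for is contained rather than trusted. The sketch below is a generic illustration of that decision flow, not ConcealBrowse’s actual implementation; the intelligence sets and the `ISOLATE` handling are placeholders for illustration only.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    ISOLATE = "isolate"  # e.g., open in a disposable, sandboxed session

# Placeholder intelligence; a real system would query live threat feeds
# and run phishing-detection models rather than consult static sets.
KNOWN_BAD = {"evil-login.example"}
KNOWN_GOOD = {"atlanticcouncil.org"}

def check_url(host: str) -> Verdict:
    """Zero-trust stance: every URL is evaluated, and unknowns are
    isolated instead of being given the benefit of the doubt."""
    if host in KNOWN_BAD:
        return Verdict.BLOCK
    if host in KNOWN_GOOD:
        return Verdict.ALLOW
    return Verdict.ISOLATE
```

The key design choice is the default branch: a traditional allow-list or block-list model lets unknown destinations through, while a zero-trust model treats them as untrusted until proven otherwise.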

You can experience the power of our Zero-Trust at the Edge security model today by requesting a free ConcealBrowse trial, or by scheduling a demo with our team of experienced security professionals.

Written by: Conceal Research Team