Almost as soon as email became widely used, crooks and scammers began using it as a means to defraud people.

In today’s world, malicious fake emails continue to be a huge problem for individuals and businesses.

Businesses make lucrative targets
Losses due to Business Email Compromise (BEC) scams are escalating, and criminals are targeting organisations with emails that, more often than not, slip past conventional email security solutions because they carry no malicious payloads or links.

The problem stems from the fact that it’s easy to spoof senders or compromise email accounts.
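
As a rough illustration (with invented names and domains), the short Python sketch below shows why spoofing is so easy: the From header is just text chosen by the sender, so the display name can claim to be anyone, and the message carries no link or attachment for a filter to flag.

```python
from email.message import EmailMessage

# Illustrative only: every value below is made up. The From header is plain
# text supplied by the sender, so the display name can impersonate anyone.
msg = EmailMessage()
msg["From"] = '"Jane Smith, CEO" <jane.smith@examp1e-corp.com>'  # look-alike domain
msg["To"] = "finance@example.com"
msg["Subject"] = "Quick favour"
msg.set_content("Are you at your desk? I need a payment processed today.")

print(msg)  # no malicious link or attachment for a filter to flag
```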

It is also compounded by the fact that, especially in a business environment, users receive a great deal of email, focus and attention fluctuate throughout the day, and attackers are constantly trying new tricks to get past the defences companies put in place.

Historically, most email protection vendors, specialists and anti-spammers assumed that simply ‘setting the right rules’, ‘refining the engine’, signature-based scanning and blacklist lookups would be enough to stop the vast majority of the threats that arrive by email.

That strategy is not enough, because clients are then left exposed to unexpected threats. These threats are not necessarily new; they are often simply variations on a theme that an organisation might not anticipate.
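
To make the point concrete, here is a deliberately simplified, hypothetical filter of the rules-plus-blacklist kind described above (the domains and keyword signatures are invented for illustration). A BEC-style message that carries no payload and comes from a previously unseen domain passes it untouched.

```python
# Hypothetical, deliberately naive filter: a domain blacklist plus
# signature-style keyword rules (all values invented for illustration).
BLACKLISTED_DOMAINS = {"spam-sender.example", "malware.example"}
SIGNATURE_RULES = ("free pills", "lottery winner", "click here to claim")

def naive_filter(sender_domain: str, body: str) -> bool:
    """Return True if the message would be blocked."""
    if sender_domain.lower() in BLACKLISTED_DOMAINS:
        return True
    return any(rule in body.lower() for rule in SIGNATURE_RULES)

# A payload-free BEC request from a fresh, plausible-looking domain is not caught.
print(naive_filter("examp1e-corp.com",
                   "Are you free? I need an urgent bank transfer arranged today."))
# -> False
```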

Constant evolution of threats
Attackers are perpetually tweaking the emails with which they hope to snare targets.

They combine spoofing and urgency tactics in different ways to achieve different ends: phishing credentials, delivering malware, or stealing data or money.

Emails are made to look like standard, legitimate payment requests, invoices, document delivery notifications, alerts urging “account verification” because messages ostensibly can’t be delivered, urgent requests apparently coming from the recipient’s colleagues and superiors, and so on.

Taken together, these represent complex variations of threats that steepen the learning curve for most filters, because they defy the usual assumption that a malicious inbound email contains an obviously malicious link or attachment, or indeed any overt threat at all beyond a request to reply.

In a ‘benign conversation-starter’ case, the sending of the email is unlikely to have been automated, because the display name matches that of a specific high-authority user within the targeted organisation.

It all points to a highly targeted attack that relies on the user being duped into thinking they are talking to the real person. The aim is to start an email conversation which, after the first reply, will typically lead to a request for data, a fund transfer, or a click on a malicious link. Once the victim replies, subsequent messages are even less likely to be filtered.
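
One common way this kind of display-name impersonation is caught is by checking whether the sender’s display name matches a known high-authority user while the address sits outside the organisation’s own domain. The sketch below is a hypothetical heuristic along those lines, not a description of any particular vendor’s product; the domain and names are invented.

```python
from email.utils import parseaddr

# Hypothetical heuristic: flag mail whose display name matches a known
# internal executive but whose address is outside the organisation's domain.
INTERNAL_DOMAIN = "example.co.uk"               # assumed organisation domain
EXECUTIVE_NAMES = {"jane smith", "john brown"}  # illustrative VIP list

def looks_like_display_name_impersonation(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    return (display_name.strip().lower() in EXECUTIVE_NAMES
            and domain != INTERNAL_DOMAIN)

print(looks_like_display_name_impersonation(
    '"Jane Smith" <ceo.office@webmail.example>'))
# -> True: the name claims authority, but the address is external
```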

Choosing the right solution
Fortunately, more and more providers are now taking the view that unique, individual threats need to be analysed by human experts in order to understand the real angles of attack that individuals and groups use to achieve their fraudulent goals.

This leads to unique rule sets that adapt and evolve along with the threats, rather than simply trying to ‘outrun’ them with supposedly perfect, broad-based, purely machine-driven filtering.

While programmatic intelligence is necessary to filter out the vast majority of email threats, email protection vendors also need granular, human-led threat analysis to keep adapting to the clever tactics of email fraudsters.