The online experience can be affected by spammers attempting to boost interest in trending topics and attack bots trying to impersonate human users. Generally, these impersonator bots are up to no good; what can be done about them?
Bots – applications programmed to run automated tasks – are playing an ever-increasing role on the internet. In our recent study at Imperva Incapsula, which examined traffic to 100,000 domains on our service, we saw that bots accounted for 51.8 per cent of all website visitors. Often concealed online, but increasingly active, they are today the true majority on the internet.
This is both a blessing and a curse for us, the internet citizens. On the one hand, it reflects a considerable amount of activity from good bots that enrich the online experience. These are the search engine crawlers, social media applications and even various marketing tools that we use to interact with websites and services.
On the other hand, the report also emphasises the high degree of bad bot activity – a result of widespread hacking campaigns, spam assaults, click fraud and the launching of distributed denial of service (DDoS) attacks. At 28.9 per cent of all website visits, bad bot activity made up the majority of non-human online traffic.
Most bad bots are impersonators – attackers operating under the guise of legitimate site visitors to bypass security measures. It’s these bots that I want to talk about here.
Impersonator bots – seeing through their mask
In 2016, impersonator bots accounted for 84 per cent of all malicious bot activity, a 23 per cent increase over the previous five years. This isn’t surprising since, by appearing and behaving like legitimate site visitors, they represent the latest step in the evolution of bad bots. In Terminator terms, they are to other bots what the T-1000 is to the T-800.
With that said, not all impersonator bots are created equal, nor are they equally impressive. Rudimentary impersonators are relatively unsophisticated bots programmed to hide behind fake user-agent headers – ID tags that identify a visitor to an application.
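To make the idea concrete, here is a minimal sketch of why fake user-agent headers defeat naive filtering. The blocklist, header strings and function names are illustrative assumptions, not any vendor's actual detection logic:

```python
# Naive server-side check: flag visitors whose User-Agent header
# admits to being an automated tool. A rudimentary impersonator
# bypasses it simply by sending a browser-like header instead.

KNOWN_BOT_AGENTS = {"python-requests", "curl", "scrapy"}

def is_obvious_bot(user_agent: str) -> bool:
    """Return True if the User-Agent string matches a known bot tool."""
    ua = user_agent.lower()
    return any(signature in ua for signature in KNOWN_BOT_AGENTS)

# An honest automated client is caught...
print(is_obvious_bot("python-requests/2.31.0"))  # True

# ...but an impersonator hiding behind a fake browser header is not.
fake_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
print(is_obvious_bot(fake_ua))  # False
```

This is why user-agent inspection alone cannot stop impersonators: the header is entirely under the client's control.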
What’s more, many of the attacks launched by impersonator bots rely on their ability to mimic regular human behaviour.
One example is ticketing bots, which are programmed to bypass fair purchasing rules to buy large numbers of tickets to an event. To succeed in their mission, they interact with a website just as a human visitor would, only at a much higher rate – making mass purchases in just a few seconds.
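The "fair purchasing rules" such bots evade often amount to a per-account rate limit. The following sketch shows one such check; the window length, purchase cap and function names are assumptions for illustration, not any ticketing site's real policy:

```python
# Illustrative fair-purchasing check: cap purchases per account
# within a sliding time window. A human buying a handful of tickets
# passes; a bot firing a burst of purchases in seconds trips the cap.
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_PURCHASES_PER_WINDOW = 4

purchases = defaultdict(list)  # account -> recent purchase timestamps

def allow_purchase(account: str, now: float) -> bool:
    """Allow a purchase unless the account exceeded the window cap."""
    recent = [t for t in purchases[account] if now - t < WINDOW_SECONDS]
    purchases[account] = recent
    if len(recent) >= MAX_PURCHASES_PER_WINDOW:
        return False  # bot-like burst: deny the purchase
    purchases[account].append(now)
    return True
```

Ticketing bots counter such limits by spreading purchases across many accounts and IP addresses, which is exactly the impersonation behaviour described above.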
The impact of these bots is so significant that a few months ago the US Congress passed its first bot-related legislation: the BOTS (Better Online Ticket Sales) Act of 2016. Still, ticketing bots are relatively harmless compared to the real villains – bots that can hack your site, steal your data or take down your server. Combating these, and other automated offenders, is a challenge for us and others in the cyber security industry.
Impersonator bots go social
However, impersonator bots are not only a problem for security experts and website owners. In recent years, they have also expanded their reach to social media – especially Twitter, which for various reasons has proved fertile ground for their activity.
Most recently this was illustrated by the discovery of the so-called Star Wars bot ring, named after the Star Wars quotes the bots consistently tweeted. The ring consisted of more than 350,000 bots that were thought to be used for malware propagation, spam attacks and the creation of fake trending topics.
The impersonators masked themselves by avoiding interaction with each other, tweeting at human-like rates and building an organic-looking network of friends and followers. As a result, they appeared human enough to bypass Twitter’s security measures for more than three years before being discovered and uprooted.
This story is a great example of how impersonator bots are programmed to mimic human behaviour. On Twitter, as on websites, a huge mass of these bots are moving behind the scenes and shaping our online experience.
Countering the impersonator bot threat
As impersonator bots become more sophisticated and their share of the online traffic pie continues to grow, countering their threat has taken on a greater significance. Here, effective protection starts by acknowledging that every website is a potential target.
The practice of bot filtering is fairly complex, even by security industry standards. Broadly speaking, cyber security experts today weed out bad bots by collecting and cross-referencing various signals. When combined and weighed against each other, these signals create an accurate visitor profile.
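A minimal sketch of this cross-referencing approach is below. The signal names, weights and threshold are invented for illustration – real bot-filtering systems use far richer signals and models than this:

```python
# Hedged sketch of signal cross-referencing: each detection signal
# contributes a weighted score, and the combined profile decides
# whether a visitor looks human. Weights are illustrative only.

SIGNAL_WEIGHTS = {
    "suspicious_user_agent": 0.3,   # header claims vs. observed behaviour
    "no_javascript_support": 0.25,  # real browsers pass JS challenges
    "abnormal_request_rate": 0.3,   # far faster than human browsing
    "known_bad_ip": 0.15,           # IP seen in earlier attacks
}

def bot_score(signals: dict) -> float:
    """Combine boolean signals into a weighted score between 0 and 1."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def classify(signals: dict, threshold: float = 0.5) -> str:
    """Label the visitor once the combined score crosses the threshold."""
    return "likely bot" if bot_score(signals) >= threshold else "likely human"

visitor = {"suspicious_user_agent": True, "abnormal_request_rate": True}
print(classify(visitor))  # "likely bot": 0.3 + 0.3 = 0.6 >= 0.5
```

The point of combining signals is that no single one is decisive – an impersonator can fake its user-agent, but faking the header, a browser's JavaScript behaviour, a human request rate and a clean IP reputation all at once is much harder.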
While it’s clear that impersonator bots are here to stay, these methods and others can be used to keep them at bay. And as the war between bot operators and security whitehats rages on, I can only be certain of one thing – bots will continue to evolve and security measures will continue to improve in a perpetual game of cat and mouse.