When Crime Pays: Why Algorithms Reward Dishonesty

Guido Donati* 27 Aug 2025


We live in a paradoxical era. While the law clearly states that crime should not pay, the economic and technological system we are immersed in seems to have overturned this principle. Digital platforms, which dominate our communication and access to information, have become the stage where dishonesty, in some cases, is not only tolerated but actually rewarded.

Take the case of an online scam, where an imposter exploits a celebrity's name to attract investors with the promise of easy gains. This content, despite being blatantly false, generates a wave of interactions: clicks, comments, and shares. The platform's algorithm, designed to maximize attention, does not distinguish between true and false. It only sees engagement and, as a result, pushes the fraudulent content to an even wider audience.
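The ranking logic described above can be sketched as a toy score that counts only interactions. The weights, field names, and numbers below are hypothetical illustrations, not any real platform's formula; the point is only that truthfulness never enters the calculation.

```python
# Toy sketch of an engagement-driven ranking score.
# All weights and fields are hypothetical, not any platform's actual formula.

def engagement_score(post):
    """Rank purely by interaction counts; note that the 'truthful' flag is never read."""
    return post["clicks"] + 3 * post["comments"] + 5 * post["shares"]

posts = [
    {"id": "scam_ad",    "truthful": False, "clicks": 900, "comments": 200, "shares": 150},
    {"id": "fact_check", "truthful": True,  "clicks": 300, "comments": 40,  "shares": 20},
]

# The false post outranks the truthful one simply because it generates
# more interactions; the score has no input through which truth could matter.
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # -> ['scam_ad', 'fact_check']
```

The design point is structural: as long as the objective function rewards interactions alone, fraudulent content that provokes more clicks and shares will be amplified by construction, not by accident.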

Thus, two actors profit from a criminal act. The scammer gets rich by deceiving victims, while the platform earns from the advertising generated by the traffic on that content. In this system, profit is tied not to quality or truthfulness but to the ability to generate buzz and interaction. This is where the profound contradiction lies.

The Algorithm That Looks Away: The Cynical Calculation of Profit
This reward mechanism is not a flaw in the system but a direct consequence of its business model. The greatest risk is that platforms, which are not legally treated as publishers, tend to turn a blind eye to illegal content, especially when it generates significant revenue. The calculation is simple and cynical: the potential profit from the virality of harmful content outweighs the costs and penalties that follow from its spread.

When an authority or a user reports fraudulent content, the platform knows that by acting quickly, it will reduce its own traffic and, consequently, its revenue. The ethical dilemma is clear: preserve the integrity of the system and user safety, or continue to maximize profits? Recent history shows that, too often, the choice falls on the second option. It's like a bar owner who ignores the illegal activities inside because they attract customers.

The Case of Misinformation: Not Just Scams
The problem is not limited to financial fraud. The perverse mechanism that rewards viral content at the expense of truthfulness also has devastating consequences in other areas.

Public Health: During the COVID-19 pandemic, misinformation prospered online. Platforms hosting videos and posts about the supposed dangers of vaccines or about miracle cures saw an enormous increase in traffic. Although this content was scientifically unfounded and dangerous, algorithms promoted it for its ability to generate intense discussion and emotional reactions, often stronger than those provoked by sober but accurate scientific information. Those who spread this "news" gained fame, views, and sometimes direct earnings.

Politics: Elections are often fertile ground for misinformation. Content that slanders a candidate, even if false, can go viral and influence public opinion, generating huge traffic on the platform. The algorithm doesn't care about the democratic harm, but about the number of shares the content generates, promoting it further.

Platform Responsibility: Between Immunity and Obligation
The platforms' defense has always been based on the concept of being a "service provider." They claim they are not responsible for the content published by their users. But when a company actively chooses to reward the visibility of harmful content, isn't it acting as a publisher? The question is no longer whether they should be held accountable for what others publish, but for what they themselves choose to promote.

The most pragmatic solution might lie in shared responsibility.

A Way Out: Hitting the Profit
To break this vicious cycle, society must send a clear message. A concrete action could start with the most obvious and less ambiguous cases, such as scams.

A law could be proposed that fines platforms at least double the profit obtained from fraudulent content that has caused actual economic harm. This sanctioning approach could also be extended to all cybercrimes that involve any form of human exploitation.

This approach has a double advantage:

Economic Deterrent: The cost of dishonesty would outweigh the profits. Platforms would have a very strong financial incentive to invest in prevention rather than merely in after-the-fact removal.

Justice and Recourse: Having to pay for the damages, the platform would in turn have the right to seek legal recourse against those who created the fraudulent content, thus becoming an ally in the fight against online crime.
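The economic deterrent can be stated as a simple expected-value calculation. The figures below are entirely hypothetical, chosen only to show how a fine of at least double the profit flips the platform's expected payoff from positive to negative.

```python
# Hedged illustration of the proposed sanction: a fine of at least double
# the profit makes hosting fraudulent content a net expected loss.
# All numbers are hypothetical.

def expected_value(profit, fine_multiplier, p_sanction):
    """Expected payoff of profiting from fraudulent content."""
    return profit - p_sanction * fine_multiplier * profit

profit = 100_000     # hypothetical ad revenue from the fraudulent content
p_sanction = 0.6     # assumed probability the sanction is actually enforced

ev_today  = expected_value(profit, fine_multiplier=0.0, p_sanction=p_sanction)
ev_reform = expected_value(profit, fine_multiplier=2.0, p_sanction=p_sanction)

print(ev_today)   # -> 100000.0  (no fine: dishonest traffic pays in full)
print(ev_reform)  # -> -20000.0  (double-the-profit fine: a net loss)
```

Under these assumptions, deterrence holds whenever the enforcement probability exceeds one over the fine multiplier (here 0.5), which is one reason the proposal insists on "at least double" rather than merely recovering the profit.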

Only by making traffic generated by dishonesty economically disadvantageous can we hope to restore the fundamental principle of justice: crime does not pay.



* Board member, SRSN (Roman Society of Natural Science)
