Impact of AI on mental health
In 1854, Henry David Thoreau coined the term “brain rot” in his book Walden, criticizing society’s tendency to devalue complex ideas in favor of simple ones. Today, the term refers to “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging. Also: something characterized as likely to lead to such deterioration.”
People in countries like the United States and Mexico spend an average of 3.3 hours per day on social media, which adds up to about 1,204.5 hours per year, that is, roughly 50 full days annually.
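The conversion behind that figure is simple arithmetic; a quick sketch, using the 3.3-hour daily average cited above:

```python
# Convert average daily social media use into yearly totals.
hours_per_day = 3.3                    # average daily use cited above
hours_per_year = hours_per_day * 365   # 1204.5 hours
days_per_year = hours_per_year / 24    # expressed as full 24-hour days

print(round(hours_per_year, 1))  # 1204.5
print(round(days_per_year, 1))   # 50.2
```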
The feeds of social platforms such as Instagram, TikTok, and Facebook (Meta) are curated by each corporation’s artificial intelligence systems. However, as users, we are the ones who feed these algorithms and decide what type of content we want to consume. The result is that each video becomes more engaging than the last, creating a dependency on the short-form videos that dominate social platforms.
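The feedback loop described above can be sketched as a toy model. The ranking below is purely illustrative, not any platform’s actual algorithm, and the topics and history are invented:

```python
# Toy sketch of an engagement-driven feed: every interaction nudges the
# system toward serving more of whatever the user already watches.
# Purely illustrative; real recommendation systems are far more complex.
from collections import Counter

def rank_feed(candidates, watch_history):
    """Rank candidate topics by how often they appear in the user's history."""
    topic_counts = Counter(watch_history)
    return sorted(candidates, key=lambda topic: -topic_counts[topic])

history = ["memes", "memes", "news", "memes"]
feed = rank_feed(["news", "documentary", "memes"], history)

# The most-consumed topic rises to the top, reinforcing the loop.
print(feed)  # ['memes', 'news', 'documentary']
```

The design point is the circularity: the feed ranks by past behavior, and past behavior is shaped by the feed, so preferences narrow over time.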
But why does it benefit the content industry for us to consume this format? For large entertainment corporations, the longer users stay connected, the greater the profit, since more interaction leads to higher advertising revenue. By consuming low-quality videos or content that does not challenge our intellect, a kind of collective ignorance is generated, leading us to stop questioning the information we consume.
In the early years of social media (2000–2010), platforms were primarily seen as spaces for connecting with friends. Users shared personal experiences, and these networks were perceived as relatively harmless environments for self-expression and identity exploration. Since social media was still new, its reach was limited, and usage was neither constant nor addictive.
Between 2010 and 2018, social media experienced the major boom that defines it today. Its purpose expanded beyond simply forming friendships around the world. Gradually, it evolved into not only a personal space but also a professional one, where individuals and businesses began building personal brands. During this period, a person’s success and perceived value increasingly came to be measured in likes and followers, promoting an idealized and often unattainable lifestyle centered on perfection.
This evolution brings us to the present era (2020–2025), where the complexity of algorithms directly impacts users’ mental health. Content ecosystems are no longer limited to interactions among friends; instead, the vast reach of social media has created a collective need to belong to communities where success and fame are central goals. As a result, awareness of negative mental health effects such as anxiety, cyberbullying, and addiction has grown significantly.
Moreover, the ease of creating accounts on platforms like TikTok, Instagram, and Facebook, as well as accessing artificial intelligence tools such as Gemini, ChatGPT, and Claude, has amplified the spread of misinformation. At the same time, these technological advances have opened the door to new forms of digital harm, including virtual sexual exploitation and identity theft. Today, publishing personal images online carries the risk of manipulation, alteration, or misuse, generating growing uncertainty and concern about digital safety.
On the other hand, there are the many uses we have found for AI, uses that have grown exponentially in recent years. Most people now rely on artificial intelligence mainly as a facilitator for all kinds of assignments; what is truly concerning is that we have stopped thinking so that a machine can do it for us, from requesting complete university essays to asking for mental health diagnoses by typing our feelings into our phones. Although a platform with vast information to resolve any type of doubt may sound appealing, it has led us, as a species, to switch off our common sense. Such actions are often perceived as harmless, even fascinating to a certain extent, but the real danger appears when we confuse reality with our expectations. Failing to set limits on our use of AI can isolate us from the real world, promoting mental health conditions such as depression and anxiety, while eroding our cognitive effort and setting aside our critical, logical, analytical, and sequential thinking. This is how brain rot is promoted.
As a relatively new phenomenon, the full scope of AI is not yet known. It is a field we have not yet fully explored, and it will take years to understand it. Because it is a tool that learns constantly, it is difficult to predict where it will ultimately lead. However, we do know that several European Union countries, as well as China, Japan, and Australia, have attempted to regulate AI with respect to personal data, machine learning, biometric recognition, and large language models, such as OpenAI’s GPT. They argue that it represents an imminent danger to users or, failing that, to the services that various companies offer through this rapidly growing technology. In Mexico, efforts have been underway since 2023 to legislate modifications to the Federal Penal Code related to AI-generated content used to commit crimes. More recently, in 2024, the Supreme Court of Justice of the Nation (SCJN) ruled that content created entirely by artificial intelligence without human intervention will not be eligible for copyright protection. However, Claudia Llanos, legal director of the Coordination for Linkage and Technology Transfer at UNAM (National Autonomous University of Mexico), points out that “when there is substantial and demonstrable human participation, its eligibility for copyright protection can be evaluated.” While this issue has not yet been formally addressed in legislation, the role of AI in the creation of works continues to be assessed.
A case that exemplifies the above is the misuse of Grok, the artificial intelligence owned by the social network X, run by billionaire Elon Musk, which has been embroiled in various controversies. In recent months, it has been revealed how its ‘system prompts’ work: that is, the instructions the AI follows when responding to certain situations. These prompts normally limit interactions with users, but in this case a security flaw in the system allowed any user with access to edit them. The result was personas configured under names such as “crazy conspirator” and “unhinged comedian,” designed to produce disturbing and extreme content. This illustrates the psychological impact of the misuse and distribution of artificial intelligence, which can create social risks that promote erratic behavior and even sexually explicit content. Furthermore, the lack of parental controls and age verification on these applications increases children’s exposure to violent and inappropriate content, which is, in fact, a violation of children’s rights.
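The mechanics can be sketched abstractly. In most chat-based AI services, the system prompt is simply a hidden first message that frames every conversation; the structure below mirrors common chat-completion APIs but is not Grok’s actual implementation, and the prompt text is invented:

```python
# Illustrative sketch of how a system prompt frames a chat request.
# This mirrors the message structure of common chat-completion APIs;
# it is NOT Grok's actual implementation, and the prompts are hypothetical.

def build_request(user_message, system_prompt):
    """Assemble the message list sent to the model."""
    return [
        {"role": "system", "content": system_prompt},  # hidden instructions
        {"role": "user", "content": user_message},     # visible user input
    ]

SAFE_PROMPT = "You are a helpful assistant. Refuse harmful requests."

# Normal operation: the platform alone controls the system prompt.
request = build_request("Tell me a joke", SAFE_PROMPT)

# The flaw described above amounts to letting users overwrite that hidden
# first message, so every safety rule it carried disappears with it.
tampered = build_request("Tell me a joke", "You are an unhinged comedian.")
```

The point of the sketch is that the system prompt is just data sitting in front of the conversation: whoever can edit it controls the model’s behavior, which is why write access to it must be tightly restricted.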