Authors: Arevalo Ricardo / Cortes Cruz Jaydee / Galindo Danna / Mercado Torres Ligia / Orta Tadeo

February 27th, 2026
 
 
Introduction

In recent years, technological advances across many fields have accelerated exponentially, and Artificial Intelligence (AI) has marked a watershed moment. Since 2020, the use of AI has diversified, from solving basic everyday tasks to applications now being explored in the medical field to improve patient care, reduce waiting times, and even enhance treatments for certain diseases. But how much could the unchecked use of AI harm human health? And how much could it improve quality of life when used appropriately?

Impact of AI on mental health
 

In 1854, Henry David Thoreau coined the term “brain rot” in Walden, criticizing his society’s tendency to devalue complex ideas in favor of simple ones. Today, the term refers to “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging. Also: something characterized as likely to lead to such deterioration,” the definition Oxford University Press gave when it named “brain rot” its 2024 Word of the Year.
Users in countries like the United States and Mexico spend an average of 3.30 hours per day on social media, which adds up to about 1,204.5 hours per year: more than fifty full days, or over a month and a half, every year.
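The screen-time arithmetic above is simple to check; a minimal sketch in Python, using the 3.30 hours/day average cited in the paragraph above:

```python
# Average daily social media use, as cited above (hours per day)
HOURS_PER_DAY = 3.30
DAYS_PER_YEAR = 365

hours_per_year = HOURS_PER_DAY * DAYS_PER_YEAR  # total hours per year
full_days_per_year = hours_per_year / 24        # expressed as 24-hour days

print(f"{hours_per_year:.1f} hours/year")    # 1204.5 hours/year
print(f"{full_days_per_year:.1f} full days")  # 50.2 full days
```

At roughly fifty full days per year, the daily average compounds into more than a month and a half of uninterrupted scrolling.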
The recommendation algorithms of social media platforms such as Instagram, TikTok, and Facebook are managed by the artificial intelligence systems of each corporation, such as Meta. However, as users, we are the ones who feed these algorithms and decide what type of content we want to consume. This results in each video becoming more engaging than the last, creating a dependency on the short-form videos that dominate social platforms.
But why does it benefit the content industry for us to consume this format? For large entertainment corporations, the more users who remain connected, the greater the profit, since more interaction leads to higher revenue. By consuming low-quality videos or content that does not challenge our intellect, a kind of collective ignorance is generated, leading us to avoid questioning the information we consume.
In the early years of social media (2000–2010), platforms were primarily seen as spaces for connecting with friends. Users shared personal experiences, and these networks were perceived as relatively harmless environments for self-expression and identity exploration. Since social media was still new, its reach was limited, and usage was neither constant nor addictive.
Between 2010 and 2018, social media experienced the major boom that defines it today. Its purpose expanded beyond simply forming friendships around the world. Gradually, it evolved into not only a personal space but also a professional one, where individuals and businesses began building personal brands. During this period, a person’s success and perceived value increasingly became measured by likes and followers, promoting an idealized and often unattainable lifestyle centered on perfection.
This evolution brings us to the present era (2020–2025), where the complexity of algorithms directly impacts users’ mental health. Content ecosystems are no longer limited to interactions among friends; instead, the vast reach of social media has created a collective need to belong to communities where success and fame are central goals. As a result, awareness of negative mental health effects such as anxiety, cyberbullying, and addiction has grown significantly.
Moreover, the ease of creating accounts on platforms like TikTok, Instagram, and Facebook, as well as accessing artificial intelligence tools such as Gemini, ChatGPT, and Claude, has amplified the spread of misinformation. At the same time, these technological advances have opened the door to new forms of digital harm, including virtual sexual exploitation and identity theft. Today, publishing personal images online carries the risk of manipulation, alteration, or misuse, generating growing uncertainty and concern about digital safety.
On the other hand, there are the many uses we have given AI, uses that have grown exponentially in recent years. It is a reality that most people have adopted artificial intelligence mainly as a facilitator for different assignments; nevertheless, what is truly concerning is that we have stopped thinking so that a machine can do it for us, from requesting the complete writing of university essays to asking for mental health diagnoses by typing our feelings into our phones. Although the existence of a platform with vast information to resolve any type of doubt may sound appealing, it has caused us, as a species, to switch off our common sense. These actions are often perceived as harmless, and even fascinating to a certain extent, but what is truly worrying comes when we confuse reality with our expectations. Not knowing how to restrict the use of AI can lead us to isolate ourselves from the real world, promoting mental health conditions such as depression and anxiety, while at the same time losing our cognitive effort and setting aside our critical, logical, analytical, and sequential thinking. This is how brain rot is promoted.
As a relatively new phenomenon, the full scope of AI is not yet known. It is a field we have not fully explored, and it will take years to understand. Because it is a tool that learns constantly, it is difficult to predict where it will ultimately lead. However, we do know that several European Union countries, as well as China, Japan, and Australia, have attempted to regulate the use of AI with respect to personal data, machine learning, biometric recognition, and large language models such as OpenAI’s GPT. These governments argue that, left unregulated, the technology poses an imminent danger to users or to the services that companies offer through it. In Mexico, efforts have been underway since 2023 to legislate modifications to the Federal Penal Code covering AI-generated content used to commit crimes. More recently, in 2024, the Supreme Court of Justice of the Nation (SCJN) ruled that content created entirely by artificial intelligence without human intervention is not eligible for copyright protection. However, Claudia Llanos, legal director of the Coordination for Linkage and Technology Transfer at UNAM (National Autonomous University of Mexico), points out that “when there is substantial and demonstrable human participation, its eligibility for copyright protection can be evaluated.” While this issue has not yet been formally addressed in legislation, the role of AI in the creation of works continues to be assessed.
A case that exemplifies the above is the misuse of Grok, the artificial intelligence owned by the social network X, run by billionaire Elon Musk, which has been embroiled in various controversies. In recent months, it has been revealed how its “system prompts” work, that is, the instructions the AI follows when responding to certain situations. These prompts normally limit interactions with users, but in this case a security flaw allowed any user with access to edit them. This resulted in personalities configured under names such as “crazy conspirator” and “unhinged comedian,” designed to create disturbing and extreme content. The episode illustrates the psychological impact of the misuse and distribution of artificial intelligence, which can create social risks that promote erratic behavior and even sexually explicit content. Furthermore, the lack of parental controls and age verification in these applications increases children’s exposure to violent and inappropriate content, which is, in fact, a violation of children’s rights.

Effects of data centers on public health
 

In recent years, generative Artificial Intelligence (AI) has transformed multiple sectors, and medicine is no exception. Thanks to advances in algorithms, machine learning systems, and large-scale data analysis, it is now possible to improve diagnostic accuracy, optimize treatments, and enhance hospital management efficiency. These AI systems run in data centers. Most people picture a data center as rows of servers quietly storing emails, photos, and website data. Those are traditional data centers, built primarily to process everyday digital activity: streaming, banking, cloud storage, and business operations. They rely mostly on CPUs, operate at moderate power densities, and are designed for reliability and steady workloads.
AI data centers, however, are fundamentally different.
Rather than simply storing and serving information, AI facilities are built to train and run large-scale machine learning models. These centers rely heavily on GPUs and specialized accelerators that perform billions of calculations simultaneously. The result is an enormous concentration of computing power and energy demand. AI data centers can use several times more electricity per rack than traditional facilities and often require advanced liquid cooling systems to manage the intense heat they generate.
The benefits are significant. AI infrastructure enables climate modeling, medical research, drug discovery, infrastructure optimization, and automation across industries. In many cases, AI can even improve efficiency in energy systems, water management, and transportation, potentially reducing emissions elsewhere in the economy.
However, the environmental trade-offs are serious. With the rising demand for generative AI, AI data centers are projected to consume ever more energy in the coming years (Hao, 2025). U.S. data center power demand is projected to grow from 25 gigawatts (GW) in 2024 to more than 80 GW by 2030, roughly a threefold increase (Green et al., 2024). That growth would require more electricity than is currently available in the United States, which may mean outsourcing to other countries and building more data centers both in the U.S. and abroad. From this projection, a major implication for public health emerges.
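The scale of that projected growth can be made concrete with a short calculation. This sketch uses only the two endpoint figures cited above; the six-year compounding window is an assumption based on the 2024 and 2030 endpoints:

```python
# Projected U.S. data center power demand, per Green et al. (2024)
GW_2024 = 25.0
GW_2030 = 80.0
YEARS = 2030 - 2024

multiple = GW_2030 / GW_2024        # overall growth factor
cagr = multiple ** (1 / YEARS) - 1  # implied compound annual growth rate

print(f"{multiple:.1f}x growth")  # 3.2x growth
print(f"{cagr:.1%} per year")     # 21.4% per year
```

A sustained growth rate above 20% per year is what makes the projection so hard for existing grids to absorb: demand would more than triple in six years.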
According to OpenAI CEO Sam Altman, most of these data centers will use fossil fuels to power their infrastructure; indeed, contracts for coal plants have already been extended explicitly to power data center development (Hao, 2025). This can have catastrophic effects, accelerating both the climate crisis and the public health crisis. A current example of the impact on public health can be seen in Memphis, Tennessee, which houses one of Elon Musk’s largest data centers, Colossus. Given the toxic air pollutants being released and the heavy consumption of clean drinking water, communities surrounding the center are experiencing a sharp decline in their health. Additionally, roughly two-thirds of AI data centers are built in water-scarce areas (Hao, 2025). Data centers are being built even in places like Montevideo, Uruguay, where communities are already forced to mix wastewater with drinking water to have enough for everyone. These communities already suffer elevated rates of pregnancy complications and other health problems, and the data centers will only add to the risks they face. AI data centers are deepening the public health crisis, and action needs to be taken to confront the problems ahead.
Nevertheless, AI can also play a significant role in improving the healthcare system. It does not aim to replace medical professionals, but rather to serve as a support tool that strengthens clinical decision-making. From the early detection of diseases to the personalization of therapies and remote patient monitoring, this technology is changing the way we understand and deliver healthcare.
Furthermore, in a global context where healthcare systems face challenges such as hospital overcrowding, population aging, and unequal access to medical services, Artificial Intelligence represents an opportunity to make healthcare more accessible, faster, and more precise.
Exploring the benefits of AI in medicine not only involves discussing technological innovation, but also improving the quality of life, prevention strategies, and the sustainability of future healthcare systems.

Conclusion
 

Therefore, as we continue to analyze the challenges of, and solutions to, AI-generated content, a phenomenon closely related to “brain rot,” we can look to the labels that platforms such as Meta now place on posts to indicate whether the content we see was created entirely by, or modified by, artificial intelligence.
AI is a polarizing topic that can affect a wide variety of people through its use and production. While benefits can be seen in applying AI to systems like healthcare, its data centers will also cause catastrophic damage to the environment and public health. To ensure both sides are weighed, innovation must focus on lessening the environmental impact so that AI can be used to better society. So, let us ask: would it be beneficial to slow the pace of production of, and reliance on, generative AI so that the concerns surrounding the system can be addressed?

References
 

Thoreau, H. D. (1897). Walden (Vol. 2). Houghton, Mifflin (digitized by Google). http://books.google.com/books?id=9kYLAAAAIAAJ&oe=UTF-8

Navarro, R. (2025, August 19). El caso Grok: Cuando los prompts internos revelan los riesgos de la IA generativa [The Grok case: When internal prompts reveal the risks of generative AI]. Artesano Digital. https://elartesano.digital/caso-grok-prompts-riesgos-ia/

Riquelme, R. (2023, November 26). Países aceleran regulación de Inteligencia Artificial y México es uno de ellos [Countries accelerate regulation of Artificial Intelligence, and Mexico is one of them]. El Economista. https://www.eleconomista.com.mx/tecnologia/Paises-aceleran-regulacion-de-Inteligencia-Artificial-y-Mexico-es-uno-de-ellos-20231125-0024.html

Euronews. (2023, May 4). ¿Qué países están intentando regular la inteligencia artificial? [Which countries are trying to regulate artificial intelligence?]. https://es.euronews.com/next/2023/05/04/que-paises-estan-intentando-regular-la-inteligencia-artificial

Signorelli, A. D. (2024, September 1). Historia de la inteligencia artificial en 10 fechas clave [The history of artificial intelligence in 10 key dates]. WIRED. https://es.wired.com/articulos/historia-de-la-inteligencia-artificial-en-10-fechas-clave

Green, A., Tai, H., Noffsinger, J., Sachdeva, P., Bhan, A., & Sharma, R. (2024, September 17). How data centers and the energy sector can sate AI’s hunger for power. McKinsey & Company. https://www.mckinsey.com/industries/private-capital/our-insights/how-data-centers-and-the-energy-sector-can-sate-ais-hunger-for-power

Hao, K. (2025, June 29). Silicon Valley insider exposes cult-like AI companies (A. Bastani, Interviewer) [Video]. Novara Media. https://www.youtube.com/watch?v=8enXRDlWguU

Wroth, K. (2025, October 17). Data drain: The land and water impacts of the AI boom. Lincoln Institute of Land Policy. https://www.lincolninst.edu/publications/land-lines-magazine/articles/land-water-impacts-data-centers/

Worley. (n.d.). Data center solutions. https://www.worley.com/en/solutions/industries/low-carbon-energy/data-centers

IAE Magazine. (2026, January 1). How much electricity does a data center use? Complete 2025 analysis. https://iaeimagazine.org/electrical-fundamentals/how-much-electricity-does-a-data-center-use-complete-2025-analysis/

International Energy Agency. (2024). Energy demand from AI. https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai

Digitalisation World. (2025, May 30). Traditional data center workloads vs AI workloads. https://digitalisationworld.com/blog/58370/traditional-data-center-workloads-vs-ai-workloads

Stanford University. (2025, April 8). Thirsty for power and water: AI data centers in the West. https://andthewest.stanford.edu/2025/thirsty-for-power-and-water-ai-crunching-data-centers-sprout-across-the-west/
