GENERATIVE ARTIFICIAL INTELLIGENCE: BETWEEN ENHANCEMENT AND COGNITIVE OFFLOADING

INTELIGENCIA ARTIFICIAL GENERATIVA: ENTRE LA POTENCIACIÓN Y LA EXTERNALIZACIÓN COGNITIVA

Alejandro Espeso-García


Cultura, Ciencia y Deporte, vol. 20, no. 66, 2025. https://doi.org/10.12800/ccd.v20i66.2698

Universidad Católica San Antonio de Murcia


Universidad Católica de Murcia, Spain


Abstract: The history of technology can be understood largely as the history of externalizing human capacities. In this sense, the emergence of Generative Artificial Intelligence (GenAI) marks an inflection point by delegating, for the first time, executive and reasoning functions. A fundamental dilemma thus arises: is GenAI an amplifier that democratizes performance or a mechanism of dependence that leads to cognitive atrophy? Drawing on recent literature, this article examines its educational, cultural, and social implications, highlighting the need to understand it both as a technical tool and as a phenomenon that transforms the relationship between mind and knowledge. In a future where humans and artificial intelligence become intertwined, the ultimate question is whether this symbiosis will open a new chapter in intellectual evolution or, instead, trigger an involution that undermines the mind’s sovereignty over its own processes.

Keywords: Cognitive atrophy, digital literacy, educational technologies, technological dependency, neuroplasticity.

Resumen: La historia de la tecnología puede entenderse como la historia de la externalización de las capacidades humanas. En este sentido, la irrupción de la Inteligencia Artificial Generativa (IAG) marca un punto de inflexión al delegar, por primera vez, funciones ejecutivas y de razonamiento. Surge así un dilema fundamental: ¿es la IAG un amplificador que democratiza el desempeño o un mecanismo de dependencia que conduce a la atrofia cognitiva? A partir de la literatura reciente, se examinan sus implicaciones educativas, culturales y sociales, destacando la necesidad de comprenderla tanto como herramienta técnica como fenómeno que transforma la relación entre la mente y el conocimiento. En un futuro donde lo humano y lo artificial se entrelazan, la cuestión final es si esta simbiosis abrirá un nuevo capítulo en la evolución intelectual o, por el contrario, derivará en una involución que amenace la soberanía de la mente sobre sus propios procesos.

Palabras clave: Atrofia cognitiva, alfabetización digital, tecnologías educativas, dependencia tecnológica, neuroplasticidad.

From Tool to Agent: The Externalization of Cognition

Human history can be understood largely as the externalization of human capacities through tools. Each technological innovation has served as a means of transferring physical or intellectual functions outside the individual (Donald, 2002; Malafouris, 2016). Thus, the advent of writing made it unnecessary to memorize large volumes of information, a practice Socrates criticized on the grounds that it would weaken memory (Gill, 2020; Roochnik, 2024). Later, the calculator transformed arithmetic competence by automating numerical processing, relegating mental calculation to a secondary role (Boyle & Farreras, 2015). Over time, technological progress has centered on delegating specific mental functions in order to free resources for more complex tasks (Clark, 2008; Lee et al., 2025; Risko & Gilbert, 2016). Until now, the consensus was clear: tools took on the heavy or repetitive work, and in exchange, the human mind remained available for more creative or demanding activities.

However, the emergence of Generative Artificial Intelligence (GenAI) marks an inflection point and constitutes a radical shift that disrupts this balance. Unlike tools limited to storage or algorithmic execution, Large Language Models (LLMs) such as ChatGPT (OpenAI), Claude (Anthropic), Grok (X Corp.), and Gemini (Google DeepMind) do not merely perform tasks; they also emulate reasoning processes. They are capable of synthesizing, arguing, creating, and solving complex problems with a fluency that often exceeds typical human performance (Naveed et al., 2023; Saleh et al., 2025; Shahzad et al., 2025).

This phenomenon poses an evolutionary challenge to educational systems regarding the development of students’ capacities (Holmes et al., 2022; Jose et al., 2025). On the one hand, a scenario of hybrid intelligence is emerging, in which GenAI acts as an amplifier for creativity and productivity, facilitating access to the zone of proximal development that would otherwise remain out of reach (Doshi & Hauser, 2024; Giannakos et al., 2024). On the other hand, growing scientific evidence warns of the risk of widespread deskilling. The continuous externalization of higher-order cognitive processes, or cognitive offloading, could lead to an accumulated cognitive debt, resulting in the loss of intellectual agency, the weakening of long-term memory, and a structural dependence on algorithms even for basic thinking tasks (Gerlich, 2025; Risko & Gilbert, 2016; Tian & Zhang, 2025).

Hence the need for a critical analysis of the current scientific literature considering the speed with which GenAI tools have been incorporated into academic and professional settings, often without a clear understanding of their implications for learning (Holmes et al., 2022; Kosmyna et al., 2025). Within this framework, the present article aims to examine the evolution of cognitive externalization through GenAI and its impact on learning processes. The question is not whether GenAI should be integrated into education, but how its interaction with humans reconfigures the neural circuits that sustain complex thought (Kosmyna et al., 2025; Lee et al., 2025).

In this context, a paradox emerges: is GenAI the ultimate tool, capable of democratizing human genius and tearing down the barriers to knowledge, or does it represent, by contrast, an externalization process so extensive that it threatens to atrophy the very faculties that define intelligence and, ultimately, the human condition?

Benefits of GenAI: The Democratization of Competence

Just as the Industrial Revolution mechanized physical labor, the era of GenAI promises to industrialize cognitive effort (Giannakos et al., 2024; Lee et al., 2025). Recent research indicates that this technology can accelerate workflows and redistribute tasks, increasing execution speed and improving the quality of outcomes (Dell’Acqua et al., 2023; Saleh et al., 2025).

This shift represents a new paradigm. If the Internet and smartphones democratized access to information, LLMs are democratizing intellectual competence (Doshi & Hauser, 2024; Peláez-Sánchez et al., 2024). Their capacity to synthesize and organize knowledge offers the potential to reduce inequalities in both academic and professional performance (Lee et al., 2025; Naveed et al., 2023; Shahzad et al., 2025).

In this context, the primary impact of the technology is not the amplification of excellence, but rather the narrowing of the performance gap between individuals. Evidence suggests that GenAI acts as an equalizer in educational and professional environments (Giannakos et al., 2024; Lee et al., 2025; MacCallum et al., 2024; Pallant et al., 2025).

Studies on creativity and assisted writing demonstrate that GenAI particularly benefits those starting from lower baseline levels of proficiency (Gavira-Durón et al., 2025; Kosmyna et al., 2025). By lowering the barrier to producing complex texts or solving logical problems, GenAI can operate as a ubiquitous tutor (Shahzad et al., 2025). Students previously marginalized by difficulties in spelling, grammar, or structural organization can now demonstrate their conceptual competence (Doshi & Hauser, 2024; Kosmyna et al., 2025; Shahzad et al., 2025).

In this sense, the tool does not democratize knowledge per se, but rather the capacity for execution. It allows the focus to shift toward the quality of ideas rather than the mechanics of their presentation (Han, 2024). This is not a matter of externalizing thought, but of filtering out the surrounding noise, allowing thought to flow with greater clarity and depth. The risk, however, is to confuse access to information with genuine acquisition of knowledge (Holmes et al., 2022).

Biological Foundations of Learning

There is a common misconception that the brain operates like a computer, as if downloading a file were equivalent to storing it in memory (Brette, 2022). However, neuroscience demonstrates that the human brain far more closely resembles a living ecosystem than a mass storage device (Fields, 2005; Marzola et al., 2023). Memory is not a passive repository; learning depends on physical changes in neural structure through processes such as long-term potentiation (LTP), axonal myelination, and synaptogenesis (Mount & Monje, 2017; Nicoll, 2017).

A fundamental principle was formulated by Donald Hebb (1949): “Cells that fire together, wire together.” When two neurons are activated repeatedly and simultaneously, their connection strengthens (Ostroff, 2023). This process accelerates the transmission of nerve impulses and ensures that the most frequently used circuits become preferred pathways for thought and memory (Flower et al., 2025; Morris, 1999). However, these structural changes require active cognitive effort. As Daniel Willingham (2009, 2021) notes, “memory is the residue of thought”. If technology removes the need to think, it also removes the opportunity to learn.
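Hebb’s principle lends itself to a simple numerical illustration. The sketch below is a loose computational analogy rather than a biological model (all names and values are illustrative): a connection that repeatedly carries coincident pre- and postsynaptic activity strengthens, while a silent one does not.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """One Hebbian step: each weight grows in proportion to the
    product of pre- and postsynaptic activity (dw = lr * post * pre)."""
    return w + lr * np.outer(post, pre)

# Two input neurons project to one output neuron; only the first
# input repeatedly fires together with the output.
w = np.zeros((1, 2))
for _ in range(5):
    w = hebbian_update(w, pre=np.array([1.0, 0.0]), post=np.array([1.0]))

print(w)  # the co-activated connection has strengthened; the silent one has not
```

After repeated co-activation, the first weight has grown while the second remains at zero — the “wiring together” of cells that fire together.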

Brain Development

The brain matures through specific phases of heightened neuroplasticity, during which experience plays a fundamental role in shaping its architecture (Fleming & McDermott, 2024). In childhood and adolescence, an abundance of synaptic connections combines with a high sensitivity to environmental stimuli, consolidating functions such as memory, attention, and abstract reasoning (Wang et al., 2025).

During these critical stages, exposure to GenAI can produce contrasting effects. On the one hand, it offers opportunities to personalize learning and introduce desirable difficulties, tasks that require enough effort to trigger learning and can enhance neuroplasticity (Harris et al., 2024). On the other hand, cognitive offloading, the act of delegating tasks like planning or information synthesis to an external tool, can reduce the practice of executive functions, weakening their consolidation and producing an accumulated cognitive debt (Sadegh-Zadeh et al., 2024; Topolnyk et al., 2025).

In adolescence, the brain refines its connections through synaptic pruning, a process of eliminating less frequently used connections to optimize the efficiency of neural networks (Selemon, 2013; Wang et al., 2025). Consequently, if executive functions are consistently externalized to GenAI, there is a risk that the brain may categorize these neural pathways as expendable, potentially leading to their attrition (Delevich et al., 2018).
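As a rough computational analogy (illustrative only, not a claim from the cited literature), pruning can be pictured as discarding the least-used connections in a network, which is exactly what the “use it or lose it” risk describes:

```python
import numpy as np

def prune(weights, usage, keep_fraction=0.5):
    """Synaptic-pruning analogy: keep only the most frequently
    used connections; the rest are eliminated (set to zero)."""
    cutoff = np.quantile(usage, 1.0 - keep_fraction)
    return np.where(usage >= cutoff, weights, 0.0)

usage = np.array([9, 1, 7, 0, 5, 2])  # how often each connection fired
w = np.ones(6)                        # all connections start equally strong
pruned = prune(w, usage)
print(pruned)  # rarely used connections have been removed
```

In this toy model, a pathway that is never exercised — because the task was offloaded — falls below the cutoff and is discarded, regardless of how valuable it might later be.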

Effort and Reward

Learning is supported by a cycle of effort and reward, in which solving a difficult problem generates a prediction error that triggers the release of dopamine, reinforcing perseverance and the habit of confronting new challenges (Michely et al., 2020; Wang et al., 2020). Consequently, dopamine acts as a reinforcer, modulating the relationship between the cost of effort and the value of the reward (Duncan & Shohamy, 2024).
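The effort–reward cycle described above is commonly formalized in reinforcement learning as a prediction-error update (a Rescorla–Wagner-style sketch; the code is illustrative, not a model proposed by the cited authors):

```python
def update_value(v, reward, lr=0.3):
    """Prediction-error learning: the discrepancy between the reward
    received and the reward expected (delta) drives the update;
    dopamine is thought to signal this error."""
    delta = reward - v  # prediction error: large when the outcome is surprising
    return v + lr * delta

v = 0.0
history = []
for _ in range(10):  # the same reward, repeatedly experienced
    v = update_value(v, reward=1.0)
    history.append(v)

# As the reward becomes fully predicted, delta shrinks toward zero:
# a fully expected outcome no longer generates a learning signal.
```

The surprise term is what makes effortful problem-solving rewarding; an outcome delivered without effort is quickly fully predicted, and the learning signal collapses.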

Based on these mechanisms, GenAI has the potential to disrupt this balance by delivering outcomes, such as a finished text or a solved problem, without the prior cost of effort. This creates a state of illusory efficiency, where the brain receives a strong reward signal for nearly zero energy expenditure (Chong et al., 2017). In the short term, this produces immediate satisfaction that may condition the brain to prefer shortcuts. Over the long term, it undermines intrinsic motivation, causing cognitive effort to be perceived as something to be avoided (Schultz, 2016; Weinstein, 2023).

The Paradox of Assisted Learning

Neuroscience demonstrates that plasticity and maturation depend on active effort and cognitive friction (Bjork & Bjork, 2020). While GenAI can serve as a support that frees up resources for abstract thought, it can also erode essential circuits if it systematically replaces executive functions (Kosmyna et al., 2025; Kirschner & Hendrick, 2024).

In this regard, GenAI-assisted learning is not neutral; its impact depends on how it is integrated into educational practice (Holmes et al., 2022). If GenAI is conceived as scaffolding, it can facilitate the consolidation of mental schemas and enhance understanding (Han, 2024). Conversely, if it is used as a shortcut, it threatens to weaken memory, self-regulation, and intrinsic motivation (Pan et al., 2023).

The Value of Desirable Difficulties

By flattening the learning curve, there is a risk of depriving students of the cognitive gymnasium necessary to develop critical thinking. This phenomenon relates to the desirable difficulties described by Robert and Elizabeth Bjork (2020). According to these researchers, deep learning is inherently uncomfortable: it requires proposing provisional answers, making mistakes, tolerating uncertainty, and retrieving information from memory through active effort (Bjork & Bjork, 2020).

If GenAI provides the solution before students formulate a hypothesis, the learning process may be interrupted (Kosmyna et al., 2025). The problem is not that the machine knows too much, but rather that its continuous use can lead to atrophy through disuse, weakening the ability to think critically without external support (Gerlich, 2025; Risko & Gilbert, 2016). Hence the efficiency paradox: by reducing the friction of effort, tools that facilitate academic work may make learning more superficial, depriving it of the discomfort necessary to consolidate knowledge (Kosmyna et al., 2025; Risko & Gilbert, 2016).

This saving of effort also affects the self‑perception of one’s competence. Fernandes et al. (2025) describe a variant of the Dunning-Kruger effect in GenAI-assisted environments. Instead of an individual's incompetence limiting the outcome, the algorithm performs the complex work, and the user interprets the success as their own (Pallant et al., 2025). Thus an illusion of mastery arises: a metacognitive failure in which the student believes they understand a subject simply because GenAI enabled them to produce a coherent text, thereby deactivating the self‑regulatory mechanisms needed to consolidate new knowledge (Jwair, 2025; Matueny & Nyamai, 2025).

Compounding this is a troubling inverse correlation: the greater the trust in GenAI, the less critical thinking is applied (Matueny & Nyamai, 2025). The fluency and confident tone of GenAI responses induce users to accept information under the premise that “if it reads well, it must be true”, thereby reducing critical scrutiny (Lee et al., 2025). Consequently, the risk is not merely that GenAI will hallucinate facts, but that it will generate a hallucination of competence in the user, who avoids cognitive effort by confusing the ability to produce an answer with true understanding.

The Challenge of Generative AI Literacy

Faced with the risk of disuse atrophy, the response from institutions cannot be rejection: it must involve the pedagogical management of this cognitive offloading (Gerlich, 2025). Integrating GenAI requires a shift from an education focused on producing answers to one oriented toward knowledge verification and practical application (MacCallum et al., 2024). In an environment where generating content incurs virtually zero cost, it is essential to develop a literacy that goes beyond instrumental use (Han, 2024) and teaches students to distinguish between instances where technology acts as a legitimate support and those where it serves as a substitute that impoverishes reasoning (MacCallum et al., 2024).

To consolidate cognitive sovereignty, clear frameworks must be established. UNESCO proposes a dual approach: to transform the student into a citizen capable of demystifying technological magic (UNESCO, 2024a) and to professionalize the teacher as a designer of learning experiences (UNESCO, 2024b). This literacy is not achieved through isolated theory, but through practical friction and progressive scaffolding (MacCallum et al., 2024). Constructivist models such as Scaffolded AI Literacy (SAIL) advocate for a progression from basic comprehension to critical evaluation, integrating ethical and socio-emotional impacts (Palmquist et al., 2025). The goal is to ensure that technology functions as a resource for empowerment rather than replacement (MacCallum et al., 2024; Soto-Sanfiel et al., 2025). It is not enough to simply use the tool; students must understand its ethical biases and be challenged to design their own systems to solve real-world problems, thereby avoiding the illusion of machine authority.

Therefore, it is necessary to move beyond the human-in-the-loop model, where the user acts merely as a supervisor, and advance toward a human-in-charge model that restores critical control and final authority to the individual (Jose et al., 2025). Without a solid foundation of knowledge established in long-term memory that allows auditing and correction of the algorithm, the student becomes not a pilot assisted by technology, but a blind passenger in an autonomous vehicle.

The Future of Work: Risks and Opportunities

The implications of GenAI transcend education, extending into the labor market and the configuration of professions. The World Economic Forum (2025) anticipates that, from 2025 onward, a major transformation will occur, altering both the distribution of employment and the manner in which individuals acquire and exercise their competencies.

The Specter of Deskilling

Within this horizon, the specter of deskilling looms large, understood as the progressive atrophy of professional skills resulting from automation (Crowston & Bolici, 2025). In this process, the erosion of intermediate roles, often referred to as the hollowed-out middle, emerges as the most vulnerable segment (Acemoglu & Restrepo, 2018). If GenAI assumes routine tasks, future entry-level workers lose the indispensable practice opportunities needed to become experts (Ferdman, 2025; Tyson & Zysman, 2022).

Compounding this is the paradox of automation: the more competent GenAI-based systems become, the fewer opportunities people have to practice relevant skills (Daware, 2025; Nilsson, 2025). Consequently, when the algorithm fails or hallucinates, the professional lacks the necessary experience to correct it, producing fragile, long-term dependent systems (Gerlich, 2025; Shukla et al., 2025).

Toward Hybrid Intelligence

The alternative to the risk of deprofessionalization is the construction of a hybrid intelligence model. In this framework, collaboration between people and GenAI systems is conceived not as substitution but as complementarity (Passerini et al., 2024). Hybrid intelligence seeks to articulate an ecosystem in which human creativity and algorithmic power are combined (Dell’Acqua et al., 2023; Fragiadakis et al., 2024). Evidence confirms that this collaboration, where the human guides and refines GenAI outputs, generates solutions that are more innovative and applicable than those produced by either the algorithm or the human acting in isolation (Labedzki, 2025).

Skills of the Future

The World Economic Forum’s Future of Jobs Report 2025 highlights a profound shift in the professional world: competencies based on mechanical repetition or encyclopedic memory are losing relevance, while metacognitive and social skills are moving to the center of human labor.

According to the report, analytical thinking and innovation are the most sought-after competencies. These are joined by leadership, social influence, resilience, flexibility, and agility, all necessary for adapting to changing environments (World Economic Forum, 2025). In a scenario where GenAI amplifies production, the capacity to orient, inspire, and coordinate human teams becomes a decisive factor (Chiu, 2025; Fernandes et al., 2025). At the same time, active learning and learning to learn strategies become even more essential: it is no longer sufficient to accumulate knowledge; one must know how to generate and apply it (Blasco & Charisi, 2024; Gerlich, 2025).

In this context, AI literacy should not be understood as an isolated technical skill but as the foundation that supports all others. It is necessary to understand how GenAI systems function, how to evaluate their results, and how to use them critically. Thus, the future of work should not be defined by the replacement of the human, but by the ability to articulate a symbiosis between technology and metacognition, where GenAI acts as a catalyst rather than a substitute for complex thought.

Conclusions

The available evidence suggests that GenAI is not a neutral instrument; its impact on cognition depends entirely on how it is integrated into educational, professional, and social processes. Used as a convenient shortcut, it may erode memory, attention, and critical thinking. Conversely, when conceived as productive friction and metacognitive scaffolding, it has the potential to become the most transformative tool since the printing press.

History shows that the human mind adapts to the tools it employs. The challenge, therefore, lies not in competing with the algorithm in terms of speed or storage capacity, but in preserving what the machine lacks: intentionality, ethical judgment, and the capacity to make sense of complexity.

Human cognition is already hybrid and will become irreversibly hybrid in the immediate future. The decisive question is not whether the mind will merge with GenAI, but whether this hybridization will result in evolution or involution. Everything will depend on one condition: that the human mind retains its sovereignty as a cognitive agent, that it defines the ends, judges the means, exercises final judgment, and maintains authority. The outcome will be decided by whether humanity acts as a pilot guided by technology or as a passenger confusing the illusion of understanding with the act of thinking.

Ethics Committee Statement

Not applicable.

Conflict of Interest Statement

Not applicable.

Funding

This research received no funding.

Authors’ Contribution

Not applicable.

Data Availability Statement

Not applicable.

Reviewer Suggestions and Exclusions

Not applicable.

References

Acemoglu, D., & Restrepo, P. (2018). Low-skill and high-skill automation. Journal of Human Capital, 12(2), 204–232. https://doi.org/10.1086/697242

Anthropic (2025). Claude [Large language model]. Anthropic. https://claude.ai/

Bjork, R. A., & Bjork, E. L. (2020). Desirable difficulties in theory and practice. Journal of Applied Research in Memory and Cognition, 9(4), 475–479. https://doi.org/10.1016/j.jarmac.2020.09.003

Blasco, A., & Charisi, V. (2024). AI Chatbots in K-12 Education: An Experimental Study of Socratic vs. Non-Socratic Approaches and the Role of Step-by-Step Reasoning. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.5040921

Boyle, R. W., & Farreras, I. G. (2015). The effect of calculator use on college students’ mathematical performance. International Journal of Research in Education and Science, 1(2), 95.

Brette, R. (2022). Brains as computers: Metaphor, analogy, theory or fact? Frontiers in Ecology and Evolution, 10, 878729. https://doi.org/10.3389/fevo.2022.878729

Chiu, T. K. F. (2025). AI literacy and competency: definitions, frameworks, development and future research directions. Interactive Learning Environments, 33(5), 3225–3229. https://doi.org/10.1080/10494820.2025.2514372

Chong, T. T. J., Apps, M., Giehl, K., Sillence, A., Grima, L. L., & Husain, M. (2017). Neurocomputational mechanisms underlying subjective valuation of effort costs. PLoS Biology, 15(2), e1002598. https://doi.org/10.1371/journal.pbio.1002598

Clark, A. (2008). Supersizing the mind: Embodiment, action, and cognitive extension. Oxford University Press.

Crowston, K., & Bolici, F. (2025). Deskilling and upskilling with AI systems. Information Research, 30, 1009–1023. https://doi.org/10.47989/ir30iconf47143

Daware, N. (2025). The De-Skilling Dilemma: A Critical Analysis of AI's Impact on Human Potential. Vidyabharati International Interdisciplinary Research Journal.

Delevich, K., Thomas, A. W., & Wilbrecht, L. (2018). Adolescence and “late blooming” synapses of the prefrontal cortex. Cold Spring Harbor Symposia on Quantitative Biology, 83, 37–43. https://doi.org/10.1101/sqb.2018.83.037507

Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4573321

Donald, M. (2002). A mind so rare: The evolution of human consciousness. WW Norton.

Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), eadn5290. https://doi.org/10.1126/sciadv.adn5290

Duncan, K., & Shohamy, D. (2024). Dopamine and learning. In The Oxford Handbook of Human Memory (pp. 689–710). Oxford University Press.

Ferdman, A. (2025). AI deskilling is a structural problem. AI & Society. https://doi.org/10.1007/s00146-025-02686-z

Fernandes, D., Villa, S., Nicholls, S., Haavisto, O., Buschek, D., Schmidt, A., Kosch, T., Shen, C., & Welsch, R. (2025). AI makes you smarter but none the wiser: The disconnect between performance and metacognition. Computers in Human Behavior, 175(108779), 108779. https://doi.org/10.1016/j.chb.2025.108779

Fields, R. D. (2005). Myelination: an overlooked mechanism of synaptic plasticity? The Neuroscientist: A Review Journal Bringing Neurobiology, Neurology and Psychiatry, 11(6), 528–531. https://doi.org/10.1177/1073858405282304

Fleming, L. L., & McDermott, T. J. (2024). Cognitive control and neural activity during human development: Evidence for synaptic pruning. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 44(26), e0373242024. https://doi.org/10.1523/JNEUROSCI.0373-24.2024

Flower, G., Vorthmann, S., Fulton, D., & Hamilton, N. B. (2025). Plasticity of myelination. Advances in Neurobiology, 43, 181–204. https://doi.org/10.1007/978-3-031-87919-7_8

Fragiadakis, G., Diou, C., Kousiouris, G., & Nikolaidou, M. (2024). Evaluating Human-AI Collaboration: A review and methodological framework. arXiv. https://doi.org/10.48550/ARXIV.2407.19098

Gavira-Durón, N., Jiménez Preciado, A. L., Alonso-Rivera, A., & Ramírez-Culebro, C. M. (2025). The role of generative AI in transforming educational practices. Education and New Developments. https://doi.org/10.36315/2025v2end027

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

Giannakos, M., Azevedo, R., Brusilovsky, P., Cukurova, M., Dimitriadis, Y., Hernandez-Leo, D., Järvelä, S., Mavrikis, M., & Rienties, B. (2024). The promise and challenges of generative AI in education. Behaviour & Information Technology, 1–27. https://doi.org/10.1080/0144929x.2024.2394886

Gill, M. L. (2020). Socrates’ critique of writing in Plato’s Phaedrus. In Wisdom, Love, and Friendship in Ancient Greek Philosophy (pp. 159–174). De Gruyter.

Google DeepMind (2025). Gemini [Large language model]. Google. https://deepmind.google/

Han, Y. (2024). Commentary: Generative artificial intelligence empowers educational reform: current status, issues, and prospects. Frontiers in Education, 9. https://doi.org/10.3389/feduc.2024.1445169

Harris, K. M., Kuwajima, M., Flores, J. C., & Zito, K. (2024). Synapse-specific structural plasticity that protects and refines local circuits during LTP and LTD. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 379(1906), 20230224. https://doi.org/10.1098/rstb.2023.0224

Hebb, D. O. (1949). The Organization of Behavior. Wiley.

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504–526. https://doi.org/10.1007/s40593-021-00239-1

Jose, B., Cherian, J., Verghis, A. M., Varghise, S. M., S, M., & Joseph, S. (2025). The cognitive paradox of AI in education: between enhancement and erosion. Frontiers in Psychology, 16, 1550621. https://doi.org/10.3389/fpsyg.2025.1550621

Jwair, A. A. B. (2025). Generative artificial intelligence in higher education: Students’ journey through opportunities, challenges, and the horizons of academic transformation. Cogent Education, 12(1). https://doi.org/10.1080/2331186x.2025.2589495

Kirschner, P., & Hendrick, C. (2024). How Learning Happens. Seminal Works in Educational Psychology and What They Mean in Practice. Routledge.

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. http://arxiv.org/abs/2506.08872

Labedzki, R. (2025). Human-AI collaboration in Hybrid Multi-Agent Systems. International Journal of Electronics and Telecommunications, 1–9. https://doi.org/10.24425/ijet.2025.155474

Lee, H. P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–22.

MacCallum, K., Parsons, D., & Mohaghegh, M. (2024). The Scaffolded AI Literacy (SAIL) Framework for Education: Preparing learners at all levels to engage constructively with Artificial Intelligence. He Rourou, 23. https://doi.org/10.54474/herourou.v1i1.10835

Malafouris, L. (2016). How things shape the mind: A theory of material engagement. MIT Press.

Marzola, P., Melzer, T., Pavesi, E., Gil-Mohapel, J., & Brocardo, P. S. (2023). Exploring the role of neuroplasticity in development, aging, and neurodegeneration. Brain Sciences, 13(12), 1610. https://doi.org/10.3390/brainsci13121610

Matueny, D. R. M., & Nyamai, D. J. J. (2025). Illusion of competence and skill degradation in artificial intelligence dependency among users. International Journal of Research and Scientific Innovation, 12(5), 1725–1738.

Michely, J., Viswanathan, S., Hauser, T. U., Delker, L., Dolan, R. J., & Grefkes, C. (2020). The role of dopamine in dynamic effort-reward integration. Neuropsychopharmacology: Official Publication of the American College of Neuropsychopharmacology, 45(9), 1448–1453. https://doi.org/10.1038/s41386-020-0669-0

Morris, R. G. (1999). D. O. Hebb: The organization of behavior, Wiley: New York; 1949. Brain Research Bulletin, 50(5–6), 437. https://doi.org/10.1016/s0361-9230(99)00182-3

Mount, C. W., & Monje, M. (2017). Wrapped to adapt: Experience-dependent myelination. Neuron, 95(4), 743–756. https://doi.org/10.1016/j.neuron.2017.07.009

Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Akhtar, N., Barnes, N., & Mian, A. (2023). A comprehensive overview of large language models. arXiv. https://doi.org/10.48550/ARXIV.2307.06435

Nicoll, R. A. (2017). A brief history of long-term potentiation. Neuron, 93(2), 281–290. https://doi.org/10.1016/j.neuron.2016.12.015

Nilsson, C. (2025). The artificial intelligence (AI) competence paradox: How AI reshapes clinical expertise. Transforming Government: People, Process and Policy. https://doi.org/10.1108/tg-02-2025-0048

OpenAI (2025). ChatGPT [Large language model]. OpenAI. https://chat.openai.com/

Ostroff, L. (2023). LTP and structural plasticity. IBRO Neuroscience Reports, 15, S37–S38. https://doi.org/10.1016/j.ibneur.2023.08.2125

Pallant, J. L., Blijlevens, J., Campbell, A., & Jopp, R. (2025). Mastering knowledge: The impact of generative AI on student learning outcomes. Studies in Higher Education, 1–22. https://doi.org/10.1080/03075079.2025.2487570

Palmquist, A., Sigurdardottir, H. D. I., & Myhre, H. (2025). Exploring interfaces and implications for integrating social-emotional competencies into AI literacy for education: A narrative review. Journal of Computers in Education. https://doi.org/10.1007/s40692-025-00354-1

Pan, W., Lu, J., Wu, L., Kou, J., & Lei, Y. (2023). Expending effort may share neural responses with reward and evokes high subjective satisfaction. Biological Psychology, 177, 108480. https://doi.org/10.1016/j.biopsycho.2022.108480

Passerini, A., Gema, A., Minervini, P., Sayin, B., & Tentori, K. (2024). Fostering effective hybrid human-LLM reasoning and decision making. Frontiers in Artificial Intelligence, 7, 1464690. https://doi.org/10.3389/frai.2024.1464690

Peláez-Sánchez, I. C., Velarde-Camaqui, D., & Glasserman-Morales, L. D. (2024). The impact of large language models on higher education: Exploring the connection between AI and Education 4.0. Frontiers in Education, 9. https://doi.org/10.3389/feduc.2024.1392091

Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002

Roochnik, D. (2024). Socrates’ critique of writing. Society, 61(6), 700–705. https://doi.org/10.1007/s12115-024-00968-8

Sadegh-Zadeh, S.A., Bahrami, M., Soleimani, O., & Ahmadi, S. (2024). Neural reshaping: the plasticity of human brain and artificial intelligence in the learning process. American Journal of Neurodegenerative Disease, 13(5), 34–48. https://doi.org/10.62347/NHKD7661

Saleh, Y., Abu Talib, M., Nasir, Q., & Dakalbab, F. (2025). Evaluating large language models: A systematic review of efficiency, applications, and future directions. Frontiers in Computer Science, 7, 1523699. https://doi.org/10.3389/fcomp.2025.1523699

Schultz, W. (2016). Dopamine reward prediction error coding. Dialogues in Clinical Neuroscience, 18(1), 23–32. https://doi.org/10.31887/dcns.2016.18.1/wschultz

Selemon, L. D. (2013). A role for synaptic plasticity in the adolescent development of executive function. Translational Psychiatry, 3(3), e238. https://doi.org/10.1038/tp.2013.7

Shahzad, T., Mazhar, T., Tariq, M. U., Ahmad, W., Ouahada, K., & Hamam, H. (2025). A comprehensive review of large language models: issues and solutions in learning environments. Discover Sustainability, 6(1). https://doi.org/10.1007/s43621-025-00815-8

Shukla, P., Bui, P., Levy, S. S., Kowalski, M., Baigelenov, A., & Parsons, P. (2025). De-skilling, cognitive offloading, and misplaced responsibilities: Potential ironies of AI-assisted design. arXiv. https://doi.org/10.48550/ARXIV.2503.03924

Soto-Sanfiel, M. T., Angulo-Brunet, A., & Lutz, C. (2025). The Scale of Artificial Intelligence Literacy for All (SAIL4ALL): Assessing knowledge of artificial intelligence in all adult populations. Humanities & Social Sciences Communications, 12(1). https://doi.org/10.1057/s41599-025-05978-3

Tian, J., & Zhang, R. (2025). Learners’ AI dependence and critical thinking: The psychological mechanism of fatigue and the social buffering role of AI literacy. Acta Psychologica, 260, 105725. https://doi.org/10.1016/j.actpsy.2025.105725

Topolnyk, Y., Gurevych, R., Debenko, I., Klochok, O., Cherniakova, Z., Yarova, A., & Maksymchuk, B. (2025). The impact of digital technologies and AI on adult learning: From digital literacy to neuroplasticity. BRAIN: Broad Research in Artificial Intelligence and Neuroscience, 16(2), 148–155. https://doi.org/10.70594/brain/16.2/11

Tyson, L. D., & Zysman, J. (2022). Automation, AI & work. Daedalus, 151(2), 256–271. https://doi.org/10.1162/daed_a_01914

UNESCO (2024a). AI competency framework for students. https://doi.org/10.54675/jkjb9835

UNESCO (2024b). AI competency framework for teachers. https://doi.org/10.54675/zjte2084

Wang, A. R., Groome, A., Taniguchi, L., Eshel, N., & Bentzley, B. S. (2020). The role of dopamine in reward-related behavior: shining new light on an old debate. Journal of Neurophysiology, 124(2), 309–311. https://doi.org/10.1152/jn.00323.2020

Wang, H., Xu, X., Yang, Z., & Zhang, T. (2025). Alterations of synaptic plasticity and brain oscillation are associated with autophagy induced synaptic pruning during adolescence. Cognitive Neurodynamics, 19(1), 2. https://doi.org/10.1007/s11571-024-10185-y

Weinstein, A. M. (2023). Reward, motivation and brain imaging in human healthy participants: A narrative review. Frontiers in Behavioral Neuroscience, 17, 1123733. https://doi.org/10.3389/fnbeh.2023.1123733

Willingham, D. T. (2009). Why don’t students like school? Jossey-Bass.

Willingham, D. T. (2021). Why don’t students like school? A cognitive scientist answers questions about how the mind works and what it means for the classroom (2nd ed.). Jossey-Bass.

World Economic Forum (2025). The Future of Jobs Report 2025.

X Corp. (2025). Grok [Large language model]. X. https://x.ai/

Author notes

Corresponding author: Alejandro Espeso-García, aespeso@ucam.edu

Additional information

Short title: Generative AI: Enhancement & Cognitive Offloading

How to cite this article: Espeso-García, A. (2025). Generative Artificial Intelligence: Between Enhancement and Cognitive Offloading. Cultura, Ciencia y Deporte, 20(66), 2698. https://doi.org/10.12800/ccd.v20i66.2698
