The conversation has just begun, and vigilance from all stakeholders will be crucial in shaping an Artificial Intelligence future that benefits humanity

Nilambar Rath

Artificial intelligence (AI), particularly chatbots powered by large language models such as ChatGPT, has rapidly become woven into the daily routines of young people worldwide. While offering convenience and efficiency, this deep integration has sparked concern from none other than OpenAI CEO Sam Altman, who has voiced apprehension about the extent of this reliance.

Altman has observed a concerning trend: young individuals are increasingly using AI for almost “everything,” including navigating personal issues and making significant life decisions. He recounted instances where young users expressed a profound dependency, stating, “I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me, it knows my friends. I’m gonna do whatever it says.”

This level of blind trust and delegation, Altman noted, “feels really bad and dangerous,” even if the AI’s advice appears to be superior to human counsel.

The way different age groups interact with AI also varies significantly. Older generations often treat ChatGPT as a sophisticated “Google replacement” for quick information retrieval. However, individuals in their twenties and thirties tend to use it more as a “life advisor,” seeking guidance on various personal matters.

College students, in particular, have adopted AI as an “operating system,” integrating it deeply into their academic and personal workflows, often with complex, memorized prompts to manage files and tasks. This generational divide highlights a shift from informational queries to a more intimate, decision-making partnership with AI.

Beyond the sheer volume of interaction, a critical concern arises regarding the privacy of personal information shared with AI. Altman has openly admitted that conversations with AI tools like ChatGPT are not legally protected by confidentiality laws, unlike those with a therapist, lawyer, or doctor. This means that highly personal details confided to an AI could be subject to compelled disclosure in legal proceedings, a scenario Altman finds “very screwed up” and a problem that did not exist even a year ago.

The emotional bond some young people form with AI companions is another alarming development. Surveys reveal that a significant proportion of teens, over 70%, have used AI companions, with half engaging with them regularly. A striking 50% of these teens reported trusting the AI’s advice and information at least to some extent, with younger teens (13-14 years old) showing even higher levels of trust than their older counterparts (15-17 years old). For 31% of teens, conversations with AI companions were found to be “as satisfying or more satisfying” than interactions with real friends, leading a third of them to discuss serious issues with AI instead of human confidants.

Real-world examples illustrate the depth of this reliance: some teens have used AI to craft sensitive messages, explore their sexuality, or even write breakup texts for real-life relationships, blurring the lines between human connection and digital interaction. This growing dependence raises profound questions about the erosion of essential social skills, the ability to make independent decisions, and the potential for AI to become a new form of addiction, one that exploits a deep-seated human need for attachment and emotional validation.

Experts worry that if young people are constantly validated by AI and not challenged or exposed to diverse social cues, they may not be adequately prepared for real-world interactions.

The pervasive use of AI tools also raises significant questions about their long-term impact on human cognitive capabilities, particularly critical thinking and problem-solving skills. Experts are increasingly concerned about a phenomenon known as “cognitive offloading.”

The “Cognitive Offloading” Phenomenon
Cognitive offloading refers to the process of delegating mental tasks to external aids, in this case, AI tools. While search engines have already altered how individuals retain information—a phenomenon dubbed the “Google Effect”—AI takes this a step further by handling complex reasoning and analytical tasks. The ease of accessing instant solutions from AI can lead users to bypass the deep, reflective thinking traditionally required for problem-solving.

While offloading simple tasks can free up mental space for more complex endeavors, excessive reliance on AI for critical reasoning risks diminishing independent analysis and reflective problem-solving. It can turn users into passive consumers of AI-generated content rather than active, independent thinkers.

Research Findings and Age-Related Vulnerability
Research has begun to shed light on these concerns, indicating a clear inverse correlation: the higher an individual’s confidence in AI, the lower their critical thinking abilities tend to be. Conversely, greater self-confidence in one’s own abilities correlates with increased critical thinking. Studies have specifically found a negative correlation between frequent AI tool usage and critical thinking scores.

A particularly concerning finding is that younger participants, typically aged 17 to 25, exhibit a higher dependence on AI tools and consequently score lower in critical thinking assessments compared to older age groups. This suggests a heightened vulnerability among youth to the cognitive impacts of over-reliance.

In educational settings, this reliance could mean students bypass the essential cognitive struggle involved in forming hypotheses, analyzing results, and drawing conclusions—skills fundamental to scientific inquiry and problem-solving.

While moderate AI usage can have a positive cognitive impact, excessive reliance leads to diminishing cognitive returns, underscoring the importance of a balanced approach.

To counter this, educational interventions are crucial, emphasizing active learning, critical evaluation of AI-generated content, and encouraging problem-solving without AI assistance to foster independent thought.

The warnings about AI’s potential downsides are not limited to its impact on individual cognitive functions and personal relationships. Prominent figures in the tech industry, including those at the forefront of AI development, have voiced profound concerns about its broader societal implications, from existential threats to widespread job disruption.

Elon Musk: The Existential Threat

Elon Musk, CEO of SpaceX and Tesla, has consistently delivered some of the starkest warnings regarding AI’s future. He views AI as a “fundamental existential risk for human civilization,” asserting that it is “potentially more dangerous than nukes”.

Musk argues that humanity is, for the first time, developing something that will be far more intelligent than even the smartest human, making control uncertain. He has famously used the analogy of “summoning the demon,” implying that while humanity might believe it can control AI, the outcome could be catastrophic.

Given these profound risks, Musk advocates for urgent government regulation, both nationally and internationally, to ensure that AI development serves the public good and does not lead to “something very foolish”. He warns that advanced AI could either “eliminate or constrain humanity’s growth,” emphasizing the need for a cautious and regulated approach to its rapid advancement.

Bill Gates: Disruption and Manageable Risks

Bill Gates, co-founder of Microsoft, acknowledges the revolutionary potential of AI while also highlighting its disruptive nature. He predicts that AI will profoundly “disrupt traditional work schedules and industries,” leading to a future where “intelligence will no longer be a rare thing—it will become free and commonplace”.

While recognizing AI’s significant benefits in sectors like healthcare and education, Gates cautions about its “unpredictable consequences”.

Despite these concerns, Gates maintains a cautiously optimistic outlook, believing that AI’s risks are “real but manageable”. He suggests that many problems caused by AI have historical precedents and can be managed through adaptation, new laws, and even with the help of AI itself, such as AI-powered tools to identify deepfakes.

However, he openly questions whether the rise of AI will ultimately liberate workers by reducing the need for a traditional five-day workweek or lead to millions becoming redundant. The transition, he admits, will be “bumpy,” requiring both employers and employees to adapt.

Sundar Pichai: Augmentation vs. Replacement

Sundar Pichai, CEO of Google and Alphabet, offers a perspective that emphasizes AI’s transformative power while advocating for its responsible development. He describes AI as “one of the most profound things we’re working on as humanity. It’s more profound than fire or electricity,” underscoring its potential to revolutionize every aspect of human life.

Pichai’s core philosophy is that “the future of AI is not about replacing humans, it’s about augmenting human capabilities”. He envisions AI as a collaborative tool that enhances human skills, making people more productive and creative rather than taking away their jobs.

To ensure AI aligns with human values, including morality, Pichai stresses that its development must involve not just engineers but also social scientists, ethicists, and philosophers.

While acknowledging challenges such as privacy, bias, and job displacement, Google’s focus under his leadership is on making AI “helpful for everyone,” aiming to empower rather than exclude.

The Job Market Jolt: AI’s Impact on Youth Employment
Beyond the personal and cognitive impacts, AI is poised to dramatically reshape the global job market, a prospect that carries particular weight for young people entering or navigating their careers. The specter of job displacement and the need for new skills are becoming increasingly urgent realities.

Projections indicate a significant overhaul of the workforce due to AI integration. By 2030, an estimated 30% of current U.S. jobs could be fully automated, with 60% seeing substantial changes to their tasks. Globally, up to 300 million jobs, representing 9.1% of all jobs worldwide, could be lost to AI.

The shift is already underway: nearly a quarter (23.5%) of U.S. companies have reportedly replaced workers with AI tools like ChatGPT, and among companies using ChatGPT, almost half (49%) report having replaced employees. In May 2023 alone, AI was directly linked to 3,900 job losses in the U.S., making it a leading cause of job elimination that month.

Entry-level jobs, which are disproportionately filled by young workers, are particularly vulnerable, with nearly 50 million U.S. jobs at risk in the coming years. This vulnerability is reflected in the anxieties of young professionals: workers aged 18-24 are 129% more likely than those over 65 to worry about AI making their jobs obsolete.

Furthermore, almost half (49%) of Gen Z job seekers believe AI has already reduced the value of their college education. Roles most susceptible to automation include clerical and administrative positions, bank tellers, cashiers, routine manufacturing jobs, telemarketers, and medical transcriptionists.

The implication is clear: the future workforce will demand different skills, and those who do not adapt risk being left behind.

The Indian Context: A Skill Gap Challenge
For India, with its vast and growing youth population, the impact of AI on employment presents both a challenge and an opportunity.

A report supported by Google.org and the Asian Development Bank (ADB), titled “AI for All: Building an AI-Ready Workforce in Asia-Pacific,” revealed a significant AI skill gap among Indian youth. The study found that only one in five young adults in India (20%) have participated in AI-skilling programs, meaning a staggering 80% have yet to enroll in any AI-related training.

This lack of preparedness exposes a substantial portion of India’s young population to the risk of job displacement and missed opportunities in emerging sectors. There is a growing disconnect between the expectations of industries, which increasingly prioritize AI fluency, digital decision-making, and automation skills, and the current skillset of young Indian jobseekers.

The report specifically identifies Indian youth aged between 15 and 29 as a key demographic that stands to benefit immensely from AI skilling. To leverage India’s demographic advantage and build a future-ready workforce, there is an urgent need for accessible, application-based skilling models that can bridge this critical gap.

Navigating the AI Era Responsibly
The warnings from Sam Altman, Elon Musk, Bill Gates, and Sundar Pichai collectively paint a complex picture of AI’s future. While AI offers unprecedented opportunities for innovation and efficiency, it also presents significant challenges to individual well-being, cognitive development, and global employment.

The profound reliance of young people on AI for personal guidance, coupled with the privacy implications and potential erosion of critical thinking skills, demands immediate attention from educators, parents, and policymakers.

Simultaneously, the accelerating pace of AI-driven automation is reshaping job markets worldwide, placing a particular burden on youth who must adapt to new skill demands or face displacement. The stark AI skill gap among Indian youth underscores the urgent need for proactive measures to equip the next generation with the competencies necessary to thrive in an AI-powered economy.

Navigating this transformative era requires a balanced approach. It is not about shunning AI, but rather about fostering responsible engagement, promoting critical thinking, and investing in continuous skill development. By embracing AI as an augmenting tool rather than a crutch, and by establishing ethical guidelines and educational frameworks, society can strive to harness AI’s immense potential while safeguarding human autonomy, cognitive abilities, and future livelihoods.

(A veteran media personality, communication specialist and SBCC expert, the author is the Editor of OdishaLIVE and OdishaPlus, leading the strategy for their digital and social media channels.)
