BG & AI Post 1: The Bhagavad Gita and the Ethical Use of AI: A Path to Responsible Technology

कर्मण्येवाधिकारस्ते मा फलेषु कदाचन।
मा कर्मफलहेतुर्भूर्मा ते सङ्गोऽस्त्वकर्मणि॥
(Karmaṇy-evādhikāras te mā phaleṣhu kadāchana |
Mā karma-phala-hetur bhūr mā te saṅgo ’stvakarmaṇi ||)
“You have the right to work, but never to the fruit of work. Do not let the fruit of action be your motive, nor let your attachment be to inaction.”

Technology, Education, and My Journey

Ever since I arrived in the United States in 2009, I have been fascinated by the intersection of technology and education. As someone passionate about teaching and learning, I was particularly drawn to how technology could transform writing instruction. Over the years, I’ve seen how digital tools can empower students from diverse linguistic and cultural backgrounds, making education more inclusive and accessible. My PhD dissertation focused on Learning Management System (LMS) interface design, where I argued for involving students from different cultural and linguistic backgrounds in developing these platforms. The goal was simple: to create safe, inclusive, and user-friendly spaces for learning.

Writing in the Age of Digital Technologies

As a writing professor, I’ve designed and taught courses that explore what it means to be a writer in the age of digital technologies. But with the rise of AI tools like ChatGPT, I’ve also felt the growing anxiety in academia. When ChatGPT launched in November 2022, the fear of rampant cheating and academic dishonesty became a pressing concern. This anxiety, however, is not new. In classical Greece, Plato was skeptical of writing itself, fearing it would erode memory and critical thinking. The printing press, word processors, and now generative AI have all faced similar skepticism. Yet each of these technologies has also brought transformative benefits.

A Personal Turning Point

For a long time, I observed the debates surrounding students’ misuse of ChatGPT but hesitated to address it in my writing courses. That changed in November 2024, when I saw my own 8th-grade son using ChatGPT for his school assignments. I was deeply concerned, wondering whether he was using it as a shortcut rather than as a tool for building knowledge. Witnessing his use of ChatGPT was both shocking and enlightening: as a writing professor who has always emphasized the relationship between writing and technology, I realized I had been avoiding a crucial conversation.

Why hadn’t I embraced AI as a tool for experimentation and discussion in my classes? I began to wonder how many of my students were submitting answers prepared by tools like ChatGPT, especially since I had a zero-tolerance policy for AI use but no reliable way to verify whether their work was AI-generated. Ignoring or denying AI’s existence was not a viable solution. It was more prudent to confront it head-on.

Incorporating AI into My Teaching

With this in mind, I decided to incorporate AI into my writing courses, making AI itself a topic of discussion and encouraging students to experiment with it starting in January 2025. This, I believed, was the best way to develop critical perspectives and teach students to use AI responsibly, ethically, and creatively. Since there is no reliable plagiarism detection method for AI-generated writing, I concluded that teaching self-discipline and self-restraint (principles emphasized in the Bhagavad Gita) was the most effective way to promote responsible AI use.

Revising My Writing Syllabi

In December 2024, I decided to act. I read several books and articles on how AI tools like ChatGPT could be integrated into writing classrooms. I revised my syllabus to make AI a central topic of discussion and a collaborative tool for writing activities. To my delight, my students have embraced this approach, especially since many professors still enforce a zero-tolerance policy for AI use. But as I delved deeper into the ethical implications of AI, I realized that technical solutions alone (like AI detection tools) are not enough to address the challenges we face. This is where the timeless wisdom of the Bhagavad Gita comes in.

Why the Bhagavad Gita?

My decision to integrate the principles of the Bhagavad Gita into my teaching and research on AI is deeply personal. For the past two years, I have been studying the Gita, and its teachings have profoundly shaped my understanding of life, work, and education. The Gita’s emphasis on self-regulation, ethical action, and detachment resonates with the challenges we face in the age of AI.

Moreover, the Gita is part of my cultural and spiritual heritage. My great-grandfathers were scholars of the Vedas, and our family was known as “Vedaas” in our village. I grew up as “Veda ko Raju,” surrounded by the wisdom of ancient scriptures. I remember my grandfather healing villagers with mantras from the Vedas; many parents brought children suffering from asthma to him for treatment. While I cannot say for certain whether these mantras cured their ailments, I do know that they brought comfort and hope. This connection to the Vedas and the Gita inspired me to name my blog “Ved Vani Community Literacy Forum” before changing it to “Toronto Realty and Rhetoric.”

What’s Next?

In the fall of 2025 (August), I will teach a course on writing and technology for the Science, Technology, and Math Living Learning Community at my university. This course will integrate the principles of the Bhagavad Gita to explore the ethical use of AI in writing and education. I am also working on journal articles on how the Gita’s teachings can guide the responsible use of AI.

The Bhagavad Gita reminds us that true progress comes from ethical action and self-awareness. As we navigate the challenges and opportunities of AI, let us strive to use this powerful technology with wisdom, responsibility, and compassion. As the Gita says:

“Yoga is the journey of the self, through the self, to the self.” (Chapter 6, Verse 5)

Let this journey guide us in creating a future where technology serves humanity, not the other way around.

Historical Resistance to New Technologies in Education

A History of Fear and Innovation in Education

Throughout history, the introduction of new technologies in education has been met with resistance and fear. From Plato’s skepticism of writing to modern anxieties about artificial intelligence, each technological advancement has sparked concerns about its potential to disrupt traditional learning, erode skills, and undermine academic integrity. These fears, while often rooted in genuine concerns, are frequently shaped by uncritical perspectives and a lack of understanding of how technology can be integrated responsibly. This list explores key moments in the history of educational technology, highlighting recurring patterns of fear and resistance, and argues that such anxieties are often exaggerated or misplaced.

Let’s look at the different writing technologies developed over time and how people reacted to them.

Early Technologies and Early Fears

  • c. 370 BCE: Plato: In the Phaedrus, decried the invention of writing, fearing it would erode memory and critical thinking.
  • 1440: Printing Press: Gutenberg’s press (Mainz, Germany) alarmed many who feared that widespread access to printed books would open literacy to everyone, threatening the exclusivity of knowledge.
  • 1801: Chalkboards: Initially met with skepticism about their effectiveness in teaching.

Technological Advancements and Academic Fears

  • 1969: Word Processors: Revolutionized how people write and edit texts, but were initially seen as tools that would make writing too easy, reducing students’ effort.
  • 1970s: Calculators: Educators feared students would lose basic mathematical skills.
  • 1980s: Spell Checkers: Criticized for potentially undermining students’ spelling skills.
  • 1990s: Paper Mills: The rise of online term-paper mills raised concerns about academic integrity.
  • 1990s: Plagiarism Crisis: The internet made it easier to copy and paste content, leading to widespread plagiarism concerns.
  • 1990s: Computers in Classrooms: The initiative to wire U.S. school classrooms with computers was championed by Bill Clinton and Al Gore as part of their efforts to modernize education through technology. Writing professors initially believed computers would “do magic” and solve all writing challenges.
  • 2007: Smartphones and Texting: The introduction of smartphones, particularly the iPhone in 2007, and the rise of texting led to fears that students’ writing skills would be destroyed. The linguist John McWhorter, however, has traced such fears from the classical Greek period to modern times, arguing that although they recur with every new medium, they rarely prove true.
  • 2009: Grammar Checkers (e.g., Grammarly): Grammarly, launched in 2009, was feared to weaken students’ ability to self-edit and learn grammar rules.

Digital Tools and Modern Concerns

  • 2017: Paraphrasing Tools (e.g., Quillbot): Raised concerns about students bypassing original thought and critical writing.
  • 2020–2021: COVID-19 Pandemic: The shift to remote learning during the pandemic led to a surge in plagiarism and academic dishonesty.
  • November 30, 2022: ChatGPT: Fears of rampant plagiarism and the end of critical thinking in academia emerged with the release of ChatGPT.
  • December 6, 2022: “The College Essay Is Dead” by Stephen Marche: Argued that AI tools like ChatGPT would render traditional essays obsolete.

Uncritical Fears and Overreactions

  • Fear of Automation (2010s–Present): Concerns that AI tools would replace human creativity and critical thinking. For example, many predicted that ChatGPT would “destroy everything,” claiming machines would take over jobs in film, music, and other industries, leaving humans unemployed.
  • Overreliance on Detection Tools (2020s): Belief that AI plagiarism detectors alone could solve academic dishonesty.
  • Ignoring Ethical Use (2020s): Lack of focus on teaching students how to use AI responsibly and ethically.
  • Nostalgia for Traditional Methods (Ongoing): Resistance to change based on idealized views of past educational practices.

Academia and Its Attempt to Remain Up-to-Date

The most important point is that for academia to remain relevant, it must embrace and incorporate these technologies. Resistance to change only hinders progress, while adaptation ensures that education evolves alongside technological advancement. If academia fails to incorporate the technological developments taking place in society, it risks becoming obsolete, and people may lose faith in it. Academia must integrate not only new technologies but also the social, cultural, and linguistic movements happening around it, along with innovative ways of delivering knowledge as pedagogical tools. This is what keeps education current and relevant.

Technological Evolution and Our Perspectives

I know you may not regard writing as a technology now, in the era of the iPhone 16 Pro Max, but writing itself was once the most revolutionary technology of its time. According to Dennis Baron (1999), every time a new technology is introduced, people are skeptical of it. Initially it is often expensive and inaccessible; over time it becomes more widely available and people begin to trust it; and as production increases, it becomes cheaper and more widely adopted.

If you examine the technologies we use today, you’ll find that many were originally designed for entirely different purposes. Consider the calculator, the computer, or even Facebook, which Mark Zuckerberg created as a social networking site for Harvard students; today it is used for almost everything you can imagine. Similarly, the pencil was a revolutionary writing tool at one point in history.

We tend to forget old technologies once we become accustomed to newer ones. But it’s important to remember that old technologies never truly die. Instead, they evolve and are incorporated into newer ones. If old technologies were not integrated into new ones, people would struggle to find meaning in the new technologies and would not be able to use them effectively.

Conclusion

The history of educational technology reveals a recurring pattern: every new innovation is met with fear and resistance. From the printing press to ChatGPT, critics have warned of the dangers these tools pose to learning, creativity, and academic integrity. However, these fears are often rooted in uncritical perspectives that fail to consider how technology can be harnessed responsibly. For instance, the fear that ChatGPT would “destroy everything” and render humans obsolete in creative industries has proven exaggerated. Instead of resisting change, educators and society must focus on developing critical perspectives, ethical frameworks, and adaptive strategies to integrate technology in ways that enhance, rather than undermine, learning. As history shows, fears are inevitable, but they should not dictate our response to progress.