Why AI Needs More Polymaths and Philosophers

With AI, something really interesting is happening; many things, in fact. We have entered a new, wildly creative, surreal, and uncertain era. AGI or a super-intelligent, sentient AI might not be here yet, but systems that create a very convincing illusion of intelligence are definitely here. And this just happens to align perfectly with my proclivities and experiences. In many ways, I have been preparing for this moment my whole life. Few people have an engineering degree, a law degree, and a philosophy degree, and have spent years studying and teaching Yog-Vedantic philosophy in India.

I have been interested in technology, philosophy, cosmology, and consciousness my whole life. It just took me a few decades to recognize the importance of all of that in my life, and its collective importance for the future of humanity. These life-long interests in technology, philosophy, consciousness, and ethics are finally converging into a new career focus: AI philosopher and ethicist. Thirty years ago I was an AI researcher at Clarkson University, so this is an exciting homecoming for me.

It is already apparent that AI will play an increasingly large role in shaping the course of humanity; it will serve as a sort of collaborator or evolutionary engine alongside human innovation. Consequently, this engine needs to be designed thoughtfully and consciously, with open eyes and a deep understanding of our shared human values, because it will reflect whatever mentality and disposition creates it. Companies working on AI technology need people who understand this, who understand technology, philosophy, and the law. AI development teams need left- and right-brain thinkers. After all, AI systems do, and will, necessarily reflect the mindset of their designers.

I have been an engineer. I know how they think. With all due respect to engineers the world over, we do not want our artificially intelligent collaborators to think like engineers, at least not exclusively (unless they are writing code of course). We want them to think like wise elders.

In short, I have decided to bring together all my professional (and personal) experiences into this new role as AI philosopher and ethicist. AI philosophy brings together my past experiences in engineering, law, product management, and even teaching yoga, as well as my current studies in philosophy, cosmology and consciousness at the California Institute of Integral Studies.

What Are AI Ethics and Philosophy?

AI philosophy is a superset of AI ethics and responsibility. It includes the practical, present-day concerns of AI ethics and safety, such as reducing bias and ensuring alignment and transparency, while also taking a broader and longer-term view: the possibility of AGI, the need for new approaches to AI such as modeling abductive reasoning, and what AI means for consciousness, to name just a few areas that fall within the penumbra of this brand-new field.

AI ethics is narrower and more practical, addressing more immediate concerns. For our purposes, AI ethics is synonymous with AI safety and is part of AI responsibility.

Companies like Google and Meta have had AI ethicists for years already. At most companies, an AI ethicist has a broad remit. AI ethicists typically advocate for fairness, transparency, alignment, and accountability. They also collaborate with AI architects, engineers, and product managers to select and organize appropriate datasets and training methodologies, and they help create dataset documentation. Working with privacy attorneys, ethicists advocate for and help ensure not only privacy by design, but fairness, alignment, and transparency by design as well.

In short, AI ethicists work to ensure that AI technologies reflect our shared human values. Understanding what those values are, and should be, requires a background in both philosophy and jurisprudence.

Why do for-profit companies invest the time and resources to have new AI technologies shepherded by ethicists, when doing so adds short-term costs and potential product delays? Not everyone does, as we have seen with recent layoffs. In March, Microsoft laid off the AI ethics team responsible for enforcing its AI responsibility principles internally, despite being the company driving hardest and taking the most risks with its Bing + ChatGPT integration. That same month, Twitch laid off its AI responsibility team. Toward the end of 2022, Elon Musk laid off Twitter's AI ethics team as part of his sweeping, company-wide layoffs.

But the more forward-thinking companies understand that humane and ethical AI systems are a good long-term investment, for the company, its employees, its customers, and for our broader society.

For example, LinkedIn has built transparency into its AI systems by design since at least 2021. Google has been an AI-first company since 2016, and has had an AI ethics team since that time. Even OpenAI, despite how quickly it is moving, is thinking about AI safety. Of course, OpenAI strongly believes that public use is also necessary to bolster AI safety through trial and error. In many ways, it is still the Wild West in terms of AI norms, standards, and regulations.

Of course, self-governance is not sufficient, and some sort of regulation will be required. It seems that government and industry in the United States are roughly aligned on what that might look like, perhaps less so in the EU, where regulators take a more consumer-focused stance, as we saw with the GDPR. Sam Altman recently said that the current draft of the EU AI Act would be “over-regulating.”

What I would like to do is broader than simply AI product ethics, although that is a large part of it. It is AI philosophy. I want to see AI systems trained on knowledge that favors wisdom as much as raw information, and that portrays the world both accurately and aspirationally. AI technologies need to be forward-looking. Currently, because they are trained, without much imagination, on datasets that reflect our past, they are mostly retrospective.

This is a role that does not necessarily exist yet. It is arising alongside the AI renaissance we are living through. There are few job postings for AI ethicists, let alone AI philosophers. But I agree with Luca Rossi that philosophy (and other humanities specialties), rather than coding, will be among the jobs of the future.

Although AI philosopher is not a role that exists in most companies, it is absolutely crucial. In the same way that companies like Google have internet evangelists, they need AI philosophers. Humanity needs people working alongside AI engineers who are right-brained, emotionally mature, creative, and even spiritual; people who are living and moving as much from the heart as the mind.

My Background In Engineering, Law, and Product Management

I am naturally a philosophical thinker, always have been. I was a philosophical kid, reading the Tao Te Ching when I was young thanks to my father, or getting lost in the strange world of Frank Herbert’s Dune books, a world where AI had been banned and human “computers” (mentats) were cultivated instead. But I was also a working-class kid and studied engineering and computer science as much out of practicality as intellectual interest.

My first graduate work in the mid-1990s was in artificial intelligence. I was part of the AI lab at Clarkson University, writing a thesis on neural networks and the Chinese game of Go. Although I enjoyed the research and writing code in C++ and Lisp, I was already more interested in consciousness and the biggest question in AI: how to create a general-purpose AI. With an unsupportive advisor and the relatively infertile second AI winter of the 1990s, I left the field and joined the dot-com boom.

After working for six years as a software engineer at several startups, I took some time off to contemplate what I wanted from a career. I wanted to make an impact somehow, to influence technology policy. Engineering was too myopic; I wanted to solve problems on a much larger scale. So I went to law school and focused on intellectual property, jurisprudence, constitutional theory, and law & economics.

I spent the next decade working as a technology and intellectual property lawyer in Silicon Valley, advising on everything from open source to privacy. I was fortunate to work at places like Twitter and Google as a product counsel. Having a deep understanding of technology served me well in this role. I so enjoyed supporting product managers that I decided to get back to my technology roots and transition to product management, entering several hybrid product roles. I also advised Tech:NYC on technology policy during my time living in New York City.

Plus, not only have I worked as a product manager, but my clients during all the years I was product counsel were product managers. I am used to working with PMs and engineers, as well as lawyers, management, and marketing.

My clients and colleagues always described me as having “more integrity than anyone they work with.” Sometimes this hampered my career. But I always saw it as a long-term reputational investment.

As they say, law is applied philosophy. Lawyers are uniquely suited for applying ethics. We are required to pass an ethics exam. Despite how often lawyers make headlines for unethical conduct, there is a deep connection between law and ethics.

Most ethicists working in AI today seem to have graduate degrees in some variation of AI technology rather than in the humanities. It seems self-evident that these teams need a diverse set of educational, cultural, and experiential backgrounds, including technologist-philosophers.

When I complete my master's in philosophy, cosmology, and consciousness next year, I plan to apply to PhD programs in the philosophy of AI at places like NYU and Stanford. My plan is to collaborate with industry as I proceed along a hybrid academic-industry path.

Why We Need AI Philosophers

As we have seen with AI bias, product managers and engineers are not thinking enough about the bigger questions. It is obviously time to start doing that. AI is quickly becoming a collaborator alongside us. We want that collaborator to have the best interests of humanity in mind, and to reflect our greatest potential, not our past mistakes. This requires more than just Isaac Asimov’s Three Laws of Robotics.

AI models are getting more powerful and useful by the day, but standards and norms are barely keeping up, if at all. The IEEE has recommended that organizations developing AI designate someone inside the company with an ethics background as a “Chief Value Officer,” who would promulgate an internal code of conduct and empower employees to raise ethics concerns anonymously and without fear of negative repercussions. This is the kind of role I am talking about.

Important and Open Questions in AI and Machine Learning

As I relaunch my podcast and YouTube channel this month, and begin speaking more broadly about AI philosophy, I will be exploring a whole host of fascinating questions, including the following.

Questions of Consciousness

Current approaches to artificial intelligence, especially machine learning models like ChatGPT, are essentially an attempt to model the structure of the brain (“neural networks”), and they sit within a broader assumption that intelligence, mind, and consciousness are epiphenomena of brain chemistry. So far there is no proof that this is the case. Without further evidence, brain activity is as likely correlation as it is causation.
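
As a side note for the technically curious, here is a minimal sketch, in Python with made-up numbers, of what a single artificial “neuron” in such a network actually computes: a weighted sum of inputs passed through a nonlinearity. It is only a loose analogy to a biological neuron, not a model of the brain in any deep sense.

    import math

    # Purely illustrative: the inputs and weights below are invented.
    # Models like ChatGPT stack and train billions of such units.
    def neuron(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-activation))  # sigmoid "firing rate"

    print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))  # ~0.33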

This question of consciousness is known as the “hard problem of consciousness” in cognitive science, because it is difficult to see how physical processes could give rise to something as subjective, amorphous, dynamic, fluid, and even ethereal as consciousness.

If we are going to develop a deeper understanding of the nature of consciousness in connection with AI, we need to adopt an approach and an attitude that is simultaneously both more discerning and more open-minded than current approaches. Discerning in the sense that we need to think critically about the assumptions underlying the current approaches; open-minded because truly modeling and understanding the human mind will likely require more holistic and progressive thinking than we have seen to date in mainstream cognitive and computer science.

There is no indication that anything like consciousness as we know it will simply emerge inadvertently as a byproduct of machine learning systems, despite the many recent assertions to the contrary.

What It Means To Think and Feel as a Human

Questions of consciousness aside, if we are going to attempt to recreate human intelligence, we should understand at a reasonably deep level what it means to think and feel and be conscious. For example, what does it mean to have the experience of enjoying music, or a pleasant sunset, or a heartbreaking separation? What else besides mere information processing is going on there, and what aspects of that are essential?

Part of this inward examination is considering the three types of thinking that philosophers and scientists have identified. Only two of them have been implemented in any AI system: deduction (early AI such as expert systems) and induction (machine or deep learning). There is a third type of thinking that Charles Sanders Peirce called abduction, or abductive inference: the means by which we arrive at flashes of insight and those aha moments after spending substantial time with a problem. It is quintessential detective or inventor thinking, where solutions seem to emerge not by step-by-step logic but in a flash, as if out of nowhere. And it is a kind of thinking that mainstream AI systems have not, so far, attempted to emulate. To make the distinction concrete, see the toy sketch below.
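
Here is that sketch, in Python, built around Peirce's own “beans from a bag” illustration. The rules, observations, and hypothesis weights are invented for the example; it makes no claim about how any real AI system reasons.

    # Deduction: from a general rule and a specific case, the result follows.
    rule = lambda bean: "white" if bean["from_this_bag"] else "unknown color"
    case = {"from_this_bag": True}
    print("deduction:", rule(case))

    # Induction: from many observed cases and results, generalize a rule.
    observed_colors = ["white", "white", "white", "white"]
    if all(color == "white" for color in observed_colors):
        print("induction: beans from this bag are (probably) white")

    # Abduction: from a rule and a surprising result, guess the case that
    # would best explain it. The candidate weights here are invented.
    observation = "this bean is white"
    hypotheses = {"it came from this bag": 0.8, "it was painted white": 0.1}
    best_explanation = max(hypotheses, key=hypotheses.get)
    print(f"abduction: {observation}; perhaps {best_explanation}")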

Understanding the Inner and Outer World

Going beyond pure reason, what else is necessary for us to create true intelligence? What is intelligence exactly? And is modeling or creating intelligence enough? We should be shooting for wisdom, not just intelligence.

Is it necessary for an AGI to have a complete model of the whole world? I think there is an argument that, for a machine to truly be conscious, to truly think well, and to be robust, it has to understand cause and effect, hold a mental model of the world, and have some interactive, sensing relationship with that world. Is machine learning (inductive reasoning that simply recognizes patterns in the world) sufficient to develop a robust mental model? And, for a machine to truly understand this model, to grok it as Heinlein would say, does it need to interact and sense? What about desires? Are programmed desires sufficient, or are autogenous desires necessary? Can these arise from a sufficiently advanced AI system?

And what about emotions? Is it necessary to model emotions? Or are they a byproduct of intelligence? Perhaps there is more to thinking than a pure sort of Apollonian reason; given the richly challenging yet rewarding nature of existing in the cosmos as a conscious, autonomous entity, perhaps certain Dionysian modes necessarily come along for the ride.

The AGI Superintelligence and Its Implications

Nobody knows when someone will create an artificial general intelligence (AGI). It could be just around the corner, or it could be decades away. Either way, there is no denying that it will be an enormous inflection point in the history of humanity. It is never too early to prepare for this moment and to address its impact on the economy and our way of life.

This also raises the question of AI motivation: Can an AI develop its own intrinsic motivations or will they always be supplied by its creators? If they become intrinsic, how do they arise? From some sort of desire? In what ways will these desires be connected to the modes of interaction the AI has with the world? Will these AGIs experience pleasure? Have emotion?

You can see how quickly the concept of AGI brings up all these secondary questions around AI.

AI Is Not the Second Coming

A Technological Messiah?

As I said in a recent YouTube short, I think we are investing too much in the possibility of AI and discounting our human potential. We are treating AI like a messiah or savior figure, vesting all of our hopes and dreams in these systems. For example, Bill Gates recently said that AI is going to help us address climate change, poverty, education, and health. And I think it can help us solve these problems. But, again, it will be more of a collaboration, with AI augmenting human potential, not replacing it. As I argue above, because we are nowhere near replicating all the ways that humans reason, we need humans to solve these problems, as imperfect and begrimed and scruffy as we are.

If AI systems are going to talk to us and feed us information, that information must be not only factual, but uplifting and aspirational as well. Expansive, liberating. What I have in mind is essentially the opposite of the worst-case AI scenario of misinformation, bias, and “hallucinations.”

As you can see, there are many unanswered questions in this still nascent field of artificial intelligence.

What’s In a Name?

Someone asked, ‘What is artificial intelligence?’ And someone else said, ‘A poor choice of words in 1954.’ — From a tweet quoted by sci-fi writer Ted Chiang

I also wonder whether “artificial intelligence” is an overly constraining name for what we are setting out to create. The term was coined in the middle of the 20th century, at the heart of the age of reason and the dawn of the Information Age, when pure rationality was elevated above all other ways of being and knowing, such as feeling, empathizing, intuiting, and considering matters of the heart, and when mere information was elevated above wisdom. Again, I think we should be creating AI systems that are more like wise elders than HAL 9000, Data, or Marvin the Paranoid Android from The Hitchhiker’s Guide to the Galaxy.

By framing the endeavor as one of creating an artificial intelligence, I think we are unnecessarily constraining what we are creating. After all, branding is everything. But, of course, I realize the name is probably here to stay.

Conclusion

There are not nearly enough AI philosophers and ethicists working to shape the development of AI thoughtfully. Companies developing AI technology need to consider both the immediate concerns and the broader, longer-term view of AI’s impact on humanity. AI technologies need to align with human values, and to be made wiser.

In addition, we need a more discerning and open-minded approach to understanding the human mind, including a serious consideration of abductive reasoning, the potential role of desires and emotions in AI, and the importance of modeling the immense complexity of the world, of having an interactive relationship with it.

My background in AI research, engineering, product management, law, and philosophy makes me perfectly suited to advise companies on these matters. Over the coming months and years, I want to advocate for a thoughtful and conscious approach to AI development, and for the integration of philosophy and ethics in shaping present and future AI technologies.

My mission is to help others connect with their inner wisdom and create a more conscious world. AI philosophy is a crucial part of that mission.

I just relaunched my podcast, YouTube channel, and email newsletter. Follow me there as I unpack and explore all of this.
