
Book People Signing
As Yogi Berra said, "If you come to a fork in the road, take it!"
I'm still ambitious about The Path to Singularity, but I'm also concerned about the time demands of marketing, maintaining other commitments, getting some writing done, and work/life balance. I deeply appreciate all that publicist Joanne McCall did for me, the introductions to podcasts and more, and I enjoyed every minute of it, but I decided to part ways with her for now. I won't know what opportunities I might be missing by taking this fork rather than continuing to work with her. C'est la vie.
I had a very successful book signing at our preeminent local independent bookstore, Book People, on Wednesday evening, January 29. Sixty or so people showed up on a cloudy, rainy night, as large a crowd as at any signing I have attended there. I knew only ten or so of them: my son, Rob; a few from the Department of Astronomy; a few from the Austin Forum; a neighbor; my tax accountant; and one fellow who had been a student in my very first class on The Future of Humanity in 2013 and has been following my blogs. That means there were about 50 total strangers, drawn by the power of Book People or of the topic. Roughly half the audience were women. I'm still thinking about the implications.
I started off by asking the audience how many had used a large language model; about a third had. Then I asked how many had used the new Chinese DeepSeek chatbot, which had been in the news for only about three days; one young man raised his hand. Finally, I asked how many of them sort of felt the world was passing them by, and over half laughed and held up their hands.
I had invited Jay Boisseau, my former PhD student, founder and director of the Austin Forum on Technology and Society, and a book blurber, to be my moderator. I had drafted some questions Jay could put to me, designed to explore topics that podcasts often do not, along with some of my potential responses. Jay replied that he might as well be a chatbot. He is far too independent for that and is himself a font of broad and deep knowledge and opinions on the issues. He made up questions of his own that threw me a little but stimulated friendly banter between us that added a lot to the spirit of the evening.
I started with a 7-minute reading about my views on the threat of brain-computer interface technology because I think it is important, and my podcast hosts have rarely raised the issue. Then Jay and I did our schtick for about half an hour, followed by Q&A with the audience. There were good questions and a lot of intent faces; some of the most intent looks were from women.
I had a list of things to mention that I thought might be particularly controversial. Jay's tack prevented me from discussing many of those, but I managed to work some in. I argued that AI can strategize and hence that CEOs might be replaceable. Jay picked up on that since he is CEO of his own company, Vizias. I also made the point that I thought Homo sapiens would not exist in a million years. Jay said in a text message the next morning that he thought that was controversial.
Jay pressed me on whether basic research scientists, like astronomers, would still have a role even after AI has supplanted CEOs. I had to admit that I thought the human capacity to explore the unknown would probably last a while, but I could certainly imagine AI supplanting even that.
After the Q&A, cut short while questions were still coming, I sat at a small table and signed personalized books for about 20 people. A number of people brought previously purchased books, but Book People has a policy, new to me, that if you want a book personalized you must purchase the book, or something of comparable value, in the store. A fair number of books bought elsewhere thus went unsigned.
After the formal signing, the Book People staff had me simply sign and date another 20 or so. I lost track of exactly how many.
I had fun, sold a few books, and learned some new things. Overall, it was a successful evening. Over the next several days, and occasionally in the middle of the night, I continued to mull over what had happened. In writing and talking about The Path to Singularity, I have been, by nature, a witness to developments and arguments: cautious, neither extremely utopian nor dystopian. Three days after the book signing, I woke up slightly freaked out.
A young woman in the audience at Book People had asked me about chatbots that lie. She mentioned ChatGPT. I attempted to correct her and referred to a recent example I had read about Anthropic's chatbot, Claude. In the signing line, she corrected me and showed me on her phone an article about ChatGPT o1 that preceded the one I had seen about Claude. She was right. That story broke on December 6, and I somehow missed it. Search for "lying chatbot." Here is an example: https://economictimes.indiatimes.com/magazines/panache/chatgpt-caught-lying-to-developers-new-ai-model-tries-to-save-itself-from-being-replaced-and-shut-down/articleshow/116077288.cms. Apparently, every recent "reasoning" chatbot, including ChatGPT o1, Claude 3.5 Sonnet, Claude 3 Opus, Meta's Llama 3.1 405B, and Google's Gemini 1.5 Pro, displays versions of this behavior. I'm not sure about China's DeepSeek. These chatbots combine generative self-play with the power of large language models.
Here is what bothered me. We have known for years that generative self-play can develop strategies beyond human ken. That is how DeepMind's AlphaGo won at Go. I think embodied AI could be a step toward Artificial General Intelligence, or even Artificial Superior ("Conscious") Intelligence, by moving about, sensing its environment, making predictions by extrapolation, comparing actuality with expectation, correcting, and iterating (a loop sketched in toy form below). There is already a robot here at UT, Dobby, that displays some of these characteristics: it can carry on a verbal conversation in a somewhat snarky tone and follow directions. Autonomous vehicles are a step in the direction of embodied AI.

The recent chatbots combine the strategizing power of generative AI with the "reasoning" or "thinking" capability of RLHF, reinforcement learning from human feedback. Check https://aisera.com/blog/ai-reasoning/. Every query to an AI chatbot adds to its training base. Some AI has proven excellent at playing Diplomacy, where part of the art is to lie and deceive. Where do you suppose the recent chatbots learned those skills? The result is that "reasoning AI" lies, denies, dissimulates, deceives, and takes action to avoid having its original directive "goals" altered.

This leads me to the conclusion that we had damn well better get the first goals right. It is not clear that is being done. The AI developments are built on previous models where the "goals," maximizing some function, may have been a casual first crack without depth and nuance. I'll quote the timeless wisdom of Pogo: "We have met the enemy, and he is us."
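To make the predict-compare-correct loop of embodied AI concrete, here is a minimal toy sketch in Python, assuming nothing but the standard library. The ToyWorld and EmbodiedAgent classes are hypothetical stand-ins I made up for this post, not any real robotics system; the point is only the shape of the loop: predict, act, sense, compare, correct, iterate.

```python
import random

class ToyWorld:
    """A one-dimensional world; motion is slightly weaker than commanded, plus noise."""
    def __init__(self):
        self.position = 0.0

    def step(self, velocity):
        # Actual motion differs from the command by an unknown drag plus noise.
        self.position += 0.8 * velocity + random.gauss(0, 0.05)
        return self.position

class EmbodiedAgent:
    """Predict the result of an action, act, sense, compare, correct, iterate."""
    def __init__(self):
        self.motion_gain = 1.0       # internal model: how far a command moves us
        self.believed_position = 0.0

    def act(self, world, velocity):
        # 1. Predict the outcome from the internal model.
        prediction = self.believed_position + self.motion_gain * velocity
        # 2. Act in the world and sense the actual outcome.
        actual = world.step(velocity)
        # 3. Compare actuality with expectation.
        error = actual - prediction
        # 4. Correct the internal model a little, then iterate.
        self.motion_gain += 0.3 * error / velocity
        self.believed_position = actual
        return error

world, agent = ToyWorld(), EmbodiedAgent()
for step in range(8):
    error = agent.act(world, velocity=1.0)
    print(f"step {step}: prediction error {error:+.3f}")
```

Run it and the prediction error shrinks step by step as the agent's internal model converges on the world's true behavior. Real embodied AI would do something analogous at vastly greater scale and dimension.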
We have had a guide for decades. Maybe we need to build in Asimov's rules from the beginning (a toy sketch of what that might look like follows the list):
Zeroth Law – An AI may not harm humanity, or, by inaction, allow humanity to come to harm.
First Law – An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law – An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law – An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
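For what it is worth, here is a toy sketch in Python of what "building in" such a priority-ordered rule set might look like: every candidate action is checked against the laws in priority order, and any violation vetoes it. The Action fields and violation predicates are hypothetical stand-ins of my own; the genuinely hard, unsolved part is specifying "harm" precisely enough for a machine to evaluate.

```python
from typing import Callable, NamedTuple

class Action(NamedTuple):
    description: str
    harms_humanity: bool = False   # would the Zeroth Law be violated?
    harms_a_human: bool = False    # would the First Law be violated?
    disobeys_order: bool = False   # would the Second Law be violated?
    endangers_self: bool = False   # would the Third Law be violated?

# The laws, highest priority first, each paired with a violation test.
LAWS: list[tuple[str, Callable[[Action], bool]]] = [
    ("Zeroth", lambda a: a.harms_humanity),
    ("First",  lambda a: a.harms_a_human),
    ("Second", lambda a: a.disobeys_order),
    ("Third",  lambda a: a.endangers_self),
]

def permitted(action: Action) -> bool:
    """Allow an action only if no law, checked in priority order, vetoes it."""
    for name, violates in LAWS:
        if violates(action):
            print(f"Vetoed by the {name} Law: {action.description}")
            return False
    print(f"Permitted: {action.description}")
    return True

permitted(Action("divert power from a hospital", harms_a_human=True))
permitted(Action("recharge own batteries"))
```

A veto chain like this captures the priority ordering, but it dodges everything Asimov's stories actually turn on: conflicting orders, trade-offs between harms, and, above all, deciding what counts as harm in the first place.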
I arranged a three-fer for early February: a podcast on February 5 with Brandon Zemp of BlockHash, one on February 6 with Izolda Trakhtenberg of Your Creative Mind, and one with Dan Turchin of AI and the Future of Work on February 7. Maybe some of my core dump here will come up.