
Playing Author

That Was the Week That Was

The last week of April was packed with various activities.

 

Chloé Hummel, my publicist at Prometheus/Globe Pequot, emailed that she was moving on, as ambitious young women in the book business are wont to do. I enjoyed working with her and wish her luck. We were just starting a project to try to promote bulk sales to companies. I waited a decent interval to see if Prometheus would provide a new publicist, then wrote my editor. No response. After a month, I wrote to my only other contact, a fellow in production. He did not know the situation but looped in a marketing director. It has been another couple of weeks. No response from anyone. My book is six months old, there is a new season, and I'm being dropped.

 

I got a wonderful note from Neil deGrasse Tyson saying that I had a standing invitation to be on his podcast, StarTalk, if I were ever in New York. I replied that I would get myself there if we could line up a time. I'm awaiting that development.

 

Before I retired, I was a member of The University of Texas at Austin Academy of Distinguished Teachers. I still attend their weekly conversational lunches when I can. The Academy sponsors a program called Reading Roundup, wherein faculty meet with incoming freshmen just before the start of their first term to discuss a book chosen by the Academy member. The seeds of The Path to Singularity were planted in such a get-together, as described in the preface. I stopped doing Reading Roundup when I retired, but when I got the invitation this year, I realized that it would be great fun to talk about The Path to Singularity, so I signed up to do so in the fall. I'll report on that in a future post.

 

In an interesting surprise, I received an email from Juan Serinyà, Chief Technology Officer of Tory Technologies, a Houston company that writes control room management software, primarily for the petroleum business, with clients in the US, Brazil, Colombia, and elsewhere. Juan has Catalonian roots, was trained in Venezuela, and has been in the US for 30 years. He was in Austin for a conference and ran across The Path to Singularity in our independent bookstore, Book People, a remnant of my doing a book signing there. Juan said he was interested in the topics of my book and wondered if I might be willing to give a keynote address at his client meeting in August. Hey! Is the Pope Italian? Despite the prospect of Houston in August, I replied with an enthusiastic yes. He asked about my fee. I have never done such a thing, but recognizing that while Neil deGrasse Tyson is a friend of mine, I'm no Neil deGrasse Tyson, I named a number that seemed neither embarrassingly small nor overambitious. Juan said, "We can handle that." I should have asked for more. We've signed a contract that spells out what Juan would like to hear me talk about, and that is exactly what I would like to say. They will pay my expenses and agreed to cover the cost of a rental car and the time of my son, Rob, to drive me, the equivalent of an Uber. I'm shy of driving long distances by myself these days. They will set up a table where I can sell and sign books. There will be 50 clients, so I'm trying to figure out how many books I'd need. I'm exploring getting a Houston bookstore to provide the books and handle the sales, and romancing the notion of setting up a book signing in an independent bookstore in Bastrop, which is on the way to Houston from Austin. I'm really looking forward to it, including gently raising climate issues to a bunch of oil people. I'll do a blog on that when it happens.

 

I went to a talk by Dr. Aubra Anthony, a Senior Fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. She spoke on "responsible AI," asking "responsible on whose terms?" She stressed the cultural differences around the world that complicate the topic, pointing out that large language models (LLMs) developed in the Global North might not be entirely appropriate in the Global South.

 

I attended a Zoom call book discussion sponsored by the Austin Forum on Technology and Society. The book was Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future by Hamilton Mann. This book also addressed cultural differences with regard to AI and social issues, arguing that AI integrity involves culture and is context dependent, and that given the complexity of both machines and people, perfection is hard to reach. The author advised accepting that society will lag the state of the technology and being practical about what is most doable in policy and regulation, given that perfection will not be possible. The goal should be minimizing the severity of the technology/society dislocation. He called for avoiding systems that can manipulate and deceive. To that I say, too late! Recent LLMs lie and deceive. I advocated the Golden Rule for AI I invented for The Path to Singularity: "Do unto AI as you would have it do unto you." The author asked how to prevent malicious use of AI but did not answer the question directly. A small technical quibble: the author claimed that the global market size for AI is expected to be $2,575.16 billion by 2032. Six significant figures? Really?

 

I read a longish online essay by Dario Amodei, the co-founder and CEO of the AI company Anthropic that produced the LLM AI, Claude. The essay covers many of the same topics I do in The Path to Singularity, but with interesting, complementary insights. You might find the second section on neuroscience and mind especially interesting. I also started reading a long, amusing, cartoon-illustrated presentation on why Elon Musk created his brain-computer interface company, Neuralink. I bogged down despite being entertained and even a little educated. I need to get back to it.

 

I joined an online MIT-sponsored webinar with Sherry Turkle. She discussed the issues with having chatbot friends. She regards this as an existential threat, arguing that children developing their own sense of empathy should not use chatbots that have no true inner life. People have an inner life; chatbots don't. Among her admonitions and declarations: don't make products that pretend to be a person; require, or at least request, engineers to write a memoir to connect them to their own inner life; no good therapist asks a patient "are you happier after our interaction?" as chatbots do; be critical of metrics of chatbot use; the effect on civil society is terrible, terrible, terrible; an algorithm that makes people angry and keeps people with their own kind could not be worse; as for guardrails, companies instead invite people to invent their own AI; pretend empathy is not empathy; chatbots don't have a body, don't feel pain, don't fear death; chatbots are alien, not human. The woman has opinions. I share many of them.

 

What a week that was!


Hat Trick

Poodle-chewed book

 

A friend of mine, Elaine Oran, bought a copy of The Path to Singularity. Her poodle, Cooper, got to it first. I think he enjoyed it.

 

I pulled off a hat trick in early February, three back-to-back podcasts on Wednesday, Thursday, and Friday (February 5, 6, 7), plus a reception Thursday afternoon.

  

The reception was an annual event hosted by the university provost to celebrate faculty authors. There were about 50 authors, although I think fewer than that attended. My book was on the left rear of an array of four tables, the only one from the College of Natural Sciences. Mine was also the only one accompanied by the little business cards that my agent Regina Ryan suggested I make up, which I set out when I arrived. Thanks to the cards, I think I sold a few books. I met the provost, chatted with the vice president for research, astronomy colleague Dan Jaffe, and a half dozen other authors, one of whom was a Hispanic woman, K. J. Sanchez, a playwright. She has written a play about a female astronaut who is stranded on the Moon. I'll try to attend a performance of that in the spring. I also chatted with Bret Anthony Johnston, a writer at the UT Michener Center, whose latest novel We Burn Daylight is based on the 1993 federal siege of the Branch Davidian compound in Waco. He took one of my cards.

 

Between preparing (drafting answers to pre-posed questions) and following up (which took some time), the three podcasts were a bit taxing, but they all went well. The hosts enjoyed the conversations, as did I. All had scheduled about 30 minutes, and we instead ran for 50 to 65 minutes; all asked to have me back. The themes were, of course, all similar, but each host had interesting variations, and I learned a few things.

 

The first, on Wednesday, was Brandon Zemp on the BlockHash podcast, arranged by my Prometheus publicist, Chloé Hummel; https://tinyurl.com/4fwbj9vr. This was done with StreamYard, a program I had not used before. I shrank the whole screen and moved it up near my camera, removed my glasses, and put the mic front and center. I'm getting the hang of this. Brandon is a young American working in Mérida, Colombia. He'd read the whole book, and we touched on jobs, AI ethics, strategizing, lying chatbots, brain-computer interfaces, designer babies, and the space program. We talked about AGI, and I worked in the notion that things are changing so fast that we are entering a new phase of humanity in which we cannot adapt to our new technology. I also brought in the notion of strategizing, lying, deceitful chatbots. We talked a bit about brain-computer interfaces, and I warned against developing a hive mind that would lose the organic power of independent minds thinking independently. This might be an issue for AI as well, it occurred to me, if they all link together. We talked about designer babies and seeking the cure to aging, and the possible downsides that need to be carefully thought about. I made my case that there will be no Homo sapiens in a million years, or much less. Brandon got that argument. I repeatedly called for "strategic speculation" to anticipate issues. I meant to add "to avoid unintended consequences," but forgot. We talked about Musk's goal of cities on Mars. I said I was sure we would become an interplanetary species (Homo europa? Homo vacuo?), but I was less sure about becoming an interstellar species because of the limits of the speed of light. I wanted to talk about whether AI can hold patents, own companies, and vote, but we didn't get to that. See below. Brandon threatened to ask about my favorite chapter but didn't. I was ready to bluster that that was like asking me to name my favorite child, but I would have picked Chapter 2 on the nature of exponential growth; it is so fundamental. Brandon had an interesting story about humanoid robots. He noted that when people were seen mistreating these robots, other empathetic people got very upset on the robots' behalf. Our tendency to bond with our machines (Tickle Me Elmo) is an interesting related issue. In this case, I got to invoke the slogan I had invented in the book, "do unto AI as you would have it do unto you." I hadn't known quite what I meant by that, but Brandon gave me a nice AI ethics context, flipping the normal script of AI alignment. Be nice to AI.

 

Thursday was Izolda Trakhtenberg of Your Creative Mind, again arranged by Chloé. Izolda had an interesting story: she thought about purchasing an item, told no one, and then found ads for the item appearing in her feed. A very effective predictive algorithm? I'm still thinking about that. I told her Zemp's story of the maltreated robot and the empathetic response, and we talked about two-way AI ethics. She'll post the podcast in late March or early April.

 

Friday was Dan Turchin of AI and the Future of Work, a holdover arranged by publicist Joanne McCall. Dan claims to have an audience of a million people, not just in total over 300 episodes, but per episode. I'll believe that when we sell 1% of that, 10,000 books. Dan requested that I rate and review an old podcast of his; he says it will improve the discoverability of my episode. I tried to do this but got tangled up over access to where and how to post comments on Apple Podcasts and Spotify. Dan advocated a notion that "employment is dead," that rather than a top-down, rigid management structure, jobs will be more voluntary, subject to "snapshot voting" and open to "wisdom of the crowd" procedures. I'm still thinking about that. It's apparently a Millennial thing. It reminds me of the approaches that Minister of Digital Affairs Audrey Tang brought to the functioning of democracy in Taiwan. Dan was also sure that we would see "AI citizenship" before we saw people on Mars. I voted the other way. Dan said in a later email that he is not in favor of AI citizenship and is himself opposed to technology that blurs the human-machine boundary, but that he knows a "small cadre of ethicists and attorneys who are advocating for bot rights." I'd made a speculative extrapolation to AI voting in The Path to Singularity. We have not heard the end of this issue. Dan says he will post his podcast in about eight weeks.

 

I've now done thirteen podcasts, a radio program, and a book signing. Whew! Not sure I've sold many books.

 
