
Playing Author

To Write or Not To Write

For a while in May I contemplated making a serious run at soliciting keynote speaking opportunities. My agent, Regina Ryan, and my independent publicist, Joanne McCall, both pointed out that many authors make their real income in that way, not from book sales. A woman at an Austin Forum on Science and Society meeting pointed out that doing keynotes is a "real job." I ended up voting with my body. I turned back to writing on my father's biography. It's always been writing that centers me. Still, if another keynote opportunity fell into my lap, I would pursue it. Just saying.

 

On May 7, I attended a dinner meeting of the Austin Forum Board, of which I am a lowly member. I fell into an interesting conversation with William Fitzgerald and Stephanie Scales of Bárd, a technical writing consulting company. They are trying to compile a catalog of human intelligence, which they have provisionally titled "Human Documentation." At a previous board meeting, I had teased William by saying that Human Documentation was a rather meh title. I rashly promised to come up with a better one. At this dinner, William teased me back, pointing out that I had not done so. A long discussion ensued.

 

That night, I awoke in the middle of the night with various thoughts racing through my sleep-addled brain. I thought that a catalog of human intelligence does not capture the breadth and depth of the topic. In pondering this, it seemed to me that Homo sapiens is a waypoint, not the end, of human intelligence. One can consider where and how human intelligence will go in the future, by pure biological evolution or by melding with machines. It roiled in my head that while a catalog of human intelligence is not an infinitesimal point, it is a very small dot in the continuum of intelligence that begins with stromatolites and bacteria and continues to plants, trees, animals, humans past and present, humans beyond in the future, and other biological intelligence: extraterrestrials of all sorts, from mere biomarkers far less intelligent than we are to the possibility of hugely advanced biological intelligence and biological/machine melds. How, my sleepy mind asked, can one establish clear boundaries between human and "other" intelligence? What is the difference between machine ASI (artificial superintelligence) and biological ASI? That led me to sleepily ponder the question of the meaning of human. Human as opposed to what? "Inhuman" does not intrinsically mean evil but could encompass alien as well as machine. I also found myself thinking about the relationship between "intelligence" and "creativity." Creativity seems to involve thinking things that have never been thought before, but of course much creativity involves extrapolating things that have been thought or done before. How, I asked myself, do you encompass art in the context of intelligence? A popular exercise is to think of things that humans do that machines cannot, an increasingly small set. Machine thinking may involve things that no human can do or has done. Already we have machines that can strategize in a manner that no human has done or can do. Prime examples are the products of DeepMind like AlphaGo Zero or AlphaFold. I fuzzily concluded that the dimensions of intelligence are huge, spanning less than, comparable to, and greater than current human intelligence, and that there is diversity even among humans. I found myself conflating intelligence, thinking, and creativity, never mind consciousness.

 

What a jumble.

 

I wrote a summary of this sleep-infested core dump to William and Stephanie the next day. Who knows what they will make of it? What I did not do was come up with a better name than "Human Documentation."

 

On May 19, I finally formally registered my novels, The Krone Experiment and Krone Ascending, along with The Path to Singularity, with Created by Humans. Created by Humans is an organization that promises to handle licensing, ensuring that some sort of royalty is paid by firms that use an author's work to train their LLMs. I don't know whether this will work or not, but it seemed a useful experiment. I had vetted the notion of registering with Created by Humans with Regina Ryan. The registration process required some to-ing and fro-ing by email, but I got it done.

 

On May 29, I participated in another Austin Forum book discussion, this time on Reid Hoffman's new book, Superagency. Hoffman is a tech titan who co-founded LinkedIn. He has an optimistic view of what AI will do for humanity, as long as we avoid all the existential threats.

 

I spent most of my writing time in May working on my father's biography. I discovered a bunch of correspondence dating back to the mid-1920s and am trying to incorporate it into what I've already written of that era, up into the 1950s when he witnessed the first hydrogen bomb test, Ivy Mike. One challenge has been the correspondence from my beloved Grandmother Wheeler, Vernie. Vernie had the charming but frustrating habit of dating her letters with just the day of the week. A typical entry would be "Sat. P.M." I engaged in considerable detective work using other correspondence and the text and context of her mail to see where each letter fit chronologically. One letter was sent on a Tuesday after she returned from voting. I checked the calendar. Aha! General elections always happen on a Tuesday in November, and I deduced we were talking about the midterm elections of November 3, 1942. I went on to other things, but this rattled around in my head. There were some things that didn't quite fit. Finally, I went back and realized that she was talking about Tuesday, November 3, 1936. I'd been off by six years.

 

I'm posting examples of technology advances every weekday on X and LinkedIn, continuing my quest to document the exponential growth of technology. Spoiler alert: it's still growing.

 


That Was the Week That Was

The last week of April was packed with various activities.

 

Chloé Hummel, my publicist at Prometheus/Globe Pequot, emailed that she was moving on, as ambitious young women in the book business are wont to do. I enjoyed working with her and wish her luck. We were just starting a project to try to promote bulk sales to companies. I waited a decent interval to see if Prometheus would provide a new publicist, then wrote my editor. No response. After a month, I wrote to my only other contact, a fellow in production. He did not know the situation but looped in a marketing director. It has been another couple of weeks. No response from anyone. My book is six months old, there is a new season of titles, and I'm being dropped.

 

I got a wonderful note from Neil deGrasse Tyson saying that I had a standing invitation to be on his podcast, StarTalk, if I were ever in New York. I replied that I would get myself there if we could line up a time. I'm awaiting that development.

 

Before I retired, I was a member of The University of Texas at Austin Academy of Distinguished Teachers. I still attend their weekly conversational lunches when I can. The Academy sponsors a program called Reading Roundup wherein faculty meet with incoming freshmen just before the start of their first term to discuss a book chosen by the Academy member. The seeds of The Path to Singularity were planted in such a get-together, as described in the preface. I stopped doing Reading Roundup when I retired, but when I got the invitation this year, I realized that it would be great fun to talk about The Path to Singularity, so I signed up to do so in the fall. I'll report on that in a future post.

 

In an interesting surprise, I received an email from Juan Serinyà, Chief Technology Officer of Tory Technologies, a Houston company that writes control room management software, primarily for the petroleum business, with clients in the US, Brazil, Colombia, and elsewhere. Juan has Catalonian roots, was trained in Venezuela, and has been in the US for 30 years. He was in Austin for a conference and ran across The Path to Singularity in our independent bookstore, BookPeople, a remnant of my doing a book signing there. Juan said he was interested in the topics of my book and wondered if I might be willing to give a keynote address at his client meeting in August. Hey! Is the Pope Italian? Despite the prospect of Houston in August, I replied with an enthusiastic yes. He asked about my fee. I have never done such a thing, but recognizing that while Neil deGrasse Tyson is a friend of mine, I'm no Neil deGrasse Tyson, I named a number that seemed neither embarrassingly small nor overambitious. Juan said, "We can handle that." I should have asked for more. We've signed a contract that spells out what Juan would like to hear me talk about, and that is exactly what I would like to say. They will pay my expenses and agreed to cover the cost of a rental car and the time of my son, Rob, to drive me, the equivalent of an Uber. I'm shy of driving long distances by myself these days. They will set up a table where I can sell and sign books. There will be 50 clients, so I'm trying to figure out how many books I'd need. I'm exploring getting a Houston bookstore to provide the books and handle the sales, and romancing the notion of setting up a book signing at an independent bookstore in Bastrop, which is on the way to Houston from Austin. I'm really looking forward to it, including gently raising climate issues with a bunch of oil people. I'll do a blog post on that when it happens.

 

I went to a talk by Dr. Aubra Anthony, a Senior Fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. She spoke on "responsible AI," asking "responsible on whose terms?" She stressed the cultural differences around the world that complicate the topic, pointing out that LLMs developed in the Global North might not be entirely appropriate in the Global South.

 

I attended a Zoom book discussion sponsored by the Austin Forum on Technology and Society. The book was Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future by Hamilton Mann. This book also addressed cultural differences with regard to AI and social issues, arguing that AI integrity involves culture and is context-dependent, and that given the complexity of both machines and people, perfection is hard to reach. The author advised accepting that society will lag behind the technology and being practical about what is most doable in policy and regulation, given that perfection will not be possible. The goal should be minimizing the severity of the technology/society dislocation. He called for avoiding systems that can manipulate and deceive. To that I say, too late! Recent LLMs lie and deceive. I advocated the Golden Rule for AI I invented for The Path to Singularity: "Do unto AI as you would have it do unto you." The author asked how to prevent malicious use of AI but did not answer the question directly. A small technical quibble: the author claimed that the global market size for AI is expected to be $2,575.16 billion by 2032. Six significant figures? Really?

 

I read a longish online essay by Dario Amodei, the co-founder and CEO of the AI company Anthropic, which produced the LLM Claude. The essay covers many of the same topics I do in The Path to Singularity, but with interesting, complementary insights. You might find the second section on neuroscience and mind especially interesting. I also started reading a long, amusing, cartoon-illustrated presentation on why Elon Musk created his brain/computer interface company, Neuralink. I bogged down despite being entertained and even a little educated. I need to get back to it.

 

I joined an online MIT-sponsored webinar with Sherry Turkle. She discussed the issues with having chatbot friends. She regards this as an existential threat, arguing that children developing their own sense of empathy should not use chatbots that have no true inner life. People have an inner life; chatbots don't. Among her admonitions and declarations: Don't make products that pretend to be a person. Require, or at least request, engineers to write a memoir to connect them to their own inner life. No good therapist asks a patient, "Are you happier after our interaction?" as chatbots do. Be critical of the metrics used to measure chatbot use. The effect on civil society: terrible, terrible, terrible. An algorithm built to make people angry and keep them with their own kind could not be worse. As for guardrails, companies instead invite people to invent their own AI. Pretend empathy is not empathy. Chatbots don't have a body, don't feel pain, don't fear death. Chatbots are alien. Not human. The woman has opinions. I share many of them.

 

What a week that was!


Using ChatGPT

Retirement continues to be a golden time of calm and relaxation (not!).

 

I've settled into a regular schedule of posting a tidbit on a Tech Advance first thing every morning to illustrate the exponential change of technology. I introduced this practice in my class on The Future of Humanity by asking students to bring examples to each class. In my current mode, I keep notes with links to items when I read them in the NYT, the Austin American-Statesman, the MIT Technology Review, other magazines, or online. Each morning, I transfer a new one from my notes to my website, then use the free ChatGPT to draft posts for X and LinkedIn. The drafts usually need a little editing, especially for X so that the post fits in 280 characters, but, with luck, the whole process only takes about 10 minutes. I still have not opened an account on Bluesky.
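For the curious, here is a rough sketch of what that drafting step would look like if it were automated with the OpenAI Python library instead of the free web interface I actually use. The model name and prompt wording are purely illustrative:

```python
# A minimal sketch, assuming the OpenAI Python library and an illustrative
# model name. In practice I paste each item into the free ChatGPT by hand.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
X_LIMIT = 280      # X's character limit


def draft_post(tidbit: str, link: str) -> str:
    """Draft a short X/LinkedIn post about one tech-advance item."""
    prompt = (
        f"Draft a post of fewer than {X_LIMIT} characters about this "
        f"technology advance. End with the link.\n\nItem: {tidbit}\nLink: {link}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    draft = response.choices[0].message.content.strip()
    # Models do not reliably respect length limits, so check explicitly
    # and fall back to hand editing if the draft runs long.
    return draft if len(draft) <= X_LIMIT else draft[:X_LIMIT]
```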

 

A week after the provost's reception for university authors I mentioned in the previous blog, I went by the provost's office about 4:30 in the afternoon to pick up the copy of The Path to Singularity that had been displayed at the reception. I thought I was there comfortably before closing, but the door to the provost's suite was locked even though I could see a receptionist inside the heavy wooden and glass doors. I let out a not-so-sotto-voce oath. After a frustrated moment, a door to my left rattled and out came the provost from a restroom therein. Amazingly enough, she recognized me from the reception, at least the mustache if not the name, and gave me a friendly greeting. I explained my dilemma, and she said she could fix that and promptly wielded a key to gain entrance. She asked the receptionist to fetch my book from a back room, and off I went. The university appointed a new president the next week, and the provost was promptly let go. Presumably, the new guy wanted his own provost. I only met her those two times in the five years she was provost.

 

On 2/12/25, I listened in on a Zoom webinar sponsored by the Authors Guild on opportunities to prevent books from being scanned for AI training without compensation. The Authors Guild has created a sticker labeled "Human Authored" that can be designed into or affixed to the covers of books. The notion is to provide a mark of literary authenticity that will certify human creativity in an increasingly AI world. The Authors Guild also has a draft clause for publishing contracts that prohibits AI training uses without permission. This webinar was designed to introduce the partnership between the Authors Guild and a new startup called Created by Humans that proposes to license books and negotiate compensation for authors who agree to have their work used for AI training. I've asked my agent for The Path to Singularity, Regina Ryan, to consider this, and she will consult with fellow agents. She says, "It's brand-new territory!" I'll try to register my novels, The Krone Experiment and Krone Ascending, since I own their rights myself. I have tried asking ChatGPT (again, the free version) and Claude questions about somewhat obscure characters that one would have to have read or scanned the books to know, and they both gave wishy-washy answers. That suggests, but doesn't prove, that they have not (yet) been scanned and ingested in some way.
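For what it's worth, here is roughly the shape of that probe, sketched with the public OpenAI and Anthropic Python libraries. I actually just typed my questions into the free web interfaces; the model names and the sample question are illustrative, and a wishy-washy answer is only weak evidence either way:

```python
# A minimal sketch, assuming the openai and anthropic Python libraries and
# illustrative model names. Vague answers suggest, but do not prove, that a
# book was not in the training data.
from openai import OpenAI
import anthropic

QUESTION = (
    "In the novel The Krone Experiment, what role does a minor character "
    "play in the plot?"  # a real probe would name a specific obscure character
)

openai_client = OpenAI()                   # reads OPENAI_API_KEY
anthropic_client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY

gpt_reply = openai_client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": QUESTION}],
).choices[0].message.content

claude_reply = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative
    max_tokens=300,
    messages=[{"role": "user", "content": QUESTION}],
).content[0].text

print("ChatGPT:", gpt_reply)
print("Claude:", claude_reply)
```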

 

On 2/18/25, I had an outpatient treatment to shock my heart out of atrial fibrillation and back into regular rhythm. All went smoothly. Loath to miss an opportunity, I took a small bunch of my business cards advertising The Path to Singularity and handed them out irregularly to attendants and nurses. I think I may have sold at least two books. One was to a young Vietnamese nurse who did basic prep work and was especially interested. She expressed anxiety about AI but didn't know what to do about it. Another was an older nurse with a mild southern accent in the cardiac unit who expressed similar feelings: anxiety and uncertainty. I told her that Path was a primer designed for people like her and urged her to be "aware." I was going to lobby the doctor who did the procedure, but they knocked me out and I came to in the recovery room without ever seeing him. On the way home in the afternoon, it occurred to me that hospital staff might represent an untapped market for the book: intelligent, technically oriented, curious, caring people. I don't know an efficient way to reach them, but I'm open to suggestions.

 

On Saturday morning, 2/22/25, the dean of natural sciences held a donor reception. My wife and I had given the university funds for a small graduate student fellowship this year. By this time, the dean was no longer dean, but a one-day-old interim provost, having been appointed to replace the previous provost (see above). He is a very good guy, the son of an astronomy colleague, but still. Once again in shameless shill mode, I handed out a few book business cards.

 

On 2/25/25, I sat in on a book discussion sponsored by the Austin Forum on Technology and Society. The discussion leader was Geoff Woods, speaking on his own book, The AI-Driven Leader: Harnessing AI to Make Faster, Smarter Decisions. Woods advocated a particular way of using an LLM to address problems. He called it Context, Role, Interview, and Task, with the acronym CRIT. His notion was that a user should not just ask the AI a question but should give it context and assign it a role emulating an appropriate kind of problem solver. The critical step, according to Woods, was to then have the AI interview the user and set "non-obvious" tasks for the user. That did seem novel yet easy to implement. The next morning, I submitted to ChatGPT the following prompt:

 

#CONTEXT# I'm an author in Austin, Texas, a retired academic, trying to write a new book, promote a current one on the technological future of humanity (prometheusbooks.com/9781493085439/the-path-to-singularity/), write occasional blogs, maintain a website (jcraigwheeler.ag-sites.net/disc.htm), and post daily "tech advances" on X and LinkedIn calling attention to the exponential growth of technology. I've done 13 podcasts on the current book, a book signing, a couple of public appearances and applied to a couple of book festivals. I can't do it all at once, and work on my current book has fallen way behind. I'm limited in my ability to travel. My current publisher has a publicist who is helpful but limited in what she can do. Same for my agent. I'm not sure my website is as effective as it might be. I have little time or talent for SEO. I'm posting blogs on my personal website, not a nationally recognized hosting platform. I can't afford to hire an aide. #ROLE# You are a self-help expert with expertise in SEO, publicity, book writing, and promotion. #INTERVIEW# Interview me, ask me one question at a time up to 5 questions to gain deeper context. #TASK# Your task is to generate 5 non-obvious strategies I could employ to make my life easier and more productive.
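For anyone who wants to reuse the CRIT pattern, here is a minimal sketch of a prompt builder. The helper function and its arguments are hypothetical (I simply typed the prompt above into the free ChatGPT), but it captures the Context, Role, Interview, Task structure Woods described:

```python
# A minimal sketch of a CRIT (Context, Role, Interview, Task) prompt builder.
# The helper and its arguments are hypothetical; the structure follows
# Geoff Woods's framework as I understood it from the discussion.
def build_crit_prompt(context: str, role: str, task: str,
                      max_questions: int = 5) -> str:
    """Assemble a CRIT-style prompt for an LLM chat interface."""
    return (
        f"#CONTEXT# {context}\n"
        f"#ROLE# You are {role}.\n"
        f"#INTERVIEW# Interview me, asking one question at a time, "
        f"up to {max_questions} questions, to gain deeper context.\n"
        f"#TASK# {task}"
    )


# Example use, loosely echoing my own prompt:
prompt = build_crit_prompt(
    context="I'm a retired academic and author promoting one book while trying to write another.",
    role="a self-help expert with expertise in SEO, publicity, book writing, and promotion",
    task="Generate 5 non-obvious strategies I could employ to make my life easier and more productive.",
)
print(prompt)
```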

 

ChatGPT summarized my prompt and posed five questions, allowing me to answer each before proceeding to the next question. ChatGPT then produced five "non-obvious" suggestions:

1. Target Influencers Within Companies via LinkedIn Automation

2. Outsource Your Book's Promotion to Micro-Influencers with a Twist

3. Digital Outreach Campaign with Email & LinkedIn Automation

4. Hire a Virtual Assistant for a One-Time Setup of Automation

5. Repurpose Content for Broader Platforms

 

Each suggestion had an associated strategy and a short explanation of why it works. At the end ChatGPT asked, "Would you like more details on any of these strategies, or help finding the right freelancers/tools to implement them?" I said "yes," and it spit out three pages of elaborations including suggestions of specific software programs that were all new to me. It then said, "If you're ready to explore any of these strategies, I can help you find a specific freelancer or walk you through the tools more in-depth. For example, I can assist with selecting the right virtual assistant, or help you get started with LinkedIn automation tools. Would you like me to assist with that, or perhaps help prioritize the strategies based on your immediate goals?" Again, I typed "yes," and it produced six more pages of yet more detailed suggestions. I think all this is doable, but I was overwhelmed and set it aside. Maybe someday. I'll need to take it a step at a time.

 

On Friday evening, 2/28, and Saturday morning, 3/1, I attended the semi-annual meeting of the department and observatory Board of Visitors. The BoV is a group of about 200 people of some means and often political influence who enjoy engaging with astronomers and working on behalf of our enterprise. Once again, it was an opportunity for some more shameless shilling. On both days, I put out a copy of The Path to Singularity on a book holder along with a small pile of the associated business cards. I also handed the cards to anyone who I thought might be interested. I might have sold a few books. A few people had already purchased one.


Hat Trick

Poodle-chewed book

 

A friend of mine, Elaine Oran, bought a copy of The Path to Singularity. Her poodle, Cooper, got to it first. I think he enjoyed it.

 

I pulled off a hat trick in early February, three back-to-back podcasts on Wednesday, Thursday, and Friday (February 5, 6, 7), plus a reception Thursday afternoon.

  

The reception was an annual event hosted by the university provost to celebrate faculty authors. There were about 50 authors, although I think fewer than that attended. My book was on the left rear of an array of four tables, the only one from the College of Natural Sciences. Mine was also the only one accompanied by the little business cards that my agent Regina Ryan suggested I make up, which I set out when I arrived. Thanks to the cards, I think I sold a few books. I met the provost, chatted with the vice president for research, astronomy colleague Dan Jaffe, and a half dozen other authors, one of whom was a Hispanic woman, K. J. Sanchez, a playwright. She has written a play about a female astronaut who is stranded on the Moon. I'll try to attend a performance of that in the spring. I also chatted with Bret Anthony Johnston, a writer at the UT Michener Center, whose latest novel We Burn Daylight is based on the 1993 federal siege of the Branch Davidian compound in Waco. He took one of my cards.

 

Between preparing (drafting answers to pre-posed questions) and following up, which took some time, the three podcasts were a bit taxing, but they all went well. The hosts enjoyed the conversations, as did I. All had scheduled about 30 minutes, and we instead ran for 50 to 65 minutes; all asked to have me back. The themes are, of course, all similar, but each host had interesting variations, and I learned some new things.

 

The first, on Wednesday, was with Brandon Zemp on the BlockHash podcast, arranged by my Prometheus publicist, Chloé Hummel; https://tinyurl.com/4fwbj9vr. This was done with StreamYard, a program I had not used before. I shrank the whole screen and moved it up near my camera, removed my glasses, and put the mic front and center. I'm getting the hang of this. Brandon is a young American working in Mérida, Colombia. He'd read the whole book, and we touched on jobs, AI ethics, strategizing, lying chatbots, brain-computer interfaces, designer babies, and the space program. We talked about AGI, and I worked in the notion that things are changing so fast that we are entering a new phase of humanity in which we cannot adapt to our new technology. I also brought in the notion of strategizing, lying, deceitful chatbots. We talked a bit about brain-computer interfaces, and I warned against developing a hive mind that would lose the organic power of independent minds thinking independently. This might be an issue for AI as well, it occurred to me, if they all link together. We talked about designer babies and seeking the cure to aging and possible downsides that need to be carefully thought about. I made my case that there will be no Homo sapiens in a million years, or much less. Brandon got that argument. I repeatedly called for "strategic speculation" to anticipate issues. I meant to add "to avoid unintended consequences," but forgot to. We talked about Musk's goals of cities on Mars. I said I was sure we would become an interplanetary species (Homo europa? Homo vacuo?), but I was less sure about becoming an interstellar species because of the limit of the speed of light. I wanted to talk about whether AI can hold patents, own companies, and vote, but we didn't get to that. See below. Brandon threatened to ask about my favorite chapter but didn't. I was ready to bluster that it was like asking me to name my favorite child, but I would have picked Chapter 2 on the nature of exponential growth; it is so fundamental. Brandon had an interesting story about humanoid robots. He noted that when people were seen mistreating these robots, other empathetic people got very upset on the robots' behalf. Our tendency to bond with our machines (Tickle Me Elmo) is an interesting related issue. In this case, I got to invoke the slogan I had invented in the book, "Do unto AI as you would have it do unto you." I hadn't known quite what I meant by that, but Brandon gave me a nice AI ethics context, flipping the normal script of AI alignment. Be nice to AI.

 

Thursday was Izolda Trakhtenberg of Your Creative Mind, again arranged by Chloé. Izolda had an interesting story of her thinking of purchasing an item, but telling no one, and then finding ads for the item appearing in her feed. A very effective predictive algorithm? I'm still thinking about that. I told her Zemp's story of the maltreated robot and empathetic response, and we talked about two-way AI ethics. She'll post the podcast in late March or early April.

 

Friday was Dan Turchin of AI and the Future of Work, a holdover arranged by publicist Joanne McCall. Dan claims to have an audience of a million people, not just in total over 300 episodes, but per episode. I'll believe that when we sell books to 1% of them: 10,000 copies. Dan requested that I rate and review an old podcast of his. He says it will improve the discoverability of my episode. I tried to do this but got tangled up over access to where and how to post comments on Apple Podcasts and Spotify. Dan advocated the notion that "employment is dead," that rather than a top-down, rigid management structure, jobs will be more voluntary, subject to "snapshot voting" and open to "wisdom of the crowd" procedures. I'm still thinking about that. It's apparently a Millennial thing. It reminds me of the approaches that Minister of Digital Affairs Audrey Tang brought to the functioning of democracy in Taiwan. Dan was also sure that we would see "AI citizenship" before we saw people on Mars. I voted the other way. Dan said in a later email that he is not in favor of AI citizenship and is himself opposed to technology that blurs the human-machine boundary, but that he knows a "small cadre of ethicists and attorneys who are advocating for bot rights." I'd made a speculative extrapolation to AI voting in The Path to Singularity. We have not heard the end of this issue. Dan says he will post his podcast in about eight weeks.

 

I've now done thirteen podcasts, a radio program, and a book signing. Whew! Not sure I've sold many books.

 
