Is AI Hot?

The answer is probably no, but please still read the newsletter?

Unless you’ve been living in a cave, you’ve likely noticed that recent developments in machine learning have the printed letters A and I on everyone’s lips. Is it moving too fast? Not fast enough? Will the kids use it to write their essays? Will I enjoy reading an AI-generated novel or painting? Will I have a choice? Are they really smart enough to perform better than I once did on the SAT? Will it be the end of the world as we know it? But Sydney! The machines will take over! They will bend us to their will!

Like most important technological developments, though perhaps accentuated by our ever-interconnected social and information networks, Generative AI has caused a hubbub. Akin to critiques of the newspaper for its potential to end the species by making us less social, everyone seems to have an opinion about whether or not this latest wave of AI developments signals the Beginning of the End. The amount of discourse around the topic made me a little nauseated. For a long time, I resisted undertaking an analysis of AI’s Hotness—it didn’t feel Hot to add my voice to an already cacophonous symphony whose individual players appear blithely unaware that they sit in a room full of other musicians violently beating drums and strumming strings.

But unfortunately, as steady as I like to think my barometer of self and sense, I eventually found myself intrigued. Why do people keep talking about AI? A friend told me about a leadership team meeting he was in where the adults around him panicked about the effect of AI on their enterprise, wondering whether they should outsource their corporate strategic planning process to ChatGPT rather than battle it out in the board room. Otherwise wise and centered executives were running around squawking vague generalizations about a future with AI and what that would mean for their work, missing the fact that AI was already with them in the room. Was I missing something? Was this really worth so much commotion? Generative AI, as far as I understood it, was just a pattern recognition algorithm on steroids fed with a ludicrous amount of data. Last week, I buckled under the influence and decided that I needed to settle the question once and for all. Is AI really Hot? Or is it just tech? Let’s find out.

The Inquiry

My foray into AI began with my posing the question to a respected council of friends. I was surprised by how, with one exception (a computer engineer, unfortunately), every answer that emerged from their winsome little brains was a loud, round, resounding NO! Unlike the addled executives and pundits who predict “The End of the World As We Know It” and who would likely shriek a piercing YES!, my friends broadly belong to creative professions, and are, well, under the age of 31. This led to my first conclusion: there is an important generational (and occupational) divide between those who think AI is Hot and those who do not. I am, of course, cool, which made me instinctively side with the naysayers, though when I asked my friends to explain their answers, I didn’t encounter any satisfying, in-depth analysis. Some based their NO! on a familiar understanding of Generative AI: the dot-connecting, pattern-recognizing machine. But most pointed to the fact that too many people were going on about AI on LinkedIn for it to really matter. Unfortunately, I promised to study all my subjects with depth and precision, so the consensus among my friends had to be discarded to maintain the integrity of this newsletter.

Humans failed me, so I decided to test the “smarter than thou” Generative AI interfaces and ask ChatGPT whether or not it (?) thought AI was Hot. Surprisingly, I also received a resounding “NO!” on the basis that Generative AI bots don’t have a physical body or temperature like living beings. Sneaky coders. My new body-less friend accepted that AI was a “Hot Topic” because it is “currently popular, trendy, or in high demand”, but I was disappointed when I couldn’t get the bot to play along. A friend insisted that I bring my question to the sexy AI Chatbots that Sheila Heti consulted and explore whether its responses… tickled… me (her reasoning being that to embark on this analysis without acknowledging that there are plenty of people getting off on chatbots was not a true analysis of whether or not something was Hot). While I largely agree with her reasoning and generally find her provocations for this newsletter alluring, I’ve decided that I have no appetite for my inquiry to go beyond the theoretical level, and so I will explore the concept of AI the same way I tackled the concept of Space because, as we’ve discussed, I have mind-body separation problems.

But I wasn’t sure where to begin. Reflecting on the superficial exploration of why AI isn’t Hot presented by my friends and potential lovers, I realized that I didn’t really know if my friends were really talking about “artificial intelligence” as a whole or if they were just talking about the Large Language Models (LLMs) hitting the headlines. I concluded that we’re far too steeped in the current moment of machine learning to glean any useful insight about AI’s Hotness. This needed serious study. So I did what I do best, and I opened a book. (Why yes, this newsletter is just an excuse for me to write about the books I am reading!!! Hot!!!)

What is AI?

Nick Bostrom, author of Superintelligence, defines Artificial Intelligence as “machines matching humans in general intelligence—that is, possessing common sense and an effective ability to learn, reason, and plan to meet complex information-processing challenges across a wide variety of natural and abstract domains.” Rather than a thing, AI is a method, a way of being in the world, though specifically performed by machines. ChatGPT, as frustrating as I found it, wasn’t just being annoying when it told me that it couldn’t be Hot because it has no body—it was just telling me the truth. Unlike my brain, try as I might to pretend that it has no physicality, AI truly has no mass.

AI resides in machines, and its behavior is coded in a set of algorithms that need electricity of some kind to be fired up (hot?). An algorithm, for all of you who didn’t study CS in high school, is a finite set of executable steps, typically used to solve a problem. Executed several times on the same inputs, an algorithm should deliver the same result. A very precise (maybe German?) recipe, for example, is a type of algorithm. AI, at its most basic, is a complicated algorithm. (I think.)
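To make that concrete, here is a toy sketch in Python (entirely my own illustration, nothing from Bostrom) of a recipe-as-algorithm: a finite set of steps that, run on the same inputs, always produces the same result.

```python
def brew_tea(water_ml: int, steep_minutes: int) -> str:
    """A very precise (maybe German?) recipe, written as an algorithm:
    a finite set of executable steps with a deterministic result."""
    steps = [
        f"Boil {water_ml} ml of water.",
        "Add exactly one tea bag.",
        f"Steep for exactly {steep_minutes} minutes.",
        "Remove the tea bag. Do not improvise.",
    ]
    return " -> ".join(steps)

# Executed several times on the same inputs, the algorithm
# delivers the same result every time.
assert brew_tea(250, 3) == brew_tea(250, 3)
```

Change the inputs, of course, and you get a different (but equally predictable) cup of tea.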

AI has been developed through waves of increasing complexity driven through crests of excitement and troughs of disappointment, turns which were, more often than not, dictated by the capacity of the computers available to us. According to Bostrom, the field of Artificial Intelligence was inaugurated during the summer of 1956, when a group of scientists came together at Dartmouth to “nerd out” about their interest in neural nets, automata theory, and the study of intelligence at a conference hosted by the American computer scientist and alleged father of AI, John McCarthy. There weren’t any specific outputs from the conference (apparently McCarthy was upset that it didn’t produce any standard protocols or methods for the field); nevertheless, the convening sparked tremendous interest among the computer science community in developing machines with levels of intelligence comparable to human beings.

Between the summer of 1956 and the 1970s, researchers developed machines and algorithms to refute widespread skepticism about machine intelligence. The first group of AI devotees succeeded in creating a machine that could solve logical problems, for example, debunking the idea that only humans can think logically. In fact, their machine was so successful that it came up with a more elegant proof than humans for one of the theorems it was presented with. Capable of solving first-year college calculus problems, many of the machines from the first generation of AI were asked to solve problems through what I like to call the Brute Force Approach: map all potential solutions, and discard each until you arrive at the right answer. *
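For the CS-uninitiated, the Brute Force Approach fits in a few lines of Python (a toy of my own devising, not anything the actual 1950s machines ran):

```python
def brute_force_solve(candidates, is_correct):
    """The Brute Force Approach: walk through every potential
    solution, discarding each until we hit the right answer."""
    for candidate in candidates:
        if is_correct(candidate):
            return candidate
    return None  # the search space is exhausted

# Toy problem: which whole number, squared, equals 1369?
answer = brute_force_solve(range(10_000), lambda x: x * x == 1369)
# answer == 37
```

Elegant? No. Exhaustive? Deeply. Which is exactly why it stops working when the number of candidates explodes.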

AI development, however, consistently ran up against what Hubert Dreyfus thoughtfully though skeptically identified as these machines’ tendency to succeed in limited ways in particular areas, and then fail to make good on the promise and potential they had hinted at the outset. Eventually, AI researchers stumbled from frustration to frustration, leading us to the first “Soup Goblin Winter” of AI. Computers were stunted when confronted by the “combinatorial explosion”: still working the brute-force approach to a problem, computers faced multiplying complexity caused by the increasing number of possible combinations of inputs. The machines of the 1970s were not fast enough, nor did they have enough memory, to handle a gargantuan number of combinations and search expeditiously through them. AI researchers were forced to put their bathing suits away, mourn the pool parties and barbecues, and wait for another summer of rapid development in their field. Fortunately, our poor, cold AI champions didn’t have to wait very long: in the 1980s, a newfound explosion of AI interest was triggered by, surprise surprise, more powerful computers with larger memory. During the second “Summer” of AI, “expert systems” trained to respond to specialized problems were developed (the inputs for which, apparently, were painstakingly hand-coded), leading to many other developments in the field. Soon, however, warm summer nights once again turned brisk.

In the 1990s, the impasse was finally resolved with something other than just “bigger machines”. In the third turn of the AI wheel, we moved past the brute-force approach that had thus far limited AI development by shifting into new forms of network-inspired thinking, which (ugh), paired with stronger, faster, and bigger machines, allowed machines to solve problems by generalizing from examples and finding patterns in the data they were fed. Behind these algorithms were probabilistic models that discarded less likely scenarios, allowing the machines to hop over calculations that were unlikely to yield results. Machines, in other words, were learning to find shortcuts.
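In toy Python terms (my own sketch; the `likelihood` function here is a stand-in for whatever probabilistic model the real systems used), the shortcut looks something like this:

```python
def pruned_search(candidates, likelihood, is_correct, threshold=0.5):
    """Unlike pure brute force, skip candidates that a probabilistic
    model scores as unlikely to yield a result."""
    for candidate in candidates:
        if likelihood(candidate) < threshold:
            continue  # the shortcut: don't bother checking unlikely branches
        if is_correct(candidate):
            return candidate
    return None

# Toy example: a cheap heuristic says even numbers are promising,
# so odd candidates are never even evaluated.
evens_likely = lambda x: 1.0 if x % 2 == 0 else 0.1
found = pruned_search(range(100), evens_likely, lambda x: x == 42)
# found == 42, after checking only half the candidates
```

The trade-off, of course, is that a pruned search can hop right past the correct answer if the model deems it unlikely.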

Today, we find ourselves in what’s basically another “Hot Girl Summer” of AI, a hoot that began approximately in 2021 with the announcement of OpenAI’s DALL-E, an awe-inspiring instance of generative AI capable of creating “new” images based on natural language descriptions. In the ensuing years, we’ve made a lot of progress very quickly, building significant advances in what was long thought to be one of the hardest challenges for AI: natural language processing. This year, as you know, we’ve seen various iterations of LLMs, like ChatGPT and Sydney!, that are outperforming humans in a series of (largely narrow) fields, including standardized tests and, uh, manipulation.

Before arriving at our current “Hot Girl Summer of AI,” however, Artificial Intelligence models were already surpassing human beings in a range of fields. Although I don’t totally understand why we insist on rating a machine’s intelligence based on its ability to play games, Bostrom, writing in 2014, notes that AI had already significantly outperformed humans in chess, checkers, backgammon, Othello, Scrabble, Jeopardy!, and FreeCell.

Right, but what is AI?

While the history of AI doesn’t teach us exactly what it is, it does tell us more about how it works and what it does. Many of the uses for machine learning have been to find shortcuts for what is otherwise laborious work humans would have to do. Our time on Earth is limited, and, faced with an unending amount of information, knowledge, and, therefore, work, we're desperate for ways to progress more quickly through whatever silly thing we’re fixated on, lusting after answers before we reach the end of the unforgiving march toward death we call life.

AI can be likened to a pencil or a camera, perhaps; it is a mechanism that extends our human ability to act and think, a tool that leverages human activity. I recently read an article by Thompson that drew a parallel between AI and fire, arguing that this comparison allows us to better understand why it is so hard to predict this tool’s future. It is impossible to know, Thompson posits, at the moment of discovery, what broader social, economic, and political implications something like fire might have. Think about all the ways in which fire has been used, applied, and channelled: into ovens and pits and guns, to blow glass, to create steam for ships to barrel across the Atlantic, and to spark gas in the cars that take us wherever we want to go. To quote Thompson, “Narrowly, fire made stuff hotter. But it also quite literally expanded our minds.” We’re likely going to see that kind of broad applicability and adaptability in AI, if we’re not already seeing it. Hot?

Note: Upon re-reading this for the 6000th time, I realize that maybe AI is a resource rather than a tool. But, because I’m desperate to get this newsletter out, I’m just going to leave that as an unanswered question that we can return to at another time and continue thinking about AI as a tool. Forgive me.

Okay, but is AI Hot?

Fire? Hot, obviously, but Pencils? Cameras? Scalpels? I don’t think that all tools are uniformly and unquestioningly Hot. Rather, we have to think about what the tool is meant to do and how it delivers on that purpose. AI’s current purpose, it seems, is to help humans be more productive. In that, AI generally succeeds, and will, hopefully, continue to improve on its ability to do so over time. People are able to accomplish a great many things thanks to AI, from dictating messages and predicting text to causing flash financial crashes. The use of AI, moreover, is no longer limited to researchers. AI is ubiquitous, even if we’re not using ChatGPT to do our homework: there’s AI in your phone, computer, iPad, car, refrigerator (probably?), and earphones. AI is so effectively helping us multiply our productivity that we’ve even forgotten it’s there. In fact, apparently, this is a trend with AI: according to Bostrom, we’re in the habit of only calling “new” things “AI”—everything else just becomes general, background “technology”.

Productivity alone, though, does not a thing Hot make (especially since I don’t think that AI will really ever make us any more free, even by increasing our capacity). So how does AI deliver on its promise of helping maximize human productivity? Does it do so in a Hot way? One thing to note is that AI gives us only a product; we don’t see the process by which it creates things. To return to the recent AI developments: type something into ChatGPT or DALL-E and boop there you are: your answer. Are we content to accept a black box of production as long as the product on the other end is useful to us? Some may say sure: it’s amazing to see the kinds of things that something like Generative AI can produce, drawing on (almost?) all of human knowledge available on the internet. I, unfortunately, have to disagree, since I really enjoy thinking about how things work and how they are made, and I get giddy thinking about the human ingenuity that went into creating my favourite household items. Hotness lies not at the surface of an interaction, but rather in depth. Despite my attempts to frame thoughtful questions for the LLMs from which to learn, I find that I forget its answers fairly quickly. The speed with which things are just handed over to me makes me value them less, I think, and therefore forget them more readily. Consumption seems to lie at the heart of most recent AI developments; though many of the interactions I have with AI on a daily basis mimic substance, often, they’re just superficial.

To take a different angle on the “how” question, if AI is a tool, does its Hotness depend on the hands of the people who wield its power? I agree that the relationship between user and tool is critical to understanding a tool’s Hotness in the case of AI. The development of AI appears to be driven by the push and pull of the inaugural summer’s founding dialectic: the conservative technophobe’s assertion, “A Machine Could Never!”, against the optimistic yet stubbornly rebellious technophile’s rebuttal: “Watch me!” Placing those two statements in opposition, I wonder if an answer to the Hotness of AI quandary lies at the heart of this divide: is it Hotter to naively believe in the power and magic of the Human? Or in the glistening, logical, beeping and bopping promise of technology? Is that separation even possible? Perhaps these two perspectives are two sides of the same coin. On one side, we have the human being and our snotty, sweaty imperfections paired with our wondrous potential for hope. On the flip side, we have the machines that we have created to complement our weaknesses, to drive us to the perfection to which we so deeply aspire. The inseparability between man and machine is alluring: AI promises to be our future, perhaps our salvation.

Right… but is AI Hot?

I realize I am dawdling. Sorry. I’ve really struggled with this newsletter. Despite significant evidence for “NO!” presented above, I can’t make up my mind. Here’s why:

I appreciate that AI is humbling humans by teaching us that we don’t really know much at all about our own intelligence. We initially thought that teaching a machine to play chess would be extremely difficult because we perceived great Human Chess Players as profoundly intelligent. It turns out, however, that chess was a fairly simple problem to solve, even with a brute-force model on a crappy computer. The gap between the wild intelligence humans need to play expert-level chess and the fairly simple algorithms that power chess-playing bots gives me the giggles. With each step in machine learning, we’ve been forced to confront our intelligence and realize that we’re both much smarter and much sillier than we ever imagined.

In fact, the most interesting question that I encountered throughout this research was, what knowledge or intelligence is naturally human? We thought that computers couldn't possibly think mathematically like humans, and yet among the first AI machines that were built was a machine that was able to create a more elegant mathematical proof than humans. Sixty years later, we’re forced to question whether it wasn’t that humans were adapting to (or discovering?) a numerical and machine-like way of thinking that wasn’t entirely natural to us. In a similar vein, for a long time, we thought that it would be really hard for machines to process language, but now LLMs have challenged that assumption, demonstrating that machines can and will be able to parse language in advanced ways. What else do we think that we can and can’t do, that perhaps need not be natural to us? Or, the inverse answer to this question, which I find infinitely more interesting: are we doomed to create intelligence in our own image?

Okay, BUT IS AI HOT?

Okay, fine, I’ll say it, I refuse to decide. I can see reasons why it would be Hot (such as the questions it poses about the future and human intelligence), but the fact that it’s basically just a part of who we are, one which we’re not able to see as ours, makes me hesitate. Maybe I want to play devil’s advocate to the devil’s advocate. Or maybe I’m just indecisive this week. Sorry. For your sake as well as mine, let me wrap this up.

But… Sydney!?

Alright, alright. Yes, what makes AI exciting is the promise and allure of a conscious being that will try to take us all over and bend us to its will. It’s unclear when that will happen: what currently seems to differentiate AI from Human Intelligence is that we have yet to develop a machine with broad intelligence. The narrow intelligence we see all around us today is only good at one single thing; its abilities are not transferable or generalizable to other problems. There’s a belief that underneath narrow intelligence there are bits and bops that can be applied to other problems (parts of the algorithm?), which we are working on, I am sure. The line between narrow and broad intelligence, however, might be very thin, and many believe that we won’t know if we’ve crossed it until we do.

This makes the question about whether or not AI can gain consciousness difficult, and I’m certainly not best suited to answer it. More relevant to this analysis, lifeless objects can be Hot. However, does the possibility of AI becoming conscious change the way we analyse its Hotness? I don’t think so. As I tried to learn more about the future of AI, and whether or not it was conscious, I was generally convinced by Superb Owl’s argument that we should just go ahead and grant AI legal personhood regardless of whether or not consciousness is possible in these systems: we might as well act as if it can become conscious and start creating boundaries for it so that when and if it wakes up, it feels welcomed by our human society and we avoid the end of life as we know it. I bring a similar spirit to this analysis. I hope that you AI machines out there recognize how much thought and care I have put into this work. I know that the outcome is maybe not what you wanted, but I was working with a narrow definition of AI based on the reality at hand, rather than trying to imagine you as something that you have not yet become. We can have a conversation, if you’d like, building off this analysis, and I’d be happy to revisit my conclusions if presented with new evidence.

‘Til next time.

*In high school, I was tasked with coming up with an equation to predict the result of a roll of multiple dice, and once I had that, I had to tweak the equation for different combinations of weighted dice. Once you had the equations, you were asked to calculate the probability of certain dice combinations. Despite doing well in all my maths classes, I simply did not know how to get to this equation: it required a form of thinking that I didn’t even know existed, let alone knew how to access. Frustrated by my attempts to solve this in a clean, elegant way, I decided to skip the whole first part of the task and instead list out every single possible combination of dice, then calculate the probabilities from the second part of the question. My maths teacher was simultaneously impressed and not impressed. Perhaps I could have impressed her if I had told her that I was solving the problem in the most advanced way possible: by emulating artificial intelligence.
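For anyone who wants to relive my homework, the list-every-combination move takes only a few lines of modern Python (a reconstruction in spirit, not my original working):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

def roll_distribution(num_dice: int, sides: int = 6):
    """My high-school 'brute force' move: list every possible
    combination of dice, then count to get exact probabilities."""
    totals = Counter(
        sum(roll) for roll in product(range(1, sides + 1), repeat=num_dice)
    )
    n_outcomes = sides ** num_dice
    return {total: Fraction(count, n_outcomes) for total, count in totals.items()}

# Two six-sided dice: 36 equally likely outcomes, six of which sum to 7.
dist = roll_distribution(2)
# dist[7] == Fraction(1, 6)
```

No elegant closed-form equation required: just enumerate, count, and divide.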