On Living and Working With AI

As we find ourselves in the midst of a paradigm shift brought on by large language models (LLMs) and modern AI, it feels timely to reflect on their practical implications. In the course of writing I’ve found too many aspects of this subject to tackle at once, so this will be the first in a series of reflections.

This moment in AI’s story arc is particularly interesting because of a lack of consensus on how far-reaching its implications are. Opinions vary from “LLMs are glorified autocomplete” to “AI labs are building God”. Those developing and delivering this tech would have us believe we’re on an inevitable trajectory towards a new way of living, a world where AI serves humanity and automates away nearly everything.

In practice, most would concede that the vast majority of the ways we interact with digital information have been changed by AI. Personal examples include an anecdote my niece shared about how, without ChatGPT tutoring her in physics, she wouldn’t have been able to keep her GPA at university-admission levels, or my seventy-year-old father using Meta’s AI chatbot to walk him through repairing his sliding backyard screen door. Whether most people are aware of it or not, the simple reality is that AI has been disrupting our lives since long before LLMs became widely distributed (e.g. social media recommendation algorithms). It therefore seems reasonable to expect that this is just the most recent development in a story arc that will unfold over a long timescale.

It’s safe to say we’re past the point of considering these things glorified autocomplete (even if, to some degree, that’s a reasonable way of thinking about how they fundamentally work). Today all mainstream LLM products offer “reasoning” and “deep research” capabilities, and their output extends beyond text and audio as we encounter models that can generate remarkable images and even hyper-realistic videos that obey the laws of physics. Beyond the impressive capabilities, we’re beginning to notice direct impacts on culture and society at large.

One of the most interesting trends I’ve noticed in the proliferation of this tech is the literal commoditization of intelligence, which itself seems to be becoming more abundant in general. This is a subject I think deserves its own piece, but I’ll comment on it briefly: I’m sure many of you have tried sharing a half-baked thought or idea with one of these chatbots only to have it return the idea to you more fully formed, with suggestions for advancing it. If you’re unfamiliar with the capability I’m describing, I recommend the following experiment. The next time you take on a small project, like rearranging your home entertainment system, planning your next vacation, or hosting an event, fire up one of these chatbots and use it as a brainstorming partner. I think you’ll be hard-pressed to do so and not find that it enhances the process in some way.

In my capacity as a technologist building tools for knowledge workers, I have witnessed firsthand the economic impact that LLMs are having across segments of the economy. The reality is that in the realm of building and deploying technological systems, large language models have become an essential part of nearly every layer of the process. During my daily activities I will use an LLM as a fellow systems and solutions architect, a code reviewer, a junior programmer, and a support specialist, to name a few of the roles it has assumed. The breadth of tasks I can complete independently has grown substantially, because previously obscure areas of technological knowledge that would have required days of research to become productive in are now knowable within a few hours of conversation with an LLM. Said differently, I can go from rough concept to working prototype in an unfamiliar area of expertise in the span of time it takes me to drink a few coffees. While it’s tempting to opine on the downstream economic implications of this newfound productivity, I’ll leave that for another time; it’s glaringly obvious that individual technologists are a hell of a lot more productive now than ever before, and that trend is certain to continue.

To comment on the impact LLMs have in the workplace outside of tech, I’ll share observations from deploying them to serve knowledge workers. A substantial amount of knowledge work is done in what I’ll call the last mile of information: getting some form of unstructured data into a structured format so it can be used in a standard process. For example, combing through many thousands of pages of medical records to determine who paid for a procedure and when, or reading a packing slip on a box to determine the insurable value of a shipment. One day all of this information will be produced and consumed digitally, and all of the intermediate handlers of it will themselves be on the grid, but that future is farther away than most would imagine. In these settings it’s not enough to simply read a source to find the information of interest; there’s nuance to it, and it often takes experience and deep subject matter expertise to know what to pay attention to. As it happens, this is a task that LLMs excel at with super-human speed and endurance, and I’ve encountered endless examples of these types of problems where deploying LLMs works to great effect. The net result is that the hours of daily “analysis” that knowledge workers have been doing for most of their careers are now being freed up.
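To make the pattern concrete, here is a minimal sketch of the kind of last-mile extraction step I’m describing. It uses the OpenAI Python SDK purely as an illustration; the model name, prompt, and field names are placeholders of my own, and a real deployment would involve far more validation and domain-specific prompting.

```python
# Minimal sketch of LLM-based "last mile" extraction: unstructured text in,
# structured fields out. Model, prompt, and field names are illustrative only.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_shipment_fields(packing_slip_text: str) -> dict:
    """Ask the model to pull a few structured fields out of a packing slip."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract shipment details and reply with JSON containing "
                        "'items', 'declared_value_usd', and 'ship_date'. "
                        "Use null for anything not present."},
            {"role": "user", "content": packing_slip_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Example usage with a toy packing slip:
# fields = extract_shipment_fields("3x ceramic vases, value $420, shipped 2024-05-02")
# print(fields["declared_value_usd"])
```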

While industry analysts, researchers, and the like continue to debate nearly every aspect of these technologies and their implications, it’s apparent that they’re now part of the fabric of daily life and work and that their influence is only increasing. The demand for LLMs isn’t slowing, and neither is the pace of their creators in building the next iteration. Despite this adoption and growth, one can’t help but notice that the economic return on the investment in LLMs is, as of yet, rather underwhelming relative to what one might expect it to be. We also can’t ignore that, despite their purported benchmarked intelligence and capabilities, these AIs still regularly commit massive intellectual blunders in ways that challenge the notion that they’re any kind of intelligent at all. It’s this strange juxtaposition between seemingly omnipotent “being” and confused robot that, I think, succinctly describes the strange part of the AI story we find ourselves in.