Welcome back to week 3 of Jab’s Lab. As always, I’m so appreciative that you are reading. Today’s topic is AI, and how it will fundamentally change the way we interact with technology and each other.
I started writing, and had so much to talk about, so I am splitting this post into Part I and Part II. If Denis Villeneuve and Jon Chu can do it, why can’t I?
Part I below talks about the current state of interfaces, why chatbots feel so different from anything that’s existed before, and the limitations of a strictly chat-based interface.
Part II will talk about how to go beyond the existing chatbot interfaces and build AI-enabled products that keep humans at the center of interactions. I’ll talk about how you can incorporate AI into your product and how we can leverage AI to build better user experiences.
Let’s assess where we were and are
Let’s roll the clocks back to the late 90s, when a new product called Google launched. Suddenly you had instant access to all of the knowledge in the world. It changed the way that we interacted, both with computers and each other. The web became searchable, and with it, all human knowledge. Smartphones made that knowledge ubiquitous in the late 2000s, putting answers at our fingertips at all times, not just in the computer lab.
Fast-forward to 2022, when this new thing called ChatGPT came out and people had no idea what it was. Maybe it was a chatbot, but we had seen many chatbots that didn’t work particularly well over the previous decade. Maybe it was a knowledge hub, though its knowledge was cut off at the time of training (September 2021) and it was prone to hallucinations.
My coworker shared a ChatGPT-generated rap about our company in the style of Eminem and I remember being surprised, entertained, and impressed. I didn’t understand how it would be useful. But it was certainly neat.
Today, the possibilities are much greater than they were in 2022, as vast resources have been poured into AI development. The models are good enough to write a lot of code, good enough to assist with scientific research, and good enough to “reason” about more complex topics. The models are smart.
And yet, despite the vast improvements in the models, the interfaces have not changed much. ChatGPT is very similar to what it was in 2022, aside from better support for uploading documents and sharing information. Claude has added Projects and Excel support, and has positioned itself as a more specialized company knowledge base.
But at the end of the day, these interfaces are still the same: I ask it something and it generates some text.
OpenAI and Anthropic are fundamentally AI model companies with some tech product capabilities. While they’re very good at many things, these chatbots as they exist today barely scratch the surface of what AI-based products will look like in the future.
A brief aside on AI models vs. products
Feel free to skip this section if you understand how AI works.
GPT-4o, o1, o3, Claude 3.5 Sonnet, DeepSeek, and Llama are all AI models. These models act like a brain. They are a complex set of parameters (think neurons) that take an input and produce an output. They are trained on vast swaths of the data publicly available on the Internet. The way I conceptualize them, models are a summarization engine for all of humanity’s collective knowledge. A brain to contain all information.
ChatGPT is a consumer-facing tech product. So are Claude, Grok (in X), Google Gemini, and many more. These are products that serve as the entry point for us to interact with the models. If the model is the brain, ChatGPT the app is just how we see and hear the thoughts.
When you hear about “training a new model,” they are talking about creating a new brain. Each iteration of the brain improves the underlying reasoning, thinking, writing, etc. of the model. They release these models and then you can use them in the products that exist. Each new model will make the products that use it better, but it’s important to note that the model and the products are not the same.
As a developer, I can integrate any model into my own code by calling an API, and use the reasoning capabilities of the model. It can take my data, transform it, extract insights, etc. The possibilities for how one can integrate AI into a new product are endless and exciting. Don’t worry if this doesn’t make sense right now, part II will do a deep dive into the mechanics.
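To make the developer point concrete, here is a minimal sketch of what “calling a model by API” looks like. It follows the shape of OpenAI-style chat completions requests; the endpoint URL, model name, and prompt text are illustrative placeholders, and in a real program you would POST this body with your API key in an Authorization header (or use a provider’s official SDK).

```python
import json

# Assumption: an OpenAI-style chat completions endpoint.
# Other providers (Anthropic, etc.) use a similar but not identical shape.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(user_text: str, model: str = "gpt-4o") -> dict:
    """Package a user message into the JSON body a chat-style model API expects."""
    return {
        "model": model,
        "messages": [
            # A system message steers the model's behavior for your product.
            {"role": "system", "content": "You extract insights from the user's data."},
            # The user message carries the actual data or question.
            {"role": "user", "content": user_text},
        ],
    }

payload = build_request("Summarize the key trends in this sales data: ...")
body = json.dumps(payload)  # this JSON body is what gets POSTed to API_URL
```

The interesting part is that the “product” is everything around this call: what data you feed in, how you transform the output, and how you present it to the user.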
What made Google different from ChatGPT
Before we dive into what makes a good AI product, let’s compare Google and ChatGPT at the time of their respective launches.
Google circa 1999 was not that different from ChatGPT in 2022. There was a single text box, and unlimited possibilities. Yet, the feeling you got when you used the two tools could not have been more different.


Using early Google felt like going on a journey. You typed in a prompt, and it transported you to a page full of possibilities. “Showing results 1-10 of 2,500,000” – it might as well have been infinite. It gave you a sense of the scale: how much information was out there, how much it parsed through, and it gave you the 10 best and most relevant links. Your response was: “Thank you Google, you are incredible for finding me this information. I’ll take it from here.” You clicked one link; it wasn’t what you wanted, so you went back and clicked another. You clicked around for a while, learned a bit, and eventually found the information you were looking for. You found some other information along the way. You made decisions, you refined your thinking, and you decided what information suited you.
What makes ChatGPT feel so different? You type a similar prompt, and it gives you information, but it doesn’t feel like you’re going anywhere. You wait around as ChatGPT crunches numbers in the back room, only to emerge with a block of text that you may or may not have asked for. You don’t go on the journey with it. The journey happens without you: the endless knowledge is parsed by the model, and you’re given a summary of what it found. For a long time, I couldn’t articulate why this bothered me, but it feels like I’m being left behind as the AI tells me what’s best for me to know. There’s no transparency in how it comes up with its response, which was magical at first, but now it’s a bit disconcerting.
Early Google was a portal to another world. ChatGPT is a world-class summarizer of any topic. I’d love to bring back more magical experiences like early Google.
ChatGPT as an interface
I argue that ChatGPT[1] is a bad interface for a phenomenal technology: the AI models.
The models are incredible, beyond anything I thought was possible in this period of time. And I am excited about the possibilities of what we can build using them.
But as an interface, ChatGPT is too much, and also not enough.
It’s too much in the sense that every chat you’ve ever had is saved in the sidebar as siloed, often useless, information. How often have you looked back on a chat and not wanted to parse the whole thing? For me, it is very frequent. And for many things, I don’t care about having a log. I just looked back at my history, and saw one from 2023 about a very specific coding question. It gave me the right answer, and yet I will never reference that chat again. If I’m ever curious about the same topic, I’ll just ask again in a new chat[2].
It’s not enough in the sense that the ChatGPT interface does not add much value to the model itself. You prompt it for what you want, and it gives you an output. Some of the recent advancements with GPT Canvas, Claude Artifacts, etc. give a good sense of how to better organize information with these AI models. But the value of the interface is still just in getting access to the model, and structuring the output in a nice, clean, organized way.
While this is cool, I envision a much brighter future of AI-based interaction.
The future: Chatbots, agents, or integrated AI?
The fundamental value of AI is not in which models we use, but rather what we can do with them.
With advances like ChatGPT Operator, Claude Computer Use, and even open-source projects like browser-use, it will be interesting to see how things change when your AI agent can start to just use your computer as you would: moving your mouse, clicking, using your keyboard. It’s a potential avenue for improvement for the chatbot interfaces. However, these improvements are not there yet[3]. And in the future, even if we have AI agents doing tasks for you online, I don’t think we will ever want to give up our own agency to do the things we want to do.
In thinking about the feeling from the early Google example above, I believe there are ways to use AI to enhance your experiences, not leave you out of them. We need to be more creative about how we can use AI to build better products, with humans still at the center.
In next week’s article, I’ll share my thoughts on what this could look like, and how you should think of ways to incorporate these new paradigms into your own products. Can’t wait to see you there.
-Cory
[1] Note that I am just saying ChatGPT for simplicity (like saying Kleenex), but this can apply to any chat-based AI product.
[2] Another one I found in my GPT history was “How long does rotisserie chicken stay good for in the fridge?” How embarrassing. I’m sure I wanted to see if it was still good for lunch that day. And now I know the answer. But there’s no reason for that to be on my permanent record. I will never look at that again. Probably.
[3] There are many reports of Operator not being very good at doing simple tasks and taking too long to do them. At least it doesn’t waste a ton of electricity to browse the web for you.