Mythbusting GenAI

Artificial Intelligence & MLOps
Video

Generative AI has become a popular buzzword, but there is still a lot of confusion around what GenAI actually is and what it is capable of. Join Caylent's Randall Hunt and Mark Olson as they debunk common myths and misconceptions surrounding GenAI and share some hot takes.


Mark Olson: So Randall, generative AI is the term of the moment. What are the myths? Hot takes, misconceptions?

Randall Hunt: What do you think AI means?

MO: To me or to the general public? Because to me, I've done a little bit of deeper digging, so I've got a semi-educated take. I'm not a scientist, but for me, it's really about taking the corpus of human knowledge, or at least the bit accessible on the internet, and turning that into the creation of new content. So giving a little bit of a prompt, heading in a direction, and then letting it take off to see where the collected wisdom goes. Now for me, that's interesting because in some cases, that's a place where I get stuck. I've got half an idea and I don't know where to take it, and that can be a place to unstick me and get me moving. So it's really exciting from that perspective.

On the other hand, I've got other cases where I need something to be relatively smart and well informed, and the average of human intelligence may not be what I need to put out. If it's a life-critical use case, I'm a little bit worried about applying it there. So that's my take on it in a nutshell.

RH: So, there's an interesting fact about these large language models: they have however many billion parameters, but regardless of that, they get pre-trained on trillions of tokens, and those tokens come from things like Wikipedia, the Common Crawl dataset, Stack Overflow, or -

MO: - The Pile, the list grows. I think people are building their own corpus as they go.

RH: Like LiveJournal, all of the teen angst in the world has been captured by these models. And by default, by the way, these models aren't written to be assistants; you have to trick them into behaving as helpful assistants. At the end of the day, it's just fancy autocomplete, autocomplete on steroids. Its only job is to predict the next token, to come up with what is next in the sentence. So to make it act as an intelligent assistant, you have to go in and say, "Hey, pretend you're this helpful assistant. What comes next?"
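To make the "fancy autocomplete" point concrete, here is a minimal sketch of greedy next-token prediction, assuming the Hugging Face transformers library with GPT-2 as a stand-in base model (not a model either speaker refers to):

```python
# A minimal sketch of "fancy autocomplete": greedy next-token prediction.
# Assumes the Hugging Face transformers library, with GPT-2 as a stand-in base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # a score for every token in the vocabulary

next_token_id = logits[0, -1].argmax().item()  # greedy pick: the single most likely next token
print(tokenizer.decode([next_token_id]))       # the model only ever predicts what comes next
```

Chat-style assistants wrap extra prompting and fine-tuning around that same loop; the underlying objective is still predicting the next token.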

So another common thing I found is that if you tell the model to act as the world's best expert in a particular topic, the results are actually worse than if you tell it to behave as an average person with a modest understanding of the subject, or simply as an expert in the topic. That's because the data it has on the world's single best expert is basically zero. The internet doesn't have that information.

MO: Yeah, if you ask it to be three or four standard deviations from the mean, you're drawing from a sample that sits three or four standard deviations out, so it doesn't have a lot of data to train on at that edge. That's super unintuitive, because one of the rookie mistakes with prompt engineering is telling the model to be like Stephen Hawking. There was one Stephen Hawking, and he did do a lot of writing, but the corpus he produced isn't equivalent to the collective wisdom of everyone on the internet.

RH: So you come into these things and ask what GenAI means, and my mom, I'm sure, would say, "Oh, it's an AI named Jen." At the end of the day, these things are not sentient. They are not artificial general intelligence, and they are not going to replace all of our jobs. Maybe my job, because I'm just a bunch of Lambda functions wrapped in a trench coat. But it's not taking away coders' jobs or anything, at least not any time soon. You'll see these takes on the internet where people say there aren't going to be any coders in five years. Absolutely incorrect. The person who said that has actually retracted it, so I don't want to attack him, but it's just an absurd thing to propose, because what's going to happen is that coding is going to become more accessible in the short term. It's going to be easier to learn to code because you're going to have these AI code generators by your side, helping you put together the different Lego blocks. There are actually going to be more coders in the short term rather than fewer. It is going to change the way we code.

MO: Well, if it's increasing the efficiency of a developer's work, then you can potentially afford more developers, or, if you're a product company, your cost per feature goes down. So there's a resulting impact on the bottom line. It's kind of interesting that there probably won't be fewer developers overall; there will probably be more, because they're more cost-effective. It's easier for companies to make that investment in somebody who has that wealth of knowledge behind them.

RH: Yeah, if I run a bar and I'm not a super technical person, I can pop open CodeWhisperer and say something to the effect of "Make me a website for the bar, and here are links to some images." It'll go and give me the HTML, and then I can say, "How do I upload this?" and it'll say, "Go to S3." Setting up the CloudFront side and the certificates and everything might be a little bit beyond what CodeWhisperer can do, but that's how it's going to work.

MO: Well, that's why we still need a little bit of professional knowledge for some of those intricate use cases. But the common use cases become a lot easier and get accelerated by that.

I want to go back. You talked about artificial general intelligence, and I think people are seeing AI front and center, firsthand, maybe for the first time, because it used to be hidden behind a product, a feature that just works magically. If you're using an Apple product or Netflix and it's making a great recommendation for you, or if you've got a credit card and Amex has flagged a fraudulent transaction for you, all of those things were AI behind the scenes. Now that it's front stage and working in a conversational way, people see it as just one step away from Skynet; they're ready for it to take over. In the same way that you said it's not taking away developer jobs in the next five years, how would you differentiate between what we're seeing with ChatGPT's natural language capabilities and what would be a true artificial general intelligence?

RH: Well, there's a common conversation around this that I have in bars and on airplanes and with anyone who will listen to me as I shout it from the rooftops, which is: language is the serialization of understanding. The way that we pass knowledge from one person to another, because we lack a brain-to-brain interface, is through language. It's the way that I say, "Hey Mark, should we hire this person? They're like this, this, and this," and then you say, "Yes, and this, this, and this." That's how we share and exchange information. A lot of people have what's called an internal monologue; they narrate their own life as they go along. Some people don't. But this stuff could eventually give rise to some form of sentience, because language is the serialization of understanding, and sentience is the ability to understand the world around you.

So could it happen? Absolutely. Is it happening? No, not yet. It's insane to even suggest that these things are doing that. If you ask ChatGPT to spell a word, it often can't, because it doesn't really spell; it doesn't have the breadth or the capability to go out and expand beyond what it was trained on.

Here's another problem: these models get locked in. They go down one path and completely lack the ability to go back and trace another path in their cognition. Once they've selected a certain topographical structure through which they're going to navigate their reasoning, they're locked onto that path; there's nothing else they can do. Humans can go back. We can say, "Oh, I screwed that up, let me fix it."

MO: We've seen plenty of examples of generative AI chats going wrong, where they just dig a deeper hole once they've been wrong, which gets called hallucination. I would say the people watching this video are probably very interested in the kinds of conversations you're having in bars and airports. I'm curious how the random strangers subjected to this in bars and airports are finding it, but that's a topic for a different day.

RH: You know the person who keeps calling you about your car's extended warranty? In this case it's me going, "Hello, my name's Randall, have I talked to you lately about generative AI?"

MO: Fantastic. Yes, you have, in fact, with me.

So, we've talked about job replacement, and we're starting to talk a little bit about hallucination. There's this case where, to your point, the model gets down one of the branches of its prediction and it's just wrong.

RH: Across successive prompts in a chat interface (and it's all about the interface in the end), you can tell the model, "You were wrong, here's the error message I got," and it'll say, "This is how I fixed that." But it still hallucinates completely random APIs that don't exist. I remember working on something maybe a week ago that was supposed to use the Zoom API to list all of the recordings and download them. I used an AI code generator to help me do it, and it completely hallucinated parts of the SDK that just weren't real and didn't exist. They logically made a lot of sense, and I kind of wish the SDK did have them, but they weren't real, and when I gave it the error, it didn't know how to fix it. So then I had to do the normal coding thing: pop over to my browser, look up the API, all that good stuff.

MO: Well, and also send your assistant's recommendations to the Zoom product team, because it sounds like they might have been better than the API they've built.

RH: Now, that's an interesting point. What is it going to look like when we use these models to inform API design? That's a nice little area to dig deeper into, maybe another time.

MO: Oh, we've just seeded a product company idea!

RH: “Hey, outsource your DevRel and Dev marketing to this AI!” That'd be pretty cool.

MO: Yeah, why not?

I think another thing, and I don't know if it's a myth or a hot take, is the idea that generative AI changes everything for business: it blows up old models, it's the nth industrial wave, those kinds of things. Almost all of that is hyperbolic. Where do you see it as being closer to the truth? Where do you see it really shifting the sands for companies that have established themselves?

RH: I think it could be something real in individualized education. Right now, the way that our industrial-revolution-based teaching system works is that we have a lecturer and we have students, and there's not a lot of one-on-one time between the student and the teacher, who is supposedly the expert in the topic.

Now we suddenly have access to this thing that is not necessarily the world's foremost expert in all of these topics, but is above average on a lot of different tasks, and it doesn't get tired, sick, angry, or frustrated having to explain the same thing for the hundred millionth time. In that regard, I do think education is ripe for significant changes and advances for individual students. I think the future is bright there.

MO: Nice. That's not the first take you hear when people talk about generative AI use cases, so that's an interesting one to bring in.

RH: I'm just a super odd person. I think about the weirdest takes, I'm sorry. What do you think is going to be the thing that actually gets revolutionized?

MO: There are ones we know are happening today. We see areas of content creation being completely disrupted. That's a relatively obvious take: writing assistants are reducing the number of content creators needed. So I would discount that as future-facing; it's something we see happening today.

RH: This entire video was generated by AI… not really.

MO: With the right prompt, it could be, though, or at least with the converged models we see coming around the corner, where you're starting to blend multimedia and text and could potentially prompt this.

RH: With 1 million GPUs, you too can create videos.

MO: Indeed. I think the disruption, not to take exactly your education example, is in internal knowledge bases and just understanding company information. When you get to a significantly sized enterprise, and even small and medium-sized enterprises, you generate this corporate knowledge base that takes time to absorb. It takes a long time to become an expert in an individual company, and to some degree that ramp shortens with experience. People who have switched jobs a number of times have an idea of how to come up to speed, the questions to ask, and where to go. But the quicker you can create your own experts, the more flexibility you have in terms of workforce. If you can train people more efficiently, you can bring in people from varied backgrounds. You don't necessarily have to hire somebody who's 75-95% ready; you could start at 50% and still have them be productive and grow into the role. So I think that use case, making it easier to become an internal expert, is really interesting.

RH: I like that. And another area, if you take that same concept, because that's sort of the retrieval augmented generation component, is contact center scenarios. Generative AI will genuinely revolutionize the contact center experience. If you look at customer support agents for everything from airlines to rental cars, the stuff we have to interact with in the real world where we need to talk to a real person, or to an agent that has the ability to go in and make changes, we can shift a lot of the cost and staffing of those agents onto a generative AI platform that can go out and say, "Have you tried this yet?"

But not like Clippy. I think the worst, most dystopian version of this future is that everything becomes Microsoft Clippy. For the Gen Z folks in the audience, you probably don't remember it, but Clippy was this hellacious little mascot in Microsoft Word that would pop up and say, "Hey, it looks like you're trying to write a resume. Have you tried this?" That is the dystopian future: everything becoming Clippy. We want the suggestions to be fluid, useful, and right there in line.

MO: Well, your customer service example is a good one, because you can imagine a software platform that transcribes what the user is saying in real time, searches against the internal company knowledge base, and brings up the top five paths to resolution, maybe the top two or the top one. You're still applying that human customer service rep's knowledge to judge whether that's the right resolution and whether we're still on track with this customer, but it really augments their ability to solve customer problems in a way that's seamless and isn't like the Clippy experience. I think that's a good way to transform the customer experience.
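As a rough illustration of that flow (minus the speech-to-text step), here's a minimal retrieval-augmented sketch. It assumes the sentence-transformers library for embeddings; the knowledge-base entries and the call_llm helper are hypothetical placeholders, not anything the speakers describe building:

```python
# Illustrative sketch of a retrieval-augmented contact-center assistant:
# embed the caller's transcribed question, retrieve the closest knowledge-base
# articles, and hand both to a language model to draft suggested resolutions
# for a human agent to review.
# Assumes the sentence-transformers library; the knowledge base below and
# call_llm() are hypothetical placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

knowledge_base = [
    "To change a flight, open the reservation and choose 'Modify trip'.",
    "Refunds for refundable fares are processed within 7 business days.",
    "Baggage claims must be filed within 24 hours of arrival.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
kb_vectors = embedder.encode(knowledge_base, normalize_embeddings=True)

def suggest_resolutions(transcribed_question: str, top_k: int = 2) -> str:
    # Retrieve the most relevant knowledge-base entries by cosine similarity
    # (vectors are normalized, so a dot product is the cosine score).
    query_vec = embedder.encode([transcribed_question], normalize_embeddings=True)[0]
    scores = kb_vectors @ query_vec
    top_docs = [knowledge_base[i] for i in np.argsort(scores)[::-1][:top_k]]

    # Ground the model in the retrieved context; the human agent still
    # decides whether the suggestion is the right resolution.
    context = "\n- ".join(top_docs)
    prompt = (
        "Using only the context below, suggest resolutions for the customer.\n"
        f"Context:\n- {context}\n"
        f"Customer: {transcribed_question}"
    )
    return call_llm(prompt)  # hypothetical model call; swap in your provider's API
```

The agent reviews whatever comes back before acting on it, which is what keeps the experience augmentative rather than Clippy-like.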

RH: Yeah, I like it. I think that will be one that happens soon.

Anyways, we've talked for a little while now and I think we've probably collected enough hot takes for at least one TikTok, and if not, maybe we'll find a model that can go and make more of them for us.

MO: Well, I don't know, I'm a little old for that. I don't know what the length of one TikTok is, so I think we might have gotten a couple of them out of it.

RH: All right, good to talk to you as always.

MO: Good conversation, Randall.

_________________

Are you exploring ways to take advantage of Analytical or Generative AI in your organization? Partnered with AWS, Caylent's data engineers have been implementing AI solutions extensively and are also helping businesses develop AI strategies that will generate real ROI. For some examples, take a look at our Generative AI offerings.


Randall Hunt

Randall Hunt, VP of Cloud Strategy and Innovation at Caylent, is a technology leader, investor, and hands-on-keyboard coder based in Los Angeles, CA. Previously, Randall led software and developer relations teams at Facebook, SpaceX, AWS, MongoDB, and NASA. Randall spends most of his time listening to customers, building demos, writing blog posts, and mentoring junior engineers. Python and C++ are his favorite programming languages, but he begrudgingly admits that JavaScript rules the world. Outside of work, Randall loves to read science fiction, advise startups, travel, and ski.

Mark Olson

As Caylent's VP of Customer Solutions, Mark leads a team that's entrusted with envisioning and proposing solutions to an infinite variety of client needs. He's passionate about helping clients transform and leverage AWS services to accelerate their objectives. He applies curiosity and a systems thinking mindset to find the optimal balance among technical and business requirements and constraints. His 20+ years of experience spans team leadership, technical sales, consulting, product development, cloud adoption, cloud native development, and enterprise-wide as well as line of business solution architecture and software development from Fortune 500s to startups. He recharges outdoors - you might find him and his wife climbing a rock, backpacking, hiking, or riding a bike up a road or down a mountain.

