AI is the Genie… What’s Your Wish?

It’s all in how you ask your question.

AI is the genie in the lamp.

It can grant wishes – but be careful how you state your wish! Just like the literal genie, AI can give you what you want, but probably not the way you want it. It is all in HOW you ask your question, or make your wish.

I’ve now tried over 100 AI-powered programs and applications. Of those, I found 5-7 that were impressive – but none as impressive as ChatGPT itself! I found plenty that were a waste of time. Just because AI is being promoted as part of a program doesn’t mean that it is good. In fact, it failed most of the time.

Below, I break down the common reasons why AI performs well, and where it fails.

Transcript

More talk about AI

So over the past few months I’ve tried about 50 different programs and applications that have all been advertised under the AI banner. Some have been extremely impressive; most, not so much. At this point in the industry, with a few exceptions, I can say: don’t let the label of “AI powered” or “AI created” sway you. Just because it’s got AI doesn’t mean that it’s good.

[introduction]

AI is a genie in a lamp. If you are familiar with the tales of genies, lamps, and wishes, then you know that how the owner of the lamp phrases the wish determines how literally, or how foolishly, the wish is fulfilled. In one story, the owner of the lamp wishes for eternal life, which is granted – but the man grows old, forever, having not asked for eternal youth.

In another story, a man asks for a sum of money, only to find out the next day that his son lost his life in a terrible, tragic accident. The son’s employer gives the man the exact sum of money he wished for.

There are three primary lessons in these tales. The first is tragic: many times wishes are made but are fulfilled in ways that we can’t imagine, either through tragedy or unintended consequences. The second is foolishness: the wishes are made without thinking, and many times the wisher has to use the last wish to undo the consequences, or even ends up in a worse state.

The third, and probably the most prevalent, is the ironic. This involves the literalness of the wish being fulfilled – the most direct path to fulfillment. The genie absolutely fulfills the wish, but in the most literal sense, which carries disastrous consequences for the wisher. Think of the man who wishes to be ruler of all the land he can see, only for the genie to leave him on a small deserted island in the middle of the ocean.

The literalness is completely unexpected by the wisher, but completely accurate in the fulfillment. Which brings us to AI.

AI is very similar to the genie. It takes very literal instructions and fulfills them. It fulfills the wishes by accessing content from billions of documents on the internet. And we can start there. An AI expert gave an example of asking AI to stop email spam – to which the AI could see one option: delete all email that comes into the inbox.

Technically, AI delivered what was asked. But there was an unintended consequence. The researcher explained that AI will always be literal and always seek the shortest distance to a solution. The genie in the lamp.
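To make the spam example concrete, here’s a minimal sketch in Python – purely hypothetical code, not from any real system – of how a literal objective can be “perfectly” satisfied in exactly the wrong way:

```python
# Toy illustration (hypothetical): an agent told to "minimize spam in the
# inbox" scores a perfect result by deleting everything.
from dataclasses import dataclass

@dataclass
class Email:
    subject: str
    is_spam: bool

def spam_count(inbox):
    """The literal objective: how many spam messages remain in the inbox."""
    return sum(1 for e in inbox if e.is_spam)

def literal_genie_filter(inbox):
    """Technically optimal under the stated objective: an empty inbox has zero spam."""
    return []  # deletes your mail along with the spam

def what_we_meant_filter(inbox):
    """The unstated intent: remove the spam, keep everything else."""
    return [e for e in inbox if not e.is_spam]

inbox = [Email("Meeting at 3", False), Email("WIN A PRIZE!!!", True)]
print(spam_count(literal_genie_filter(inbox)))   # 0 spam... and 0 mail
print(spam_count(what_we_meant_filter(inbox)))   # 0 spam, mail preserved
```

Both filters score zero spam under the stated objective; only one of them is what the wisher actually meant.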

How reliable is the content you find online? How trustworthy, accurate, or biased is the content that we find? ChatGPT, OpenAI, generative AI that creates content – it is all working from a brain full of content from the internet.

But AI is not new. What has made it the center of attention is that for the first time, there was a popular AI application, ChatGPT, that actually provided great results. It was 75-80% complete – maybe more in some cases! Other AI applications get you about 30-50% of what you want, and then you have to put in the work of “training the AI” over time. That’s what brought all of this attention in such a short amount of time: something was actually, amazingly, accurate and good – in many cases better than a Google search at finding information or completing a task.

Back in 2016, Microsoft unveiled an AI chatbot on Twitter named Tay, under the username TayTweets. It was designed to learn from other Twitter users. I know – great idea, right? I honestly don’t know how somebody didn’t raise their hand in the meeting and ask, “Wait, you know this could go horribly wrong, right?” And it did. In less than 24 hours, Tay was taught to swear, to be racist and sexist, and to deny the Holocaust.

As a side note, maybe AI is more like a human than we thought. It takes most internet users a few months to be radicalized to that level.

But fast forward a few years, and we started to see generative AI – mainly images that would mimic photography. Next came AI-generated people, and they started showing up on LinkedIn! Only a few years ago, there were companies creating millions of fake profiles on LinkedIn using AI-generated images of people.

AI could develop writing prompts and even some content, but it was always limited and poor. It could be spotted because it did not have a good grasp of grammatical concepts, or even accuracy. There were some writing applications that were serviceable, but not a replacement. But with the introduction of ChatGPT and GPT-4, we’ve had a seismic shift in the capabilities of AI. What changed was a few things:

First, the training data. ChatGPT was trained using billions of documents online – up until about 2019. The sheer amount of data was the first factor. Second, the speed of response. With modern computing power, the engine’s ability to access, process, and return recognizable information was astounding. The next version, GPT-4, was trained on even more data: not just text-based documents, but video, images, and audio recordings were added to the training of the engine.

Now, here’s where these things, while amazingly impressive in many cases, start to show problems.

I’ll start with areas where I am impressed.

First, it’s a time-saver for repetitive content. Yes, having GPT come up with a content calendar, or a 12-month email plan along with topics, subject lines, and email content, is absolutely amazing. I’ve seen it develop short articles that are very well developed. It is surprising when it brings up information that you may have overlooked or forgotten.

Many times, I get better and faster information by going to GPT instead of search. Recipes, for example. I don’t have to wade through paragraphs of terribly written content about someone’s life story – I can get exactly the information I need, immediately. (Funny enough, this is what we said about Google 20 years ago…)

Now, here’s what enables this. GPT is not “creating” new information. It is regurgitation. The information already exists in the database – the brain – gathered from billions of documents online.

While the technology to do this is impressive, it also leaves a lot of holes.

I’m not surprised to hear that GPT has passed the bar exam or some other test. GPT is not coming up with this information on its own.

Most tests are simply regurgitation of information and facts. If the relevant content exists online, then it is accessed, processed, and regurgitated for the test. That’s not a big deal, and it certainly isn’t real “artificial intelligence.” It’s machine learning – which is simply looking for patterns and reacting to them.

Second, the information that these GPT applications are trained on is the internet. The entire internet – meaning that there is no centralized, authoritative, expertly cited source. It is a mass of millions of monkeys typing on keyboards – the good and the bad. It’s all dumped into this brain.

Third, and this has come about as a significant problem, is the lack of citation. If GPT has used training data from a source, but does not cite the source, then what are the rights of the content creator?

In one case, Google’s Bard cited a study that was done by a company, but presented the information as if Google had done the study. Bard showed no indication that the information was from another website! This is happening frequently, as information is being treated as training data without a focus on citing sources. Bing, however, has made an effort to provide sources for the information it presents.

Fourth – the citations themselves. This is where machine learning shows how it works. It “learned” the patterns of cited information, and, when needed, it creates facts, sources, and citations to support the information presented, even though none of it is true. This shouldn’t be shocking, as it is a great example of machine learning! It has seen billions of uses of citations across the internet and has “learned” the format that conveys credibility – while creating disinformation and misinformation!

Fifth, there is information, there is citation, and then there is authority. Algorithms have problems with authority. Even after 25 years, Google struggles to surface the most authoritative websites or information in its results. And on YouTube, TikTok, Instagram, and other social media, it is not the most accurate or authoritative content that is rewarded, but the most popular. This is called the power law: popular content gets more popular. Or, the rich get richer. Content that actually presents true, factual, or authoritative information is not rewarded or promoted, because it doesn’t always get the engagement – which doesn’t create revenue, and so on…
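You can watch the power law emerge from a few lines of simulation. Here’s a minimal sketch (invented numbers, purely illustrative) of preferential attachment, where every new view is drawn toward content that already has views:

```python
import random

# Minimal "rich get richer" sketch (hypothetical numbers): each new view goes
# to a video with probability proportional to the views it already has.
random.seed(42)
views = [1] * 10  # ten videos start out exactly equal

for _ in range(10_000):
    # pick a video weighted by its current view count
    winner = random.choices(range(len(views)), weights=views)[0]
    views[winner] += 1

# A handful of videos end up with most of the views.
print(sorted(views, reverse=True))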

In AI, there is information presented, given, or created – and sometimes the information is incorrect. Then we have information that is cited, giving the source – though even the citation is not always correct, and is sometimes made up completely. Then there is authoritative information: information and sources that have proven to be reliable, factual, and true. And this is not what is presented in generative AI results.

Here’s what I’ve learned from trying almost a hundred AI apps and programs. The majority of them are terrible. I’d say about 4 or 5 really impressed me – but not as much as the actual GPT engine impressed me.

I have one that I have kept my account on, because it helps me generate content – which I still have to edit! It also handles scheduling, management, and content development. I use it because it is a time-saver.

Just because something says AI, does not mean it’s good. Here’s what I mean:

I tried an AI app to bring images to my podcasts – of course, the sales pitch was that it would transform my audio-only podcast into a video.

The other feature is that it would create a video podcast from a transcript. It wanted me to upload my script, and then it would create a video – but it wouldn’t use my voice. It wanted me to use an AI-trained voice, not my own. OK…

But here’s where things went off the rails. OK – great example right here. I used a phrase: “off the rails.” What does that mean to you?

I use it as a metaphor. A train derailment as a metaphor for something not going right, in fact, going horribly wrong. This is what machine LEARNING has not learned yet.

Here’s the example: in this AI video-creation program, I uploaded a script and had it create a video of one of my past podcasts. It was doing a fairly acceptable job, until I used the phrase “the bar is set high” – and guess the image it chose to use in the video: a bar with barstools and drinks. Another option could have been the bar for a high jump or a pole vault – but it went with a tavern.

Just like search, what machine learning does is learn from others’ examples on the internet – how a word or phrase is used – and then apply those uses to interpreting future concepts. It will take the context that is used the most and use it.

This is the predictable part of GPT, and why it will always produce the driest, simplest, and most vanilla content – it will always default to the bulk of what it finds as examples online. The stand-out, the amazing, the wow content – machine learning can’t distinguish this from an instruction manual. And because those amazing examples are fewer, it won’t surface them or give them extra weight – in fact, it will do the opposite. It will diminish things that are higher quality, or authoritatively contradictory, if they are outweighed by hundreds of uses that are lesser quality, similar, and incorrect. That’s machine learning.
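Here’s a minimal sketch of that defaulting behavior (toy sense labels and counts, invented for illustration): a learner that picks whichever usage dominates its examples will read “bar” as a tavern nearly every time.

```python
from collections import Counter

# Toy corpus (hypothetical counts): how the word "bar" shows up online.
corpus_senses = (
    ["tavern"] * 900          # most uses: a place that serves drinks
    + ["high-jump bar"] * 60  # the athletic sense
    + ["bar exam"] * 40       # the legal sense
)

counts = Counter(corpus_senses)

def most_common_sense(counts):
    """Pick the majority sense - the statistical shortest path to an answer."""
    return counts.most_common(1)[0][0]

print(most_common_sense(counts))  # 'tavern', even for "the bar is set high"
```

The rarer athletic sense never wins, no matter how clearly the surrounding phrase points to it – which is exactly what happened in my video.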

In another example, an AI-based program proposed that I could create clips of my podcast for promotion. I tried it and was completely underwhelmed. AI doesn’t know the difference between the intro and a great point made by a guest. Even after trying to, quote, “teach,” unquote, this program to recognize a good exchange, content, or quote, it simply could not distinguish great guest quotes from normal conversation or intro content.

I guess that’s something only people can do?

To complete the metaphor, let’s get back on track. Metaphor, nuance, context, colloquialisms, sarcasm – the ways that real people actually communicate – this is the chasm that currently separates humans from machines.

Machines will provide you with excellent how-tos and articles on topics where there are millions of documents to pull from, because this content exists by the truckload.

What it can’t do is create or distinguish wordplay, metaphor, and sarcasm. If you want boring, vanilla content, GPT will deliver.

GPT will give you intern-level content that serves a purpose. Instructional manuals, code, how-to, calendars – do you hear this list and the pattern here?

What it will not do is understand how a twist of a phrase will grab people’s attention and provide a laugh or a nostalgic reference. Because it hasn’t learned how to do that. It can only provide the most literal interpretation possible.

If you want creative, edgy, and personality in your content – get a human. AI can’t do this well – not yet. Humans are still going to write the most compelling content that will resonate with other humans.

GPT will not develop original work – it is only developing the sum of what it has learned, in proportion to what it has learned, along with some guardrails to keep it in line.

And here’s the issue with using all of this training data from the internet.

In 2018, Google was training image-recognition software – machine learning. It used publicly available image data from Open Images and ImageNet. People were asked to label the images to describe what they saw. However, the images were largely US- and Western-centric. As a result, only images where the bride wore a white dress were recognized as a wedding. Images of weddings from other cultures, where the brides wore other types and colors of dresses – such as an Indian ceremony where the bride wears a brightly colored sari – were incorrectly labeled as costumes.

Interestingly, the US makes up a small percentage of the global population, but the image sets used were 90% US- and Western-centric – exposing a major problem in using training data that is limited, or specific to one people group or culture! Without diversity – especially diversity in proportion to the actual global population – machine learning is deeply flawed and unreliable.

So the training data is the primary consideration when building any machine learning model. And this is what many technology, privacy, and human rights advocates are demanding: transparency into the training models, as they will always carry the bias of the training data that was provided and the bias of those who wrote the rules of the programming.

This is why the machine learning content will always follow a “safe” path, why it will not be original or inventive – it can only recreate what it has been taught.

An excellent example was posted on LinkedIn this week:
Tom Morton asked ChatGPT to write catchy headlines to sell hot cross buns. Here are the typical responses it provided:
“Get Your Hot Cross Buns Here – Fresh and Delicious!”
“Easter Treats Just Got Better with Our Hot Cross Buns”
“Warm Up Your Morning with Our Hot Cross Buns”
“Limited Time Offer: Hot Cross Buns Now Available!”
“Our Hot Cross Buns Will Leave You Wanting More”

Then he showed what a human copywriter developed for a press ad for Tesco:  “Go Out With All Buns Glazing.”

Humor, wordplay, nuance, sarcasm, metaphor, alongside cultural, literary, and nostalgic references. These are the tools that – in the hands of a human – create amazing content, write heartbreaking stories, and motivate us to greatness. It will be a long time before machine learning learns to do this.