Understanding How ChatGPT Actually Works

You’ve already heard about ChatGPT. It’s an advanced AI capable of generating text in multiple styles. It can pass a college entrance exam or even write a paragraph in the style of famed author Stephen King. If you’ve tried ChatGPT, you already know what it can do and are probably blown away. But just how does ChatGPT work? Is this the dawn of sentient machines?

Looking deeper into how the software works uncovers interesting details. The fact is that although it seems near-human in its intelligence, ChatGPT is more of an advanced word-prediction engine than anything else. Let’s take a closer look at the powerful tech behind the sensational new chatbot.

Understanding The Reality

First and foremost, ChatGPT is indeed impressive, though much of what the software does is widely misunderstood. While ChatGPT can generate mostly coherent text, it is drawing on pre-existing information rather than producing genuinely new knowledge. Its answers are pieced together from patterns learned across tens of millions of articles, books and other online sources, so while the wording may be new, the ideas rarely are.

What ChatGPT actually does is predict, word by word, the most likely response to a query, blending what it has learned into readable paragraphs. The fact that the software can blend that material so fluently is impressive, though it should never be forgotten that everything it writes is derived from existing sources. So although ChatGPT could technically pass a college entrance exam, passing its work off as your own would raise serious academic-integrity concerns. As far as handling customer support at an NZ online casino is concerned, however, it doesn’t get much better than ChatGPT.

Word Prediction Technology

As far as the tech itself is concerned, ChatGPT uses a Large Language Model (LLM). LLMs are especially impressive because they can rapidly calculate the statistical relationships between words across enormous bodies of text, then use those relationships to predict which word should come next. Earlier language models were often vague and confusing, producing text that read as though it had been mashed together with little to no refinement.
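To make the idea of word prediction concrete, here is a toy sketch in Python. It is not how GPT actually works under the hood (GPT uses a neural network with billions of parameters, trained on vast amounts of text); this tiny example simply counts which word tends to follow which in a made-up snippet, then "predicts" the most frequent follower. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in some training text.
# A real LLM learns far richer relationships, but the core job is the same:
# given the words so far, guess the most likely next word.

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Tally how often each word is followed by each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent next word seen in the training text."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # e.g. 'cat', whichever follower was seen most often
print(predict_next("sat"))  # 'on'
```

Scale this idea up from counting word pairs to a neural network weighing every word in the conversation so far, and you have the rough intuition behind an LLM.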

Needless to say, ChatGPT represents the current peak of what an LLM is capable of. There are, however, still flaws in the system that can’t be ignored. The biggest and most obvious flaw is that ChatGPT will often generate false or partly untrue answers.

ChatGPT Is Very Fallible

As with all AI systems of this kind, ChatGPT is incapable of telling fact from fiction. Since the system simply predicts plausible-sounding text in response to a query and never checks whether that text is correct, false answers are common.
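The sketch below illustrates why, under the assumption that generation works by sampling from a probability distribution over candidate next words. The words and scores here are invented for illustration; a real model computes them with a neural network. The key point is that nothing in the loop asks whether the output is true, only how plausible each word looks.

```python
import math
import random

def softmax(scores):
    """Turn raw scores ("logits") into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word scores after the prompt "The capital of Australia is"
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [2.1, 1.9, 0.5]  # a plausible-but-wrong answer can outscore the right one

probabilities = softmax(logits)
choice = random.choices(candidates, weights=probabilities, k=1)[0]

print(dict(zip(candidates, [round(p, 2) for p in probabilities])))
print("Model says:", choice)  # sometimes "Sydney": fluent, confident, and wrong
```

Because the model is rewarded for plausibility rather than accuracy, a confidently worded wrong answer can easily win out over the correct one.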

A number of companies and apps are already incorporating ChatGPT, eager to jump on a runaway trend. However, there is already more confidence in the tech than there should be. That isn’t to say ChatGPT isn’t enormously useful; it is. But relying on it so confidently isn’t particularly smart. Given that any generated answer may be wrong, for the time being the tech is best thought of as an impressive novelty rather than a trusted source.
