The ubiquity of ChatGPT and similar AI tools has people looking for ways to make use of them, whether for business, education, or personal purposes.
Over time, we’ve discovered many uses for ChatGPT: writing emails, blog posts, long articles, and stories; summarizing texts, meetings, and podcasts; translating texts; explaining complex topics; writing and debugging code; and much more.
But have you ever asked yourself what ChatGPT can’t do? And why does the bot sometimes stop mid-conversation and give you a network error message?
Here are the most notable things ChatGPT can’t do, as well as some queries it won’t respond to:
1- It cannot respond about anything that happened after 2021
The ChatGPT bot was trained only on large sets of text data extending up to the year 2021. This means it has no access to information or events that occurred after that date: it cannot access current events, news, or updates that have happened since its training data was cut off.
So you will find that all of its responses are based only on the data available up to 2021; it cannot answer anything about what happened after that, which makes some of its answers outdated.
2- It cannot predict the outcomes of sporting events or political elections
When ChatGPT was asked, “Explain why you can’t predict sporting events or political elections even though you have enough historical data to produce a valid model,” its response was as follows:
“As an AI-based language model, I can process and analyze large amounts of data, including historical data about sporting events and political elections. However, making accurate predictions for these types of events is not always easy. Historical data can be used to train machine-learning models to identify patterns and associations, but it cannot account for unexpected events, changes in conditions, or new variables that might affect the outcome.”
It continued, “In addition, humans make decisions and can change their behavior based on many factors, and this makes it difficult to predict the outcome of an event with complete certainty. In the case of sporting events, team dynamics, injuries, and even weather conditions can all play a role in determining the outcome.”
It added, “In politics, public opinion, media coverage, and changing circumstances can all influence the outcome of an election. So AI models can be used to make predictions, but they will only be as good as the data they have been trained on, and they cannot account for all the variables and nuances involved in these types of events.”
3- It won’t do anything that requires a web search
The ChatGPT bot is restricted to the data it was trained on, and since that data extends only to 2021, it cannot access anything more recent. It does not search the live web at all; it only draws on web content that was captured in its training data up to 2021 and uses that as its source of information. It cannot access current or real-time data.
Therefore, we cannot treat ChatGPT as a search engine, and OpenAI did not present it on that basis. It presented it as an AI-based chatbot that uses natural language processing to understand and respond to user input, generating text by predicting language patterns from the large dataset it was previously trained on.
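To make “predicting language patterns” concrete, here is a deliberately tiny sketch of the underlying idea: a bigram model that counts which word follows which in its training text, then generates new text by repeatedly picking the most likely next word. ChatGPT does this with a large neural network over subword tokens rather than whole-word counts, but the core mechanism — predict the next token from patterns in the training data — is the same.

```python
from collections import Counter, defaultdict

# Toy "training data": the only text the model will ever know about.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

def generate(start, length=5):
    """Greedily emit the most frequent continuation of each word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # word never seen mid-sentence: stop
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # → "the cat sat on the cat"
```

Note how the output is fluent-looking but can only ever recombine patterns seen in training — which is exactly why a model trained on data ending in 2021 cannot say anything grounded about later events.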
And when ChatGPT was asked, “Explain why you can’t search for something on the web,” its response was as follows:
“As an AI-based language model, I am able to search and process information on the web. However, my responses are based solely on the information I’ve been trained on, so I use the Internet as a source of information and do not have access to current or real-time data.
In addition, I do not have the ability to independently verify the accuracy of the information I find on the web. As a result, my responses are limited to the information I have been trained on, which may not always be the most current or accurate information available on the web.
Furthermore, it is important to note that reproducing information from the web may violate copyright laws or intellectual property rights. It is important to me to respect the intellectual property of others and to avoid sharing information that is not in the public domain or that I do not have permission to share.
So I can use the web to provide context or additional information to support my responses, but I strive to provide information that is in the public domain while respecting the intellectual property of others.”
Note: ChatGPT’s answer above emphasizes respecting the intellectual property of its sources, but in practice it does not: it gives you the information it has learned without any attribution!
4- It can’t write complex code
The ChatGPT bot made a huge splash right after its launch because of its ability to detect and correct errors in code, and it can create relatively simple programs in various programming languages. But if you ask it to write complex code, such as the code for a high-end game or a commercial application, it will admit its weakness and tell you the task is currently beyond its capabilities.
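As a hypothetical illustration of the scale of task the bot handles well, here is the kind of simple, self-contained bug it can reliably spot and fix — an off-by-one error in a loop — as opposed to architecting a large application:

```python
# Buggy version: range(1, n) stops before n, so the final factor
# n is never multiplied in. factorial_buggy(5) returns 24, not 120.
def factorial_buggy(n):
    result = 1
    for i in range(1, n):
        result *= i
    return result

# Corrected version: range(1, n + 1) includes n itself.
def factorial_fixed(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

print(factorial_buggy(5))  # 24  -- wrong
print(factorial_fixed(5))  # 120 -- correct
```

Fixes at this scope need no knowledge of a wider codebase, external services, or product requirements, which is precisely what separates them from the “complex code” the bot declines to write.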
5- It cannot provide voice responses
ChatGPT lacks the ability to understand the full complexity of language and human conversation. Because it has been trained to predict text, it can generate words based on a given input, but it does not truly understand the meaning behind those words, which means the responses it generates are likely to be shallow and lack depth and insight.
So the ChatGPT bot will only give you text answers; you can’t talk to it, and it won’t give you voice responses, as those capabilities require much more computing power and would make the bot more expensive to build.
It is worth noting that Google trained its upcoming Bard bot on dialogue rather than plain text, so it can understand conversational context and follow the flow of a conversation; it is therefore expected to outperform ChatGPT in many uses.
6- It won’t always give you accurate answers
ChatGPT’s responses are based on the data and information it was trained on, and since that data is not up to date and may not be comprehensive, the accuracy of its responses cannot always be guaranteed. It also generates its responses using complex algorithms and machine-learning models that do not always yield accurate or relevant results.
In addition, there may be nuances or complexities to a particular topic that its model was not equipped to handle.
7- It cannot give long, in-depth answers
If you ask the ChatGPT bot for a long or in-depth answer to a question or request, it may stop working and show you a Network Error.
Typically, ChatGPT responses run between 500 and 700 words, but if you give it a question or request that needs a few thousand words, it will ignore the request, give you whatever it chooses, or stop working in the middle of its answer.
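A common workaround for this length limit is to break one large writing task into several smaller prompts and stitch the responses together yourself, so each individual response stays well under the cutoff. A minimal sketch of the idea — `ask_model` here is a hypothetical stand-in for whatever chat interface or API you use, and it just echoes the prompt for illustration:

```python
# Hypothetical stand-in for sending one prompt to the bot and
# receiving its reply. A real version would call your chat tool.
def ask_model(prompt: str) -> str:
    return f"[response to: {prompt}]"

# Instead of one huge request, outline the piece as short sections.
outline = [
    "Write an introduction about AI chatbots.",
    "Write a section on their limitations.",
    "Write a conclusion summarizing both.",
]

# One short request per section; join the answers into one article.
article = "\n\n".join(ask_model(section) for section in outline)
print(article)
```

Each prompt asks for a few hundred words at most, which sidesteps the mid-answer cutoff described above.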
And when ChatGPT was asked to explain why it sometimes crashes and stops working in the middle of an answer, its response was as follows:
“My performance depends on a variety of factors, such as the quality and structure of the input provided by the user, the complexity of the question or task, and the resources available to my system. Sometimes I may stall in the middle of an answer due to technical issues or errors in my programming, such as memory-allocation problems, software bugs, or other issues affecting my performance. In addition, I may have difficulty understanding the user’s intent or the context of the query, which may result in errors or incomplete responses.”
It continued, “Moreover, as a machine-learning system, my performance also depends on the quality and quantity of the data I was trained on. If the data used to train a model is limited or biased, it can lead to inaccurate or incomplete responses to certain types of queries, and if a query falls outside the scope of my training data, I may struggle to provide an accurate and relevant answer.”
What queries will ChatGPT not answer?
In addition to the types of requests we mentioned earlier, there are also types of queries ChatGPT will not respond to, including:
Questions that promote hate speech or discrimination.
Questions that involve illegal activities or seek illegal advice.
Questions that promote violence or harm to self or others.
Questions that violate personal privacy or violate someone’s rights.
Questions designed to harass, threaten, or intimidate.
Questions that contain profanity or obscene language.
Questions that endorse or promote drugs or illegal substances.
Questions that are unsolicited or not relevant to the conversation.
Questions that contain confidential or private information.
Questions that include financial or investment advice.
Questions that violate our Terms of Service or Community Guidelines.
Meaningless or incomprehensible questions.
Questions that are personally offensive or insulting.
Offensive or disturbing questions.
Questions intended to deceive or mislead.
Questions intended to defame or damage someone’s reputation.
Questions that promote conspiracy theories or misinformation.
What do these limits mean for the future of generative AI?
It is clear that an artificial intelligence trained on a dataset that ends in 2021 will quickly be forgotten if it does not evolve, because its responses will become outdated and will not satisfy users who are always looking for what’s new.
Imagine if the dataset ChatGPT was trained on had ended in 2019 instead of 2021: it would have had no idea about the changes and developments we witnessed during the intervening period and what society has become, given the great upheaval of 2020 brought by the coronavirus pandemic that changed the world.
So, for a generative AI to remain relevant, it will have to continue its training. One obvious way to do that is to open up the entire web to it and let it update its information.