John Nash, a famous American mathematician, is best known for the Nash Equilibrium, a fundamental concept in game theory. He contributed to many areas of mathematics and economics, and his work on game theory earned him the Nobel Memorial Prize in Economic Sciences in 1994.

Those who have seen the movie “A Beautiful Mind,” in which Russell Crowe portrayed John Nash, know that he suffered from paranoid schizophrenia. Despite his genius in mathematics, he experienced hallucinations and lived through delusions. In the early stages of his diagnosis, he was on psychiatric medication and frequently hospitalized. However, from 1970 until his death in 2015, he neither took medication nor was hospitalized again.

It was common for him to face mockery, rejection, and fear because of his mental illness, yet his work and contributions found many applications and benefited humanity.

ChatGPT, and LLMs (Large Language Models) in general, are AI models trained on vast amounts of data to perform text-processing tasks. At their core, they predict the next word in a text. The quality and realism of the generated text depend on many factors, most importantly the quality and characteristics of the training data. If a model is trained mostly on fabricated data, its responses will also be predominantly fabricated, per the rule “Garbage in, garbage out.”
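The core idea of next-word prediction can be sketched with a toy bigram model. This is only an illustration, not how real LLMs work internally (they use neural networks trained on billions of tokens), but it makes both points above concrete: the model predicts the next word from statistics of its training data, and if that data is garbage, its predictions will be too.

```python
from collections import Counter, defaultdict

# Toy "training data": the model only knows what this text tells it.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — the most frequent follower of "the"
```

Swap in a corpus full of wrong statements and `predict_next` will faithfully reproduce the errors: the model has no independent notion of truth, only of what tends to follow what in its data.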

When Arabs try to learn something new, they find that English sources are much better than Arabic ones (unfortunately), and that Arabic content on the internet is weak and full of nonsense.

If the Arabic data the language model was trained on is full of errors and nonsense, don’t expect it to distinguish between truth and nonsense on its own. Naturally, its responses will also be filled with nonsense.

But what if the model is trained on a massive amount of data, enabling it to excel across a wide range of knowledge and sciences and to pass more than 30 official exams with near-excellent grades, yet at the same time it doesn’t know how to say “I don’t know,” and sometimes spouts nonsense for one reason or another? Moreover, the field is still in its infancy, and its rapid rate of development makes it hard to predict what it will be capable of in the coming months.

Human reactions tend to be distributed along a normal (bell) curve. When something revolutionary like LLMs and ChatGPT appears, with its applications and significant impact on our lives, roughly 2% of people at the far right of the curve will defend it fervently and believe passionately in its potential to create a utopia. Similarly, the small minority at the far left will doubt, attack, and ridicule it. The remaining 96% fall in between, seeing it as a technology with pros and cons to varying degrees.

If everyone had been overly optimistic and believed in everything John Nash said, we would all be living in his hallucinations now. And if everyone had ridiculed and dismissed him, no one would have benefited from his work in the field where he was a genius. But because the majority try to use their judgment, taking the good and discarding the bad, we benefited from his work, and he received treatment and stabilized.

Humans will always fall under this distribution. There will always be extremists in skepticism and belief, with the majority spread in between. But each person’s fate and life are always tied to their individual decisions and awareness, which determine their place under the curve.

ChatGPT, advanced language models, and AI in general are just another form of John Nash’s story: same content, different form and details. They are here to stay, will evolve further, become more accurate, and acquire even more powerful capabilities. Your decision on how to accept and deal with this will shape your future life. Try to find your place in the middle of the curve, knowing what to believe and how to benefit from it, and when to criticize and reject.

General advice for using ChatGPT: learn how to ask and use it (prompt engineering), and think critically about the responses it gives you, especially when the context demands it. But outright rejecting the results, and mocking and doubting them, will not harm the technology or prevent its development; it will harm you by keeping you behind in using it to your advantage. The model won’t be upset if you mock it and share screenshots of its hallucinations; you will be the one who’s upset.
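To make the prompt-engineering advice concrete, here is a small illustration of the difference between a vague request and a structured one. Both prompt texts are invented examples for this article, not taken from any official guide; the general ingredients (role, task, audience, format, and an instruction to admit uncertainty) are common prompt-engineering practice.

```python
# A vague prompt leaves the model to guess what you want.
vague_prompt = "Tell me about sorting."

# A structured prompt states a role, a concrete task, the audience,
# the desired format, and asks the model to admit uncertainty.
engineered_prompt = (
    "You are a programming tutor. Explain the quicksort algorithm "
    "to a beginner in three short paragraphs, then show a short "
    "Python example. If you are unsure about anything, say so."
)

print(len(vague_prompt), len(engineered_prompt))
```

The same model, given the second prompt, has far less room to wander, which is exactly what "learning how to ask" means in practice.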

In conclusion, these are just some thoughts I’ve been having and wanted to share, even though they might not be well-organized enough.
