The Year A.I. Ate the Internet

Let's call 2023 the year a lot of us figured out how to interact, fabricate, deceive, and work together with robots.

When OpenAI released ChatGPT, an app that lets users hold strikingly human conversations with a computer, a little over a year ago, people seemed at last to grasp both the potential and the risks of artificial intelligence. The chatbot reached a million users in five days. It reached a hundred million monthly users in just two months, a figure that has since nearly doubled.

Soon after ChatGPT's release, Anthropic unveiled Claude, a "next generation AI assistant for your tasks, no matter the scale"; Microsoft integrated OpenAI's model into its Bing search engine; Meta debuted LLaMA; and Google unveiled its own chatbot, Bard. All of a sudden, the Internet seemed almost alive. It wasn't that artificial intelligence (A.I.) was novel in itself; we hardly notice it when a credit-card company flags suspicious activity, Netflix recommends a movie, or Amazon's Alexa gives us a rundown of the day's events.

Those A.I.s operate in the background, often in scripted and brittle ways; chatbots, by contrast, are responsive and spontaneous. They are also erratic. When we ask for their help, quizzing them about things we don't know or seeking creative assistance, they frequently produce things that did not exist before, seemingly out of thin air: poems, literature reviews, essays, research papers, three-act plays, all in plain, unmistakably human language. It is as if the god in the machine had been made in our likeness. Ask ChatGPT to write a song about self-driving cars in the style of Johnny Cash, and you might get a verse like this:

I'm never alone when I'm riding alone,
Steady as a stone, my AI ridin' shotgun.
Under the enormous sky, on the never-ending road,
Footprints of the past, a ghost driver behind the wheel.

Though they're unlikely to win many awards, at least not yet, chatbots like ChatGPT make our smart devices sound dumb. They can swiftly summarize complex legal and financial documents; they are beginning to diagnose medical conditions; they are fluent in coding languages as well as foreign ones; and they can pass the bar exam without studying. All this might lead us to believe that A.I. models are genuinely intelligent rather than merely engineered to seem so, and that they understand the meaning and implications of the content they serve up. They don't. The linguist Emily Bender and her three co-authors call them "stochastic parrots." It's worth remembering that before artificial intelligence could appear intelligent, it first had to absorb a vast corpus of human intelligence, and that before we could learn to collaborate with the machines, the machines had to be taught to work with us.

To even begin to understand how these chatbots operate, we had to learn new terms: "large language models" (L.L.M.s), "neural networks," "natural-language processing" (N.L.P.), "generative artificial intelligence." The general outline is now clear: the chatbots' makers crawled the Internet and analyzed it with a type of machine learning loosely modelled on the human brain, statistically linking words according to which words and phrases tend to go together. Much about the results remains mysterious, though, including the sheer inventiveness on display when chatbots "hallucinate."
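To make that word-by-word statistical linking concrete, here is a toy sketch in Python. The three-line corpus and everything in it are invented for illustration, and real L.L.M.s use vast neural networks rather than simple counts, but the underlying prediction task, guessing the next word from the ones before it, is the same in spirit.

```python
import random
from collections import defaultdict, Counter

# A toy corpus standing in for the crawled Internet (illustrative only).
corpus = (
    "the road goes on and the road goes home "
    "the driver sings and the radio plays "
    "the road is long and the night is long"
).split()

# Count which word tends to follow which (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=8):
    """Sample a continuation word by word, weighted by observed counts."""
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:  # no observed successor; stop generating
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the road goes on and the night is long"
```

Run it a few times and the toy model will fluently produce sentences that no one ever wrote, a crude glimpse of why, at full scale, fluency and fabrication arrive together.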

Google's Bard, for instance, invented details about the James Webb telescope. Microsoft's Bing insisted that the singer Billie Eilish performed at the 2023 Super Bowl halftime show. An attorney whose federal court brief turned out to be full of fake citations and made-up judicial opinions supplied by ChatGPT said, "I did not comprehend that ChatGPT could fabricate cases." (The court imposed a five-thousand-dollar fine.) OpenAI concedes as much in the fine print beneath the chat window: "ChatGPT can make mistakes. Consider checking important information." Strangely, a recent study suggests that ChatGPT has actually become less accurate at certain tasks over the past year. The researchers suspect the training data may be to blame, but they can only speculate, because OpenAI has not disclosed what data it used to train its L.L.M.

Even knowing that chatbots are fallible, high-school and college students remain among their most enthusiastic early adopters, using them to research and write papers, solve problem sets, and write code. (During finals week last May, a student of mine noticed that nearly every laptop in the library was open to ChatGPT.) In a recent Junior Achievement survey, more than half of young people said that using a chatbot to help with homework amounted to cheating. Nearly half said they planned to use one anyway.

School administrators were equally torn, unable to decide whether chatbots are learning aids or engines of deception. In January, New York City Schools Chancellor David Banks banned ChatGPT; a spokesperson told the Washington Post that the chatbot "does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success." Four months later, Banks reversed course, calling the ban "knee-jerk" and fear-driven, and saying that it "overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial." Then a Texas A&M professor decided to ask ChatGPT which of his students had used it to cheat. When the bot claimed that everyone in the class had, the professor threatened to fail them all. The problem was that ChatGPT was hallucinating. (Other A.I. programs promise to identify cheaters; the market for chatbot detection is booming.) We are all, in a way, that professor, beta-testing products we may misjudge, overestimate, or simply fail to understand.

Artificial intelligence already produces ad copy, sports stories, and financial reports. In March, Greg Brockman, OpenAI's co-founder and president, cheerfully predicted that chatbots would eventually help write movie scripts and rewrite scenes that audiences found objectionable. Two months later, the Writers Guild of America went on strike, demanding a contract that would, in effect, shield us all from bad movies written by artificial intelligence. The writers sensed that any A.I. platform capable of generating passable work across a wide range of human endeavors might pose an existential threat to creativity itself.

In September, while the screenwriters were negotiating an end to their five-month strike, having convinced the studios to swear off A.I.-written scripts, the Authors Guild filed a class-action lawsuit against OpenAI, claiming that the company had exploited their copyrighted work without permission or payment when it scraped the Internet. Given OpenAI's less-than-open policy on its training data, the writers couldn't be certain that the company had appropriated their books, but they pointed to ChatGPT's early habit of answering questions about particular books with exact quotes, "suggesting that the underlying LLM must have ingested these books in their entireties." (The chatbot has since been modified to say, "I am unable to supply exact quotes from works protected by copyright.") Meanwhile, some companies now sell prompts that let customers mimic famous authors. And a writer who is easily impersonated may not be worth very much.

According to a July report by the literary nonprofit PEN America, generative A.I. "supercharges" the spread of disinformation and online abuse, posing a threat to free expression. "There is also the potential for people to lose trust in language itself, and thus in one another," the report noted. These risks now extend beyond the written word. Two months after OpenAI debuted DALL-E 2, an engine that turns text into synthetic images, Stability AI released a similar tool, Stable Diffusion. According to the Center for Artistic Inquiry and Reporting, A.I.-generated art is "vampirical, feasting on past generations of artwork"; some have called the technology "the greatest art heist in history." And though making images this way can be delightful, especially for those of us without artistic talent, the ability to render realistic scenes of events that never happened is also a threat to the factual record. Anyone can prompt an image generator for a picture of a man stuffing a ballot box, or of demonstrators clashing with police. (I know, because I tried, and the results were fairly convincing.)
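To give a sense of how low the barrier has become, here is a minimal sketch using Hugging Face's open-source diffusers library. The checkpoint name is one widely used public release of Stable Diffusion; the prompt, file name, and settings are placeholders, not the exact ones I tried.

```python
# A minimal text-to-image sketch; assumes the `diffusers` and `torch`
# packages are installed and an NVIDIA GPU is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a widely used public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# One sentence of text in, one photorealistic image out.
image = pipe("a crowded polling place on election day, news photo").images[0]
image.save("generated_scene.png")
```

A few seconds of GPU time later, a realistic scene that never occurred exists on disk.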

Efforts are under way to watermark images produced by artificial intelligence, but researchers have yet to devise a watermarking scheme that can survive commonly available editing tools, and they have also managed to attach counterfeit watermarks to authentic images. OpenAI, for its part, still lets users remove the watermark from its images.
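The cat-and-mouse dynamic is visible even in a toy setting. The sketch below is not any production watermarking scheme; it hides a made-up watermark in the least-significant bits of an image's pixels, then shows that the mild averaging involved in ordinary resizing or recompression reduces recovery to a coin flip.

```python
# Toy illustration of watermark fragility; all values are invented.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)      # "photo"
watermark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)    # hidden bits

# Embed: overwrite each pixel's least-significant bit with a watermark bit.
marked = (image & 0xFE) | watermark

# "Share" the image: a mild 2x2 box average, akin to resizing or recompression.
blurred = (
    marked[0::2, 0::2].astype(int) + marked[1::2, 0::2]
    + marked[0::2, 1::2] + marked[1::2, 1::2]
) // 4

# Try to read the watermark back out of the processed image.
recovered = (blurred & 1).astype(np.uint8)
survival = (recovered == watermark[0::2, 0::2]).mean()
print(f"watermark bits surviving a mild blur: {survival:.0%}")  # ~50%, chance
```

Production schemes are far more sophisticated, but the researchers' findings suggest they remain vulnerable to the same kind of everyday processing.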

In March, more than a thousand technologists, among them Elon Musk and the Apple co-founder Steve Wozniak, signed a letter urging A.I. companies to pause the development of their most advanced technologies for six months, to make room for regulation. It asked, in part: "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"

These were not abstract worries. It took a research team at IBM just five minutes to coax ChatGPT into writing highly convincing phishing e-mails. Other researchers have used generative A.I. to produce malware that evades security controls, suggesting that it could become a handy tool for cybercriminals. And, according to Goldman Sachs, A.I. could soon displace the equivalent of three hundred million full-time jobs.

Unsurprisingly, there was no pause, and no significant regulation followed. Instead, at the end of October, the Biden Administration released the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," a lengthy document that reads more like a wish list than an order, and that suggests the executive branch is attempting a delicate balancing act between A.I.'s risks and its opportunities. A week later, OpenAI unveiled a new line of products, including an A.I. model that can take in a prompt as long as a three-hundred-page book, a kit for building custom chatbots, and a product dubbed Copyright Shield, which promises to cover developers' legal costs if they are accused of copyright infringement.

Using these new tools, I built two chatbots with ChatGPT: one that lists restaurants in a given location that can accommodate particular food allergies and dietary restrictions, and another that determines which medications are unsafe to take together. Building them was easy and intuitive, but I had no idea what algorithms lay behind them, where their training data came from (and thus whether I was violating anyone's copyright), or even whether the information they produced was accurate. Nor did I know how much computing power I was using, or what my tinkering might be doing to the environment. Still, the bots were kind of cool, and the kind of thing people might pay for.
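For what it's worth, the plumbing beneath such a bot amounts to a few lines of code. Here is a minimal sketch using OpenAI's Python client; the model name, the instructions, and the example question are placeholders, not my actual bots.

```python
# A minimal custom-chatbot sketch; assumes the `openai` package (v1+)
# is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You help diners with food allergies. Given a city and a list of "
    "allergies, suggest restaurants that can accommodate them, and "
    "always advise the user to confirm directly with the restaurant."
)

def ask(question: str) -> str:
    """Send one user question to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Where can I eat in Boston with a severe peanut allergy?"))
```

Notice that nothing in those lines discloses the model's algorithms, its training data, or the accuracy of its answers, which is precisely the opacity I just described.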

Generative A.I. will almost certainly continue to develop commercially. It will touch ever more consequential processes: hiring, college admissions, psychotherapy, drug development, radiology. Businesses will build it into the next generation of hardware. Samsung, for instance, is expected to include generative A.I. in the flagship phones it unveils in January. And Sam Altman, the OpenAI co-founder who was recently, and abruptly, ousted from and then restored to the chief executive's job, has reportedly been working with the famed Apple designer Jony Ive on "the iPhone of artificial intelligence." We may one day look back on 2023 nostalgically, as the time before intelligence became just another commodity.
