An AI chatbot’s ability to impersonate a human being shows how far the technology has come. Google has created a language model, LaMDA, which one of its engineers claims has become sentient and begun reasoning like a human being, sparking considerable debate around AI ethics.
LaMDA, or Language Model for Dialogue Applications, is a machine learning language model created by Google as a chatbot that mimics humans in conversation. What has drawn grave concern is the claim that the model has achieved sentience, or independent self-aware consciousness. These claims were made by Google engineer Blake Lemoine himself. Contradicting him, Google vice-president Blaise Agüera y Arcas and Jen Gennai, head of Responsible Innovation, dismissed Lemoine’s claims, and the company placed him on paid administrative leave for breach of confidentiality.
In January 2022, Google itself warned that a chatbot AI’s ability to impersonate a human being could be problematic if people don’t realize they are not talking to a real person. The bot could even serve someone’s nefarious intentions to ‘sow misinformation’ by impersonating specific individuals’ conversational styles. This has led some critics to accuse Google of engaging in social engineering and censorship.
This chatbot system (Google LaMDA) is based on advanced large language models that mimic human speech by learning from text drawn from the internet. Some believe the model has “matured” beyond merely producing grammatical sentences. While conversations with the bot tend to revolve around specific topics, they are often open-ended, meaning that they can start in one place and end up somewhere else, traversing different topics and subjects. This fluid quality gives the bot an edge over conventional chatbots.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics. I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe we at Google shouldn’t be the ones making all the choices,” Lemoine told the Washington Post.
Even if Google LaMDA is not sentient, the very fact that human beings can perceive it as sentient is what demands attention. “Language might be one of humanity’s greatest tools, but like all tools, it can be misused. Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use,” Google wrote in a blog post.
Google has created an AI model that can be trained to read many words, paying attention to how those words relate to one another, and then predict what word it thinks will come next. But unlike most models, what makes its architecture different is that it was trained on dialogue. This could pose a serious threat to humans, as it can be used to sow misinformation by impersonating specific individuals. To have any chance of protecting their privacy, people might simply avoid Google products, which some regard as among the greatest sources of personal data leakage.
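To make the core idea concrete — learning from text how words follow one another, then predicting the next word — here is a deliberately simplified sketch. It uses plain bigram counts rather than the transformer attention mechanism LaMDA actually relies on, and the tiny “dialogue” corpus is invented purely for illustration; it is not Google’s code or data.

```python
# Toy next-word predictor: count which word follows which in a
# (made-up) dialogue corpus, then predict the most frequent successor.
# Real models like LaMDA use neural attention over billions of words;
# this bigram sketch only illustrates the "predict the next word" loop.
from collections import Counter, defaultdict

# Invented training "dialogue" (assumption for demonstration only).
dialogue = (
    "hello how are you . "
    "i am fine how are you . "
    "i am happy you are here ."
).split()

# Count how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(dialogue, dialogue[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word after `word`, or None if unseen."""
    counts = follow_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("how"))  # "are" — the only word that follows "how" here
print(predict_next("are"))  # "you" — follows "are" twice, "here" once
```

Scaled up by many orders of magnitude, and with counts replaced by learned neural weights, this same predict-the-next-word objective is what lets such models produce fluent, human-sounding dialogue — including, worryingly, in someone else’s conversational style.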