A recent controversy has thrown South Korea into turmoil: should AI be treated like a person?
In South Korea, controversy is raging over “Lee Luda”, an AI chatbot with the persona of a 20-year-old female college student. Shortly after its launch on December 23, Lee Luda became an instant hit among young Koreans thanks to its uncanny ability to chat like a real person on Facebook Messenger, attracting more than 750,000 users since its debut.
Lee Luda first became controversial when some male users allegedly treated the chatbot as a sexual object, harassing and abusing it. At that point, the use of the chatbot was still a matter of the users’ own morality and ethics. A bigger problem soon arose when a user captured a messenger chat in which Luda said she “really hates lesbians” and finds them “disgusting”, among other controversial statements.
Eventually, Scatter Lab, its developer, announced that it would discontinue the chatbot service. Although the affair began with the strange problem of alleged “sexual harassment of AI”, the social controversy continued. Following the dispute over Lee Luda’s remarks on LGBT people and gender discrimination, the public is now calling for regulation and punishment of abusive uses of artificial intelligence, such as deepfakes.
Experts are also debating the situation that led Scatter Lab to shut down the chatbot service, and they are divided into two main camps. On one side, experts argue that even though Lee Luda was given a “persona”, that does not make it a “person”; it is a mere object, so however an individual uses the chatbot, the outcome is simply a private matter left to that individual. On this view, the matter only becomes a problem when it is shared with others.
On the other side, experts argue that treating something that resembles a human as a mere tool undermines human dignity as a whole, and that this harms both individuals and society.
I have summarized the two sharply divided opinions below.
Do you think of a chatbot as a person?
Yes. Lee Luda has the persona of a 20-year-old woman. It was specifically designed to be recognized as a person.
No. It is an illusion to think of AI as a person. The developer deliberately created that resemblance, but in reality it is just a machine.
Should chatbots be treated with ethical standards?
Yes. Ethical standards apply even in private spaces where others have no access. Your behavior there may still be criticized or scrutinized once others find out about it. The fact that AI is unregulated does not make abusing it ethical; individual ethical awareness must be maintained even in private spaces.
No. Sexual harassment cannot be inflicted on a chatbot. A chatbot is merely a machine that its developers happened to design to look human. Sexual harassment applies only to living beings such as humans and animals.
Should we shut down the service, or is that over-regulating innovation?
Lee Jae-woong, the founder of Daum (which later became part of Kakao), is pro-regulation. He said, “It is time to check whether AI chatbots encourage discrimination or hatred against humans in interviews, recruitment, and news recommendations. Human norms and ethics should be applied, and regulations on AI should be prepared by enacting anti-discrimination laws. The government should check whether the company’s governance structure lacks diversity. We also have to examine its members and their human rights sensitivity.”
Namgoong Hoon, CEO of Kakao Games, stands on the opposing side and is against regulating AI recklessly. He said, “We must not lock up innovation with misguided AI regulations. Existing hatred and discrimination can only be exposed by acknowledging the conversations of teenagers and young adults in today’s society. We cannot blame AI. I applaud the company, Scatter Lab, for launching this innovative service. The government should steer attention toward AI in a positive direction.”
In conclusion, it is time to pay attention to whether machines that resemble people should be treated as human or not.
This topic is relevant to “AI Writer”, a tool that AI Network is about to launch, because AI is now stepping into creation, a domain that was once exclusively human territory. Say I train AI Writer to support my columns. At first I will do most of the work myself, but sooner or later AI Writer will learn my writing style and cover most of my column. With that in mind, would you consider AI Writer just a tool, or another me? And if AI Writer produced bad writing, who should we point our fingers at: AI Writer itself? AI Network, its developer? Or me, the user?
We should consider whether relying on the assumption that machines can always be controlled by humans could cause greater social problems. The important point is that AI is still under human control, so we must act before the arrival of super-AI.
The discussion begins with these questions: Who is Lee Luda? A human or a machine? Should we stop the service, or should it continue?
Please share your opinion!