This article is brought to you by NetEase Smart Studio (WeChat ID: smartman163). Stay tuned to AI and explore the next wave of technological transformation!
[NetEase Smart News, August 18] Everyone has experienced forgetting at some point—like forgetting the birthday of someone you care deeply about, or perhaps misplacing everyday items like car keys. Even individuals with seemingly photographic memories aren’t immune. For instance, someone who can recall the sequence of a shuffled deck of cards within 20 seconds might still forget where they left their keys. It seems humans are never fully in control of their own memory.
For both humans and artificial intelligence (AI), forgetting remains a complex issue. Researchers are exploring various approaches to robotic memory. This presents not only technical challenges but also raises serious questions about privacy, legality, and ethics. Imagine a home robot witnessing you secretly smoking, despite promising your partner you had quit. Or worse, what if it recorded you committing a crime? These scenarios highlight a crucial question: who should have the authority to erase a robot's memory?
Before answering that, researchers must first determine the optimal method for AI to manage forgetting effectively.
Why do humans forget?
A common analogy suggests that our brains are like containers that eventually fill up, forcing us to discard old memories to make room for new ones. However, there are rare individuals with hyperthymesia, an extraordinary ability to recall nearly every detail of their lives. Their memories never seem to "run out of room," which shows the filling-up theory doesn't tell the whole story.
So, if forgetting isn't merely about creating space, then why do we forget? One explanation is that memory serves not just to store information but to help us interpret the world. Thus, we tend to retain information that is useful, significant, or emotionally charged while letting go of less relevant details. Studies indicate that we're better at remembering conflicting information than repetitive data. Other factors influencing memory retention include the importance, novelty, and emotional weight of an event. Consider the attacks on September 11, 2001—many vividly recall where they were and what they were doing that day.
How can robots be programmed to forget?
Computer memory is often defined as the capacity to store information and the physical hardware that houses it. When a task is completed, a computer’s working memory discards the data, freeing up resources for new tasks. This principle applies to AI as well, yet here lies a key distinction between humans and machines: humans are far more adept at deciding when to keep or discard information.
Machine learning algorithms struggle with this balance, often failing to recognize when to retain old data or discard outdated information. Connectionist AI, which relies heavily on neural networks modeled after the human brain, encounters several "forgetting" issues. One major concern is overfitting, where a learning system stores excessively detailed information from past experiences, impairing its ability to generalize and predict future events. Another challenge is catastrophic forgetting, where new information erases previously learned knowledge.
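To make catastrophic forgetting concrete, here is a toy sketch (my own illustration, not any system described above): a tiny NumPy network is trained on one task and then on a second, unrelated task, with no rehearsal of the first. Because the second task's gradients overwrite the same weights, accuracy on the first task typically falls back toward chance.

```python
# Toy illustration of catastrophic forgetting: a small NumPy MLP learns task A,
# then learns task B, and its accuracy on task A collapses because the new
# gradients overwrite the weights that encoded the old task.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, rule):
    """Generate 2-D points with a binary label decided by `rule`."""
    X = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = rule(X).astype(np.float64).reshape(-1, 1)
    return X, y

task_a = lambda X: X[:, 0] > 0.0   # task A: sign of the first coordinate
task_b = lambda X: X[:, 1] > 0.0   # task B: sign of the second coordinate

# One hidden layer, tanh activation, sigmoid output, binary cross-entropy loss.
H, lr = 16, 0.5
W1 = rng.normal(0, 0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, size=(H, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    z = np.clip(h @ W2 + b2, -30, 30)     # clip logits for numerical safety
    return h, 1.0 / (1.0 + np.exp(-z))

def train(X, y, steps=2000):
    global W1, b1, W2, b2
    for _ in range(steps):
        h, p = forward(X)
        dz2 = (p - y) / len(X)             # gradient of BCE w.r.t. output logit
        dW2, db2 = h.T @ dz2, dz2.sum(0)
        dz1 = (dz2 @ W2.T) * (1 - h ** 2)  # backprop through tanh
        dW1, db1 = X.T @ dz1, dz1.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

def accuracy(X, y):
    _, p = forward(X)
    return float(((p > 0.5) == (y > 0.5)).mean())

Xa, ya = make_task(500, task_a)            # training data, task A
Xa_test, ya_test = make_task(500, task_a)  # held-out test data, task A
Xb, yb = make_task(500, task_b)            # training data, task B

train(Xa, ya)
print("task A accuracy after learning A:", accuracy(Xa_test, ya_test))
train(Xb, yb)                              # no rehearsal of task A
print("task A accuracy after learning B:", accuracy(Xa_test, ya_test))
```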
In the initial stages of training, artificial neural networks can develop undesirable activation patterns that hinder future learning. An alternative approach to memory storage involves symbolic representations, where knowledge is expressed as logical facts (such as "birds can fly," "Tweety is a bird," thus "Tweety can fly"). These structured representations can be deleted much like files on a computer.
A robot's memories range from raw sensory data (camera footage) to logical facts stored in a knowledge base ("Christmas is December 25th").
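By contrast with a neural network's weights, symbolic memories can be erased selectively and completely. The following minimal sketch (an illustration under my own assumptions, not any particular robot's architecture) stores facts as triples in a small knowledge base, derives "Tweety can fly" from the two facts above, and then forgets one fact with a single deletion.

```python
# Minimal sketch of a symbolic knowledge base: each fact is a discrete,
# inspectable entry, so a single memory can be removed as easily as a file.
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    facts: set = field(default_factory=set)   # set of (subject, relation, object) triples

    def remember(self, subject, relation, obj):
        self.facts.add((subject, relation, obj))

    def forget(self, subject, relation, obj):
        self.facts.discard((subject, relation, obj))   # deletion is exact and total

    def can_fly(self, name):
        # Toy inference: X can fly if X is_a Y and Y can_do fly.
        kinds = {o for (s, r, o) in self.facts if s == name and r == "is_a"}
        return any((kind, "can_do", "fly") in self.facts for kind in kinds)

kb = KnowledgeBase()
kb.remember("bird", "can_do", "fly")     # "birds can fly"
kb.remember("Tweety", "is_a", "bird")    # "Tweety is a bird"
print(kb.can_fly("Tweety"))              # True: "Tweety can fly" is derived

kb.forget("Tweety", "is_a", "bird")      # erase one memory; nothing else changes
print(kb.can_fly("Tweety"))              # False
```

The design point is that every symbolic memory has a single, addressable location, so erasure is exact, whereas a fact learned by a neural network is spread across many weights and cannot simply be deleted.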
What should robots forget?
Understanding how the human brain decides what to remember and what to discard is vital for developing superior AI systems. Like humans, AI should prioritize valuable information while disregarding irrelevant data. However, determining relevance involves more than just the task at hand—it also includes ethical, legal, and privacy considerations.
Chatbots can assist with medical diagnoses, smart home devices monitor our daily activities, and security robots patrol using cameras and thermal imaging. All these functions generate vast amounts of data. Take Amazon’s Alexa, for example. Recently, law enforcement requested access to data allegedly collected by a suspect’s Echo device in a murder case.
Consider the ethical implications of sex robots. Should they remember their customers' actions? Who owns this data, and who has the right to view or delete it? These are challenging questions that underscore the importance of teaching AI when and how to forget.
(Source: The Conversation. Compiled by Christopher Stanton. Translated and edited by NetEase Smart Platform intern Xiao Ke.)