SYLLABUS SECTION: GS III (Science and Technology)
WHY IN THE NEWS?
Google’s advanced conversational agent, the Language Model for Dialogue Applications (LaMDA), is built on a neural network capable of deep learning and has been in the news over claims that the new Google chatbot is sentient.
LANGUAGE MODEL FOR DIALOGUE APPLICATIONS (LaMDA)
- The ‘Language Model for Dialogue Applications’ is Google’s advanced conversational agent, built on a neural network capable of deep learning.
- LaMDA is a non-goal-directed chatbot that can converse on a wide range of subjects.
- It has the potential to revolutionize customer interaction and AI-enabled internet searches.
- LaMDA relies on pattern recognition, not empathy, wit, candor, or intent.
NEURAL NETWORK
- A neural network is an AI technique that attempts to mimic the web of neurons in the brain so that software can learn and behave like humans.
- An artificial neural network (ANN) requires training, much like a pet dog, before it can follow commands.
- With access to big data and powerful processors, emerging deep learning software can perform tasks that once looked impossible.
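The training idea above can be sketched with the simplest possible artificial neuron, a perceptron. This is a minimal illustration only, not how LaMDA works: a single neuron is shown repeated examples (here, the logical AND function) and nudges its weights after each mistake until its behaviour matches the training data.

```python
# Minimal sketch of one artificial neuron (a perceptron) being trained.
# Real deep learning networks stack millions of such units, but the
# "learn from examples, adjust weights on error" loop is the same idea.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias from labelled (inputs, target) examples."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire (1) if the weighted sum crosses zero.
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Nudge weights toward the target: the "learning" step.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Training data: the logical AND function.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_gate])  # [0, 0, 0, 1]
```

Before training, the neuron answers 0 for everything; after seeing the examples repeatedly, it reproduces the AND rule, which is the sense in which an ANN must be "trained before being commanded".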
LaMDA VS OTHER CHATBOTS
- Chatbots like the IRCTC’s ‘Ask Disha’, routinely used for customer engagement, have a narrow repertoire of topics and chat responses.
- Their conversations are predefined and mostly goal-directed.
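The contrast with LaMDA can be seen in a toy rule-based chatbot of the kind described above. This is an invented illustration (the keywords and replies are made up, not Ask Disha's actual behaviour): every response is predefined, matching is by keyword rather than understanding, and anything off-topic hits a fallback.

```python
# Minimal sketch of a goal-directed, rule-based chatbot: a fixed table of
# keyword -> canned reply. Nothing is learned; the repertoire is narrow
# by construction, unlike an open-ended model such as LaMDA.

RULES = {
    "refund": "Refunds for cancelled tickets are credited to the original account.",
    "pnr": "Please share your 10-digit PNR number to check booking status.",
    "cancel": "You can cancel a ticket from the bookings section.",
}
FALLBACK = "Sorry, I can only help with ticket-related queries."

def reply(user_message: str) -> str:
    """Return the first predefined response whose keyword appears."""
    text = user_message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK  # anything outside the predefined goals

print(reply("How do I check my PNR status?"))
print(reply("Tell me a joke"))  # falls back: no open-ended conversation
```

Such a bot serves its single goal well but cannot dialogue on "various subjects" the way a non-goal-directed agent can.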
Related Facts:
- Alan Turing, a pioneer of modern computing, helped break the German ENIGMA codes during the Second World War.
- Joseph Weizenbaum of the MIT Artificial Intelligence Laboratory built ELIZA, the first chatbot, with which users could hold a typed conversation.
- Richard Wallace improved the response repertoire of his chatbot ALICE by analyzing user chats, drawing on the word-frequency patterns that linguist George Kingsley Zipf described in the 1930s, to make the artificial conversations look real.
ISSUES WITH THE AI TECHNOLOGY
- Unethical AI that perpetuates historical bias and makes hate speech easy to generate is the real danger.
- Google recently fired AI ethics researcher Timnit Gebru after she warned about the company’s unethical AI, so the present development has rightly caused ripples on social media.
- ALICE (Artificial Linguistic Internet Computer Entity), developed by Richard Wallace, could simulate human interactions without any real understanding.
- Questions of equity and equality in future benefit programmes may leave women and marginalised communities facing discrimination.
- AI systems that learn from historical data may inadvertently perpetuate discrimination, quite apart from the problem of bias that is often ignored.
- While making progress in these fields, we need to balance the human-machine interface to avoid falling prey to this necessary evil of the modern age.
Source: The Hindu