What makes our brain so flexible? Which mechanisms allow us to process, seemingly effortlessly, the vast amounts of sensory information streaming in on us every second? How can efficient and adaptive communication between human and machine be established?
In a public lecture series we present and discuss interdisciplinary approaches in brain research in generally accessible terms. Alternating between international, national and local contributions, we showcase a colourful spectrum of research highlights from Bremen and the surrounding region.
Co-constructing understanding – learning how to learn from infants as a new approach for teaching robots?
Prof. Dr. Britta Wrede
Will computers and robots be able to understand our world the way we humans do? From the beginning of their endeavor, AI researchers have debated whether it is possible for a computer to become “…a mind, in the sense that computers can be literally said to understand and have other cognitive states” (Searle, 1980) – also known as “strong AI” – or whether AI is just “…the science of making machines do things that would require intelligence if done by men” (Minsky), i.e. merely simulating behavior that appears intelligent – also known as “weak AI”. Since then, many new arguments have arisen and new perspectives have been formulated, pertaining among others to the scope of AI (narrow vs. general), the need for embodiment, cultural immersion, or social interaction.
In my talk I will argue that for robots to understand our world the way we do, they need to be able to learn from our explanations just as children do. More specifically, they need the ability to jointly co-construct meaning by interacting with humans. Co-construction refers to a bi-directional process in which mutual scaffolding and monitoring take place, generally in a task-oriented situation with a joint goal. I will present some results from our research on scaffolding in parent-child interactions and its modelling in human-robot interaction (HRI), enabling the robot to make sense of scaffolding behavior. More recently, for example in the TRR/SFB 318 “Constructing Explainability”, we are investigating how computers and robots can make their understanding of the world and of the interaction transparent and meaningful to the user through explanations and other strategies that address attentional, emotional and many other aspects of the interaction.
Our goal is to close the loop of scaffolding and monitoring so that humans and robots can reciprocally co-construct meaning in interaction. We believe that through these small steps a shared understanding of the world can be achieved between humans and robots.
More information can be found here: