The authors have constructed an expressive anthropomorphic robot called Kismet (see Fig. 1) that engages people in natural and expressive face-to-face interaction. An overview of the project can be found in Breazeal (2002a). The robot is about 1.5 times the size of an adult human head and has a total of 21 degrees of freedom (DoF). Three DoF direct the robot's gaze, another three control the orientation of its head, and the remaining 15 move its facial features (e.g., eyelids, eyebrows, lips, and ears). To visually perceive the person who interacts with it, Kismet is equipped with a total of four color CCD cameras (there is one narrow field of view camera behind each pupil, and the remaining two wide field of view cameras are mounted between the robot's eyes as shown). In addition, Kismet has two small microphones (one mounted on each ear). A lavalier microphone worn by the person is used to process their vocalizations.
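To make the breakdown above easy to check, the following Python sketch summarizes the actuator and sensor layout as a small configuration object. It is purely illustrative: the class and field names (RobotConfig, ActuatorGroup, Sensor) are hypothetical and not taken from the Kismet codebase; only the counts come from the text.

```python
# Illustrative summary of Kismet's actuator and sensor layout as described above.
# All names here are hypothetical; only the counts are taken from the text:
# 21 DoF total (3 gaze + 3 head orientation + 15 facial features),
# four color CCD cameras, two ear microphones, and one lavalier microphone.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class ActuatorGroup:
    name: str
    dof: int


@dataclass
class Sensor:
    name: str
    kind: str
    count: int


@dataclass
class RobotConfig:
    actuators: list[ActuatorGroup] = field(default_factory=list)
    sensors: list[Sensor] = field(default_factory=list)

    @property
    def total_dof(self) -> int:
        # Sum the degrees of freedom over all actuator groups.
        return sum(group.dof for group in self.actuators)


kismet = RobotConfig(
    actuators=[
        ActuatorGroup("gaze direction", 3),
        ActuatorGroup("head orientation", 3),
        ActuatorGroup("facial features (eyelids, eyebrows, lips, ears)", 15),
    ],
    sensors=[
        Sensor("narrow field-of-view cameras (behind the pupils)", "color CCD", 2),
        Sensor("wide field-of-view cameras (between the eyes)", "color CCD", 2),
        Sensor("ear microphones", "microphone", 2),
        Sensor("lavalier microphone (worn by the human)", "microphone", 1),
    ],
)

assert kismet.total_dof == 21  # matches the 21 DoF stated in the text
```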
Inspired by infant social development, psychology, ethology, and evolution, this work integrates theories and concepts from these diverse viewpoints to enable Kismet to enter into natural and intuitive social interaction with a human and to eventually learn from them, reminiscent of parent–infant exchanges. To do this, Kismet perceives a variety of natural social cues from visual and auditory channels, and delivers social signals to the human through gaze direction, facial expression, body posture, and vocal babbles. The robot has been designed to support several social cues and skills that could ultimately play an important role in socially situated learning with a human instructor. These capabilities are evaluated with respect to the ability of naive subjects to read and interpret the robot's social cues, the robot's ability to perceive and appropriately respond to human social cues, the human's willingness to provide scaffolding to facilitate the robot's learning, and how this produces a rich, flexible, dynamic interaction that is physical, affective, social, and affords a rich opportunity for learning.
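The perceive-and-respond loop described in the previous paragraph can be pictured as a simple mapping from perceived social cues to the robot's expressive channels. The sketch below is a hypothetical illustration under that reading, not Kismet's actual control architecture: the types, the affect labels, and the cue-to-expression mapping are all assumptions made for the example.

```python
# Hypothetical sketch of the perceive-and-respond loop implied by the text:
# cues arrive on visual and auditory channels, and the robot answers through
# gaze, facial expression, body posture, and vocal babbles. All names and the
# mapping below are illustrative assumptions, not Kismet's software design.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class SocialCues:
    """Cues extracted from the visual and auditory channels."""
    face_location: tuple[float, float] | None  # where the person's face appears, if seen
    vocal_affect: str                          # e.g. "praise", "prohibition", "neutral" (assumed labels)


@dataclass
class SocialResponse:
    """Signals delivered back to the human."""
    gaze_target: tuple[float, float] | None
    facial_expression: str
    body_posture: str
    vocal_babble: bool


def respond(cues: SocialCues) -> SocialResponse:
    # Orient gaze toward the detected face, if any.
    gaze = cues.face_location
    # Choose an expression from the perceived vocal affect (illustrative mapping).
    expression = {"praise": "happy", "prohibition": "sad"}.get(cues.vocal_affect, "interested")
    # Lean toward the person when engaged, otherwise stay neutral.
    posture = "lean forward" if gaze is not None else "neutral"
    # Babble back when a person is present and is not prohibiting.
    babble = gaze is not None and cues.vocal_affect != "prohibition"
    return SocialResponse(gaze, expression, posture, babble)


print(respond(SocialCues(face_location=(0.1, -0.2), vocal_affect="praise")))
```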
This paper focuses on the role of emotion and expressive behavior in regulating social interaction between humans and expressive anthropomorphic robots, whether in communicative or instructional settings. We present the scientific basis underlying our humanoid robot's models of emotion and expressive behavior, and show how these scientific viewpoints are adapted to the current implementation. Our robot is also able to recognize affective intent through tone of voice, and its implementation was inspired by scientific findings from the developmental psycholinguistics community. We first evaluate the robot's expressive displays in isolation. Next, we evaluate the robot's overall emotive behavior (i.e., the coordination of the affective recognition system, the emotion and motivation systems, and the expression system) as it socially engages naive human subjects face-to-face. © 2003 Elsevier Science Ltd. All rights reserved.

Keywords: Human–robot interaction; Emotion and expression; Sociable humanoid robot

1. Introduction
Sociable humanoid robots pose a dramatic and intriguing shift in the way one thinks about the control of autonomous robots. Traditionally, autonomous robots are designed to operate as independently and remotely from humans as possible, often performing tasks in hazardous and hostile environments (such as sweeping minefields, inspecting oil wells, or exploring other planets). Other applications, such as delivering hospital meals, mowing lawns, or vacuuming floors, bring autonomous robots into environments shared with people, but human–robot interaction in these tasks remains minimal.

  
