Anthropomorphism in human-robot interactions: a multidimensional conceptualization

Authors

Keywords:

Human-robot interaction (HRI), social robots, technology, Theory of Mind (ToM), digital agents

Abstract

As robots increasingly assume social roles (e.g., assistants, companions), anthropomorphism (i.e., the cognition that a non-human entity possesses human characteristics) plays a prominent role in human-robot interaction (HRI). However, current conceptualizations of anthropomorphism in HRI have not adequately distinguished among the precursors, consequences, and dimensions of anthropomorphism. Building on and extending previous research, we conceptualize anthropomorphism as a form of human cognition centered on the attribution of human mental capacities to non-human entities such as robots.




Published

13-06-2024

How to cite

Kühne, R., & Peter, J. (2024). Antropomorfismo en las interacciones entre humanos y robots: una conceptualización multidimensional. Observador del Conocimiento, 9(1), 39–59. Retrieved from https://revistaoc.oncti.gob.ve/index.php/odc/article/view/652

Issue

Section

Articles