Table of Contents

[1] XIANG An-ling. Without Trust, without Cooperation: A Study on Sustainable Trust Mechanisms in Human-Computer Interaction[J]. FUTURE COMMUNICATION, 2024, (02): 29-41.

Without Trust, without Cooperation: A Study on Sustainable Trust Mechanisms in Human-Computer Interaction

Future Communication (Journal of Communication University of Zhejiang) [ISSN: 2096-8418 / CN: 33-1334/G2]

Volume:
Issue:
2024, No. 02
Pages:
29-41
Column:
Intelligent Communication
Publication date:
2024-04-20

Article Info

Title:
Without Trust, without Cooperation: A Study on Sustainable Trust Mechanisms in Human-Computer Interaction
Article ID:
2096-8418(2024)02-0029-13
Author(s):
XIANG An-ling
(School of Journalism and Communication, Minzu University of China, Beijing 100081)
Keywords:
generative artificial intelligence; human-computer interaction; sustainable trust; influencing factors
CLC number:
TP18-02
DOI:
-
Document code:
A
Abstract:
Sustainable trust is a key element in ensuring high-frequency human-machine interaction and efficient collaboration. Based on computational grounded theory, this article codes and analyzes 7,235 items of ChatGPT user feedback and distills from them the factors that influence sustainable trust. The study finds that machine factors (usability, ease of use, affordance, safety, etc.) account for the largest share, followed by user factors (technophobia, need fit, media literacy, psychological expectations), while task factors (critical errors, task complexity, violation costs) account for a comparatively small share. Critical task failures, machine safety, and the fit with user needs have the most significant impact on users' trust levels. Drawing clear and complementary boundaries of responsibility between humans and machines, hedging output uncertainty with algorithmic explainability, and guiding users to adjust their psychological expectations by surfacing reward and penalty mechanisms in the interface all support the dynamic calibration of sustainable trust.
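To make the coding-and-tallying step concrete, the sketch below illustrates in Python how open-coded categories might be assigned to feedback items and aggregated into the kind of category shares the abstract compares across machine, user, and task factors. It is a deliberately simplified stand-in for the paper's computational grounded theory procedure: the codebook terms, category labels, and sample feedback are invented for illustration and are not the study's actual coding scheme or data.

```python
# Minimal sketch of coding user feedback and tallying category shares.
# The codebook and sample items below are hypothetical placeholders.
from collections import Counter

CODEBOOK = {
    "machine:usability":    ["accurate", "useful", "helpful"],
    "machine:safety":       ["privacy", "leak", "unsafe"],
    "user:expectation":     ["expected", "disappointed", "hoped"],
    "user:need_fit":        ["my needs", "use case", "fits"],
    "task:critical_error":  ["wrong answer", "failed", "critical"],
    "task:complexity":      ["complex", "complicated", "multi-step"],
}

def code_item(text: str) -> list[str]:
    """Open-coding stand-in: tag an item with every matching category."""
    lowered = text.lower()
    return [cat for cat, terms in CODEBOOK.items()
            if any(term in lowered for term in terms)]

def category_shares(feedback: list[str]) -> dict[str, float]:
    """Tally coded categories and normalize to shares, mirroring the
    abstract's comparison of machine, user, and task factor weights."""
    counts = Counter(cat for item in feedback for cat in code_item(item))
    total = sum(counts.values()) or 1  # avoid division by zero
    return {cat: n / total for cat, n in counts.most_common()}

if __name__ == "__main__":
    sample = [
        "The answer was accurate and helpful for drafting.",
        "It gave a wrong answer on a critical calculation.",
        "Worried about privacy and data leak risks.",
        "The reply did not fit my needs at all.",
    ]
    for cat, share in category_shares(sample).items():
        print(f"{cat:22s} {share:.2f}")
```

In the study itself, the categories would emerge from iterative human and machine reading of the corpus rather than from a fixed keyword list; the sketch only shows the shape of the assign-then-aggregate step.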

References:

[1] Baier, A. (1986). Trust and antitrust. Ethics, 96(2): 231-260.
[2] Taddeo, M. (2011). Defining trust and e-trust. International Journal of Technology and Human Interaction, 5(2): 23-35.
[3] Gambetta, D. (2000). Can we trust trust? In Gambetta, D. (ed.), Trust: Making and Breaking Cooperative Relations. Oxford: University of Oxford, 213-237.
[4] Bedué, P. & Fritzsche, A. (2022). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management, 35(2): 530-549.
[5] Ryan, M. (2020). In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics, 26(5): 2749-2767.
[6] Hurlburt, G. (2017). How much to trust artificial intelligence? IT Professional, 19(4): 7-11.
[7] Glikson, E. & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2): 627-660.
[8] Siau, K. & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2): 47-53.
[9] Gao, L. & Waechter, K. A. (2017). Examining the role of initial trust in user adoption of mobile payment services: An empirical investigation. Information Systems Frontiers, 19: 525-548.
[10] Hoehle, H., Huff, S. & Goode, S. (2012). The role of continuous trust in information systems continuance. Journal of Computer Information Systems, 52(4): 1-9.
[11] Okamura, K. (2020). Adaptive trust calibration in human-AI cooperation. Ph.D. dissertation. Kanagawa: The Graduate University for Advanced Studies.
[12] Zhu, Y. (2021). A preliminary study of the factors influencing human-machine trust from a behavioral science perspective. National Defense Technology, (4): 4-9. (in Chinese)
[13] Cabiddu, F., Moi, L., Patriotta, G. & Allen, D. G. (2022). Why do users trust algorithms? A review and conceptualization of initial trust and trust over time. European Management Journal, 40(5): 685-706.
[14] Luhmann, N. (1982). Trust and power. Studies in Soviet Thought, 23(3): 266-270.
[15] Luhmann, N. (1979). Trust and Power. Chichester: John Wiley.
[16] Durante, M. (2010). What is the model of trust for multi-agent systems? Whether or not e-trust applies to autonomous agents. Knowledge, Technology & Policy, 23: 347-366.
[17] Hoff, K. A. & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3): 407-434.
[18] von Eschenbach, W. J. (2021). Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology, 34(4): 1607-1622.
[19] Gursoy, D., Chi, O. H., Lu, L., et al. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49: 157-169.
[20] Troshani, I., Rao, H. S., Sherman, C., et al. (2021). Do we trust in AI? Role of anthropomorphism and intelligence. Journal of Computer Information Systems, 61(5): 481-491.
[21] Glikson, E. & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2): 627-660.
[22] Bogert, E., Schecter, A. & Watson, R. T. (2021). Humans rely more on algorithms than social influence as a task becomes more difficult. Scientific Reports, 11(1): 1-9.
[23] Taddeo, M. (2009). Defining trust and e-trust: From old theories to new problems. International Journal of Technology and Human Interaction (IJTHI), 5(2): 23-35.
[24] Nissenbaum, H. (2001). Securing trust online: Wisdom or oxymoron? Boston University Law Review, 81(3): 635-664.
[25] He, J. & Zhang, P. (2020). Research on the path from "algorithmic trust" to "human-machine trust". Studies in Dialectics of Nature, (11): 81-85. (in Chinese)
[26] Siau, K. & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2): 47-53.
[27] Dorton, S. L. & Harper, S. B. (2022). A naturalistic investigation of trust, AI, and intelligence work. Journal of Cognitive Engineering and Decision Making, 16(4): 222-236.
[28] Sheridan, T. B. (2019). Individual differences in attributes of trust in automation: Measurement and application to system design. Frontiers in Psychology, 10: 1117.
[29] Kim, J., Giroux, M. & Lee, J. C. (2021). When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychology & Marketing, 38(7): 1140-1155.
[30] Yang, Z. (2022). Research on trust in the human-computer interaction relationship. Dissertation. Dalian University of Technology. (in Chinese)
[31] Dong, W. & Fang, W. (2021). A review and prospect of research on trust in automation. Acta Automatica Sinica, (6): 1183-1200. (in Chinese)
[32] Hoffman, R. R. (2017). A taxonomy of emergent trusting in the human-machine relationship. In Smith, P. J. & Hoffman, R. R. (eds.), Cognitive Systems Engineering: The Future for a Changing World. Leiden: CRC Press, 137-164.
[33] Okamura, K. & Yamada, S. (2020). Adaptive trust calibration for human-AI collaboration. PLoS One, 15(2): e0229132.
[34] Jacovi, A., Marasović, A., Miller, T., et al. (2021). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. New York: Association for Computing Machinery, 624-635.
[35] Qi, J. & Zhang, Y. (2021). Research on human-robot trust repair and trust recalibration. Robot Industry, (4): 26-38. (in Chinese)
[36] He, G., Chen, C., He, Z., et al. (2022). Human-machine collaborative decision-making in intelligent organizations: An exploration based on human-machine internal compatibility. Advances in Psychological Science, (12): 2619-2627. (in Chinese)
[37] Yu, X. (2022). Building human-machine interaction trust based on machine agency. Studies in Dialectics of Nature, (10): 43-49. (in Chinese)
[38] Ezer, N., Bruni, S., Cai, Y., et al. (2019). Trust engineering for human-AI teams. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Los Angeles: Sage Publications, 322-326.
[39] Berente, N. & Seidel, S. Big data & inductive theory development: Towards computational grounded theory? Retrieved March 5, 2024, from https://core.ac.uk/reader/301361940.
[40] Glaser, B. & Strauss, A. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine.
[41] Çalikli, G. & Bener, A. (2013). Influence of confirmation biases of developers on software quality: An empirical study. Software Quality Journal, 21: 377-416.
[42] Hu, W.-L., Akash, K., Reid, T. & Jain, N. (2019). Computational modeling of the dynamics of human trust during human-machine interactions. IEEE Transactions on Human-Machine Systems, 49: 485-497.
[43] Robinette, P., Howard, A. M. & Wagner, A. R. (2015). Timing is key for robot trust repair. In Proceedings of the 7th International Conference on Social Robotics. Paris: Springer International Publishing, 574-583.

Memo:
Funding: National Natural Science Foundation of China Youth Project, "Research on Risk Identification and Governance Strategies for AI-Generated Content" (72304290).
About the author: XIANG An-ling, female, lecturer, Ph.D.
Last Update: 2024-04-15