I don't trust you (anymore)! -- The effect of students' LLM use on Lecturer-Student-Trust in Higher Education

Simon Kloker, Matthew Bazanya, Twaha Kateete · June 21, 2024

Summary

This study investigates the impact of large language models (LLMs) such as ChatGPT on lecturer-student trust in higher education, focusing on Ndejje University. Lecturers prioritize transparency in LLM usage, which fosters trust in team collaboration. The research highlights the need for guidelines that promote transparency in order to maintain academic integrity and ensure effective collaborative learning. While LLMs can enhance productivity, concerns arise about academic honesty, quality, and potential misuse. The study explores the interplay between technology, ethics, and trust, suggesting that trust can be maintained when students use LLMs only as a starting point and critically evaluate the output. The research employs an online questionnaire of 32 lecturers to analyze the relationship between LLM usage, trust, and team performance, with a focus on informational and procedural justice. It finds that transparency in LLM use positively influences trust and expected team performance, but also identifies limitations and calls for future research to refine trust measures and address cultural context.


Paper digest

What problem does the paper attempt to solve? Is this a new problem?

The paper addresses the impact of students' use of Large Language Models (LLMs) on trust dynamics within lecturer-student collaboration in higher education. Specifically, it examines the relationship between the escalating adoption of LLMs, such as ChatGPT, and these trust dynamics, which introduces both opportunities and challenges for the educational landscape. The study considers evolving technological advancements, ethical considerations, and the traditional foundations of trust in education and research in order to shape guidelines and strategies for the positive integration of LLMs into learning and research.

The problem is not entirely new: LLM use among students in higher education has been rising continually, raising concerns about quality, academic integrity, and the trust relationship between lecturers and students. However, the specific question of how students' LLM use affects lecturer-student trust is a relatively recent and evolving issue that requires in-depth examination.


What scientific hypothesis does this paper seek to validate?

This paper seeks to validate the hypothesis that perceived Large Language Model (LLM) usage by students is accepted by lecturers in higher education. The study investigates the relationship between Perceived LLM Usage and constructs such as Procedural Justice, Informational Justice, and Perceived Trustworthiness, highlighting the importance of transparency in students' use of LLMs for maintaining trust in lecturer-student interactions.


What new ideas, methods, or models does the paper propose? What are the characteristics and advantages compared to previous methods?

The paper "I don't trust you (anymore)! -- The effect of students' LLM use on Lecturer-Student-Trust in Higher Education" proposes several new ideas, methods, and models related to the impact of students' use of Large Language Models (LLMs) on trust dynamics within lecturer-student collaboration in higher education. Here are some key points from the paper:

  1. Framework for Redesigning Writing Assignment Assessment: Chiu (2023) suggests developing a framework to redesign the assessment of writing assignments for the era of LLMs. This framework aims to adapt assessment practices to accommodate students' use of LLMs, ensuring fairness and transparency in the evaluation process.

  2. Effects of Generative Chatbots in Higher Education: Ilieva et al. (2023) explore the effects of generative chatbots, such as ChatGPT, in higher education. This work examines how these AI models affect educational settings and student-teacher interactions, shedding light on the implications of integrating such technologies into the learning environment.

  3. Shaping the Future of Education with AI and ChatGPT: Grassini (2023) discusses the potential and consequences of AI and ChatGPT in educational settings, emphasizing the need to understand the implications of AI integration for effective teaching and learning practices.

  4. Trust, Collaboration, and Team Performance: The paper highlights the interconnectedness of trust, collaboration, and team performance in educational contexts, underscoring the need for trustworthy relationships to enhance team performance and academic success.

  5. Ethical Use of LLMs in Education: The study addresses the ethical considerations of LLMs like ChatGPT in the post-pandemic era, discussing their academic integrity implications and emphasizing the need for guidelines that regulate their use transparently.

Overall, the paper provides insights into the challenges and opportunities associated with students' utilization of LLMs, offering recommendations for promoting transparency, trust, and the ethical use of AI-powered tools in higher education.

Compared to previous methods, the paper also introduces several distinguishing characteristics and advantages in the context of students' utilization of LLMs in higher education:

  1. Acceptance of LLM Usage: The study reveals that lecturers accept students' use of LLMs, such as ChatGPT, in the educational setting. This acceptance is evident in 87% of respondents perceiving LLMs as fair tools, marking a shift towards acknowledging the potential benefits of LLM utilization by students.

  2. Transparency and Trust Dynamics: A crucial characteristic is the emphasis on transparency in students' LLM use to maintain trust between lecturers and students. The study suggests that transparency of student LLM utilization significantly and positively influences Team Trust, highlighting the importance of clear communication and openness in the educational environment.

  3. Expected Team Performance: The research indicates a positive association between LLM usage and expected Team Performance, reflecting lecturers' optimism about enhanced overall results when students employ LLMs. This underscores the potential benefits of integrating LLMs into educational practice and collaborative tasks.

  4. Challenges in Trust Measurement: The study identifies limitations in the Team Trust measure used, indicating a need to refine and adapt trust constructs for more accurate assessment. This highlights the complexity of measuring trust within collaborative learning environments and the necessity for future studies to improve trust evaluation methodologies.

  5. Contribution to Educational Policies: The paper offers insights for shaping policies that enable ethical and transparent use of LLMs in education. By addressing the impact of LLMs on trust dynamics, it lays the groundwork for guidelines that support LLM utilization while fostering trust and performance in lecturer-student collaboration.

Overall, the paper's characteristics include a focus on transparency, acceptance of LLM usage, positive expectations for team performance, challenges in trust measurement, and contributions to educational policies regarding LLM integration in higher education.


Does any related research exist? Who are the noteworthy researchers on this topic? What is the key to the solution mentioned in the paper?

Several related studies exist on trust in lecturer-student collaboration and the impact of students' use of Large Language Models (LLMs) in higher education. Noteworthy researchers in this field include Simon Kloker, Matthew Bazanya, Twaha Kateete, Abbas et al., Laal & Laal, Mendo-Lázaro et al., and Yang.

The key to the solution mentioned in the paper is ensuring transparency in students' use of LLMs, which positively influences Team Trust in lecturer-student collaboration. This transparency is crucial for lecturers to trust student-generated content and maintain a strong collaborative relationship, ultimately fostering effective team performance in higher education settings.


How were the experiments in the paper designed?

The study was designed around an online questionnaire consisting of four sections:

  1. Introduction: briefly explained the research's importance and requested thoughtful commitment from participants.
  2. Main constructs inquiry: divided into three blocks of 10 questions each, measured on a 5-point Likert scale. Two manipulation checks were embedded within these blocks.
  3. Demographics and additional control variables: collected information on demographics and other relevant variables.
  4. Conclusion: concluded the survey with a short debriefing.

The survey was implemented using questionpro.com and distributed to approximately 200 university lecturers at Ndejje University in Uganda via WhatsApp between January 10th and January 14th, 2024. Participation was voluntary, and no incentives were offered to the participants.
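The manipulation checks mentioned above are attention items embedded in the Likert blocks; responses that fail them are discarded before analysis. A minimal sketch of that filtering step (not the authors' code; the item names and instructed check values are hypothetical):

```python
# Illustrative sketch of filtering questionnaire responses that fail
# embedded manipulation checks. Item names and values are hypothetical.

def passes_checks(response, checks):
    """A response is valid only if every manipulation-check item
    was answered with the instructed value."""
    return all(response.get(item) == expected for item, expected in checks.items())

# Two hypothetical check items: respondents were instructed to pick
# a specific point on the 5-point Likert scale.
CHECKS = {"mc_1": 4, "mc_2": 2}

responses = [
    {"mc_1": 4, "mc_2": 2, "trust_1": 5},  # attentive respondent
    {"mc_1": 3, "mc_2": 2, "trust_1": 4},  # failed the first check
]

valid = [r for r in responses if passes_checks(r, CHECKS)]
print(len(valid))  # → 1
```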


What is the dataset used for quantitative evaluation? Is the code open source?

The dataset used for quantitative evaluation is the online questionnaire distributed to approximately 200 university lecturers at Ndejje University in Uganda. The study aimed for 200 contacts with an expected response rate of at least 30%, ideally 50%, but this assumption proved incorrect, yielding a total of 32 responses. Whether any analysis code is open source is not mentioned in the provided context.
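The response rates implied by these numbers can be checked with a quick back-of-the-envelope calculation (the figure of 23 valid responses after manipulation checks is reported elsewhere in the paper):

```python
# Response rates implied by the reported numbers.
contacts = 200   # lecturers reached via WhatsApp
responses = 32   # completed questionnaires
valid = 23       # remaining after manipulation checks

response_rate = responses / contacts
valid_rate = valid / contacts

print(f"{response_rate:.0%}")  # → 16%
print(f"{valid_rate:.1%}")     # → 11.5%
```

Both figures fall well below the anticipated 30% floor, which is the basis for the sample-size limitation discussed below.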


Do the experiments and results in the paper provide good support for the scientific hypotheses that need to be verified? Please analyze.

The experiments and results in the paper provide valuable insights into the scientific hypotheses that need to be verified, but there are some limitations that should be considered in the analysis:

  1. Sample Size and Response Rate: The study aimed for 200 contacts but received only 32 responses, with 23 remaining after applying the manipulation checks. This low response rate limits the generalizability of the findings and the ability to detect significant effects.

  2. Limitations in Testing for Effect Significance: The study did not reach the sample size needed to test for effect significance as initially anticipated. This limitation affects the robustness of the statistical analysis and the validity of the conclusions drawn from the results.

  3. Contextual Limitations: The study focused solely on Uganda, while the literature's hypotheses were based on other geographical areas. Since trust and attitudes toward technology are shaped by culture and individual settings, the findings need to be contextualized within different settings for a more comprehensive analysis.

In conclusion, while the experiments and results in the paper provide a foundation for understanding the relationship between students' LLM use and lecturer-student trust dynamics, the limitations in sample size, response rate, and contextual scope should be taken into account when evaluating the support for the scientific hypotheses that need verification. Further research with larger sample sizes and broader geographical contexts may be necessary to strengthen the validity and generalizability of the findings.


What are the contributions of this paper?

The paper titled "I don't trust you (anymore)! -- The effect of students' LLM use on Lecturer-Student-Trust in Higher Education" makes several significant contributions to the field:

  • It explores how students' use of Large Language Models (LLMs) impacts Informational and Procedural Justice, influencing Team Trust and Expected Team Performance in higher education.
  • The study uses a quantitative construct-based survey and Partial Least Squares Structural Equation Modelling (PLS-SEM) to examine the relationships among these constructs.
  • The findings, based on 23 valid respondents from Ndejje University, indicate that lecturers are more concerned about the transparency of student LLM utilization than about the fairness of its use, and that this transparency positively influences Team Trust.
  • The paper contributes to the global discourse on integrating and regulating LLMs in education, proposing guidelines that support LLM use while emphasizing transparency in lecturer-student collaboration to enhance Team Trust and Performance.
  • Overall, the study provides valuable insights for shaping policies that promote ethical and transparent use of LLMs in educational settings to ensure the effectiveness of collaborative learning environments.
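PLS-SEM estimates construct scores and path coefficients iteratively. As a rough intuition for what a path coefficient captures, the sketch below averages hypothetical Likert items into composite scores and fits a single ordinary-least-squares path; this is a deliberate simplification for illustration, not the PLS-SEM algorithm itself, and the construct data are invented:

```python
# Simplified stand-in for a structural-model path (not the authors'
# analysis): items are averaged into composites, then one path is
# estimated by ordinary least squares.
import statistics

def composite(items):
    """Average a construct's Likert items into one composite score."""
    return statistics.mean(items)

def ols_slope(x, y):
    """Slope of y on x -- a crude proxy for a PLS path coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical respondents: Informational Justice items -> Team Trust items
info_justice = [composite(r) for r in [[4, 5], [2, 2], [5, 4], [3, 3]]]
team_trust   = [composite(r) for r in [[5, 4], [2, 3], [4, 5], [3, 3]]]

print(round(ols_slope(info_justice, team_trust), 2))  # → 0.83
```

Real PLS-SEM additionally weights the items when forming composites and evaluates measurement reliability, which is exactly where the paper reports limitations in its Team Trust measure.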

What work can be continued in depth?

To delve deeper into the impact of students' use of Large Language Models (LLMs) on Lecturer-Student-Trust in Higher Education, further research can focus on the following areas for continued exploration:

  1. Transparency and LLM Usage: Investigate the significance of transparency in students' utilization of LLMs for perceived Informational Justice and its influence on lecturer-student trust. The study suggests that maintaining transparency in LLM use positively affects Team Trust.

  2. Expected Team Performance: Explore the relationship between LLM usage and the expected performance of teams in collaborative tasks such as seminar papers or theses. While lecturers expect better overall results when students use LLMs, further evaluation is needed to verify the actual impact on Team Performance.

  3. Trust Measures and Team Dynamics: Evaluate the adequacy of current trust measures, such as Team Trust, in capturing the complexities of lecturer-student collaboration involving LLMs, and refine trust constructs to better reflect the evolving dynamics introduced by AI technologies.

  4. Ethical Use of LLMs: Examine the ethical implications of LLMs in educational settings, focusing on academic integrity, quality concerns, and the potential misuse of generated content. Addressing these dilemmas is crucial to the integrity of educational practice.

By delving deeper into these aspects, future research can provide valuable insights into the evolving landscape of Lecturer-Student-Trust in the context of increasing LLM usage in higher education, paving the way for more effective guidelines and practices in collaborative learning environments.

Outline

Introduction
Background
Emergence of LLMs like ChatGPT
Increasing use in academia
Objective
To assess the influence of LLMs on trust
To promote transparency for academic integrity
To explore the role of critical evaluation in trust maintenance
Method
Data Collection
Research Design
Online questionnaire survey
Sample
32 lecturers at Ndejje University
Instrument
Custom questionnaire on LLM usage, trust, and team performance
Data Preprocessing
Data cleaning
Data validation
Questionnaire reliability and validity
LLM Usage and Trust Dynamics
Informational Justice
Transparency in LLM sharing
Impact on perceived fairness
Procedural Justice
Guidelines for LLM use in teaching
Trust implications of clear procedures
Effects on Team Performance
Enhanced productivity through LLMs
Trust as a mediator in performance outcomes
Limitations and Challenges
Academic honesty concerns
Quality of work and potential misuse
Cultural context and its influence
Recommendations for Future Research
Refining trust measures
Addressing cultural variations
Developing guidelines for responsible LLM integration
Conclusion
The importance of transparency in maintaining trust
Balancing benefits and challenges of LLMs in education
Encouraging critical thinking in LLM adoption