Assessing the Effectiveness of Using Live Interactions and Feedback to Increase Engagement in Online Learning

May 11, 2021

Beth Porter, Riff Analytics

John Doucette, Riff Analytics

Andrew Reilly, New College of Florida

Dan Calacci, MIT Media Lab

Burcin Bozkaya, New College of Florida

Alex Pentland, MIT Media Lab

Abstract

Traditional, in-person instruction provides a social environment and immediate feedback mechanisms that typically help participants succeed. In comparison, online media for instruction lack these mechanisms and rely on the motivation and persistence of each individual learner, often resulting in low completion rates. In this study, we examined the effect of introducing enabling tools and live feedback into an online learning experience on learner performance and persistence in the course, as well as on learners’ election to complete supplemental readings and assignments. The findings from our experiments show positive correlations, with strong statistical significance, between live interactions and all performance measures studied. Specifically, we find that students who consistently participated (at least once a week on average) received final grades 80% higher than those who did not and were twice as likely to earn a certificate of completion. Furthermore, the odds of a student receiving higher grades increase by 23% for each additional hour spent using our enabling tools, and the odds of passing the course increase by 35%. We contend that further exploration and implementation of such enabling tools will aid both instructors and learners, as online platforms may continue to be a preferred mode of instruction even after the pandemic subsides.

Introduction

Achieving and maintaining consistent student engagement can be difficult in online learning contexts (Kizilcec and Halawa 2015; Allen and Seaman 2015, 61; Greene et al. 2015). There are many factors that impact learner engagement: course structure, amount of lecture time, user experience of the learning management system, and countless other variables. However, online educators and scholars of online learning have consistently found evidence that peer interaction and learner-instructor interaction are strongly related to learner engagement (Bryson 2016; Garrison and Cleveland-Innes 2005; Greene et al. 2015; Richardson and Swan 2003; Yamada 2009). Several studies have found that measures of social presence, a theoretical construct originating from communications studies, are significantly related to learners’ overall outcomes and perceived experience (Richardson and Swan 2003). Social presence refers to how “salient” an interpersonal interaction is—how readily one feels like they are “present” with another.

To achieve high social presence between students, researchers have discovered that the modality of online communication is extremely important (Yamada 2009; Kuyath 2008; Johnson 2006). The modality of communication refers to the types of channels used: text chat, forums, voice chat, video conference, etc. In communications, human-computer interaction, and online education literature, it has been shown that richer modes of communication between conversing members lead to higher rates of trust, cooperation, engagement, and social presence (Bos et al. 2002). For example, video conference, which includes real-time video and voice, is a richer mode of communication than asynchronous forums and has been shown to lead to higher rates of cooperation than text chat (Bos et al. 2002).

These advances provide strong evidence that the interaction patterns of people carry rich signals that correlate with a variety of outcomes important to personal growth, learning, and team success. By using modern machine learning methods and statistics of conversation patterns—who spoke when, for how long, etc.—even without knowing the content of these conversations, researchers have been able to predict speed dating outcomes, group brainstorming success, and group achievement on a test of “collective intelligence” (Pentland et al. 2006; Woolley et al. 2010; Dong and Pentland 2010; Jayagopi et al. 2010; Kim et al. 2008; Dong et al. 2012a, 2012b). By developing more sophisticated computational models of human interaction, it is possible to infer social roles, such as group leaders or followers (Dong et al. 2013).

Current study

To show how tools that foster participation and interaction contribute to strong learning outcomes and learner performance in online learning systems, we present in this paper an experimental study conducted with an online class. Our learning platform, on which the experiments were conducted, allows students to interact using video chat and text chat. In addition, students get continuous feedback on their participation and engagement in the video chat via a tool we named Meeting Mediator (MM). The MM provides feedback on measures of engagement such as speaking time, turn-taking, influencing or affirming other participants, and interrupting other speakers.

The Meeting Mediator integrated into our online learning platform is a further enhancement of the concepts and ideas introduced by Kim et al. (2008), who implemented a similar user feedback mechanism in a face-to-face setting with data collected via sociometric badges worn by users interacting in small groups. Our platform includes a new version of the MM, which introduces new metrics for key conversational events such as interruptions and affirmations. In this study, we validate the effectiveness of this online MM through a series of experiments.

Part of the innovation at the core of our study is the burgeoning science of quantitatively analyzing human social interaction. This small but exciting subfield of computational social science has shown immense promise in developing machine understanding of human interaction (Woolley et al. 2010). By instrumenting many different types of communication, such as face-to-face physical conversation, video conference, and text chat, researchers have been able to predict group performance on a variety of tasks.

Though this study was conducted prior to the 2020 pandemic, the challenge presented to the team carrying out the experiment was to understand the factors that contribute to a successful transition of programs to online environments. The platform was designed to foster virtual engagement, providing course developers and instructors with new tools for offering online experiences likely to capture and keep learner interest. This is especially critical at a time when the pandemic has forced millions of students and professional learners around the globe to use online platforms in place of traditional in-person instruction.

An important point to emphasize is that we do not claim that technology-enabled online learning should replace in-person learning. We acknowledge that remote instruction may offer certain technology-driven features and advantages, but there are many ways in which in-person instruction provides a superior learning experience. Through the experiments we conduct in this study and our findings, we contend that the remote learning experience can be enhanced, and the gap between in-person and online learning bridged to some extent, using real-time feedback mechanisms such as the Meeting Mediator and interaction metrics. This is clearly good news in times of a pandemic like COVID-19, when instructors as well as learners are challenged in many unique ways.

Materials and Methods

We first describe our user-facing applications, supporting backend tools and the metrics we designed and employed for performance assessment. Then, we present our experiment design.

The Riff Platform and its user-facing applications

Our team developed three foundational user-facing applications and supporting backend tools as part of the Riff Platform. These are:

  • Video Chat (with Meeting Mediator)
  • Post-Meeting Metrics
  • Text Chat (which includes Video Chat and Post-Meeting Metrics)

These applications and tools were built for both commercial and research purposes, cloud-deployed in standardized ways, but customized over time to meet the changing needs of Riff customers and modified research goals. (An open version of Riff Video Chat, called Riff Remote, is available here: https://my.riffremote.com/.)

Video chat

Riff Video Chat (Figure 1) is an in-browser video chat application that uses the computer’s microphone to collect voice data for analysis and is present in all manifestations of the Riff Platform. Riff Video does not require plug-ins or a local application in order to run—participants navigate to a URL (either by entering it directly into the address bar or through an authenticated redirect from another application), enable their camera and microphone, and then enter the video chat room. Other features of the video chat include screen sharing, microphone muting, and the ability to load a document (typically one with shared access) side-by-side with the video on screen.

A key element of Riff Video is the Meeting Mediator (MM) (Figure 2), which gives participants of the video chat real-time feedback about their speaking time. Specifically, it provides three metrics:

Figure 1. The Riff Video browser-based video conferencing client, with the Meeting Mediator shown.
  • Engagement, as indicated by the shade of purple of the node in the middle of the visualization, which shows the total number of turns taken in running five-minute intervals throughout the chat—dark purple means many turns; light purple means fewer turns;
  • Influence, as indicated by the location of the purple node, which moves toward the person who has taken the largest number of turns in running five-minute intervals throughout the chat;
  • Dominance, as indicated by the thickness of the grey “spokes” running from the central purple node out to each of the participant nodes, which indicates the number of turns people have taken in running five-minute intervals.
Figure 2. Hover states of the Riff Meeting Mediator, showing “Name,” “Speaking Time,” and “Turn Count” for each participant.

These metrics are continuously updated throughout the video chat, lagging one or two seconds behind the live conversation. In a meeting, this tool acts as a nudging mechanism, which is essentially an intervention by the system aimed at increasing a participant’s self and situational awareness, influencing a participant’s behavior towards a more rewarding learning experience and outcomes. The MM indirectly encourages users to maintain conversational balance by speaking up (if the user sees they have not spoken recently) or to engage others who have not been as active in the conversation.
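To make the turn-taking mechanics concrete, the following is a minimal sketch, not the production Riff implementation, of how metrics like those displayed in the MM could be computed from a record of speaking turns. The `Turn` record format, the field names, and the treatment of the running five-minute window are illustrative assumptions based on the description above.

```python
from dataclasses import dataclass
from collections import Counter

WINDOW_SECONDS = 5 * 60  # the running five-minute interval described above


@dataclass
class Turn:
    speaker: str
    start: float  # seconds since the start of the meeting
    end: float


def window_metrics(turns, now):
    """Summarize turn-taking over the last five minutes (illustrative only)."""
    recent = [t for t in turns if t.end >= now - WINDOW_SECONDS and t.start <= now]
    counts = Counter(t.speaker for t in recent)
    return {
        # "engagement": total turns in the window (more turns -> darker central node)
        "engagement": sum(counts.values()),
        # "influence": the central node drifts toward whoever has taken the most turns
        "influence_toward": counts.most_common(1)[0][0] if counts else None,
        # "dominance": per-participant turn counts (thickness of each spoke)
        "dominance": dict(counts),
    }


# Example: three participants near the end of a short chat
turns = [Turn("ana", 10, 25), Turn("ben", 26, 40), Turn("ana", 41, 90), Turn("cem", 95, 120)]
print(window_metrics(turns, now=130))
```

In a live meeting, a function like this would be re-evaluated every second or two, which is what produces the slight lag behind the conversation noted above.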

Post-meeting metrics

Immediately after a Riff Video Chat meeting, Metrics (Figure 3) appear on screen to show participants measures associated with the meeting, specifically:

  • Speaking Time—a chart showing the percentages of turns taken by each participant;
  • Pairwise Comparisons—each of the following charts compares the individual participant with each other participant, showing the balance of each pair’s interactions:
    • Influences—a chart showing who spoke after whom in the video chat, which aggregates data across all types of spoken events;
    • Interruptions—a chart showing interruptions (when someone cuts off another person’s speaking turn);
    • Affirmations—a chart showing affirmations (when someone verbalizes without cutting off another person’s speaking turn; a rule-based sketch of this distinction follows Figure 3);
  • Timeline—an interactive view of the video chat meeting over time, on top showing what time each participant spoke and below when each pairwise interaction took place (“your” and “their” interruptions, affirmations, and influences).
Figure 3. Meeting metrics (timeline, speaking time, influences, interruptions, and affirmations).
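As a companion to the definitions in the list above, here is a minimal, rule-based sketch of the interruption/affirmation distinction. The utterance format and the specific rule, that an overlap which outlasts the original turn counts as cutting it off, are simplifying assumptions for illustration and are not the platform’s exact classification rules.

```python
def classify_overlap(current, overlapper):
    """Classify an overlapping utterance as an 'interruption' or an 'affirmation'.

    `current` and `overlapper` are (start, end) times in seconds. Simplified rule:
    if the overlapping speech outlasts the original turn, the turn was cut off
    (interruption); if it ends while the original speaker keeps talking, it was a
    brief verbalization over a continuing turn (affirmation).
    """
    c_start, c_end = current
    o_start, o_end = overlapper
    if not (c_start < o_start < c_end):
        return None  # the second utterance does not overlap the ongoing turn
    return "interruption" if o_end >= c_end else "affirmation"


# A short "mm-hm" that ends before the speaker finishes -> affirmation
print(classify_overlap(current=(0.0, 12.0), overlapper=(4.0, 5.0)))
# An overlap that outlasts the original turn -> interruption
print(classify_overlap(current=(0.0, 12.0), overlapper=(10.0, 20.0)))
```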

Additionally, each user can access their past history of meetings at any time, independent of the video chat itself (Figure 4).

Figure 4. Meeting history, in chronological order.

Text chat

Riff Text Chat (also known as Riff EDU) is an environment that allows people to text chat with one another in open channels, private channels, and direct messages (Figure 5).

Figure 5. Riff EDU, a channel-based text chat environment.

Riff EDU is built on top of the Mattermost platform, which is an open-source project (https://mattermost.com/). The Riff team developed a customized version of this platform which has three key additions (and several smaller modifications):

  • LTI integration, which allows the learner to move seamlessly from their learning environment (Canvas, Open edX, etc.) into the chat environment, using the Learning Tools Interoperability (LTI) protocol for authentication;
  • Video integration, which allows the learner to start an ad hoc video chat session from the Chat environment;
  • Metrics integration, which shows the learner views of their video meeting metrics within the Chat environment, as well as additional text chat data and recommendations.

Learner data privacy

At the outset of the course, learners were asked to consent to have data collected about their activity and performance. No learners declined to have data collected about them. Data collected during this experiment was shared directly with learners through the Meeting Mediator and Post-Meeting Metrics page. Everyone in a video chat got access to the data about their conversation, but video and text chat data was not shared with anyone else during the experience. Data analyzed for the purpose of the experiment was anonymized to preserve the privacy of participants.

Supporting accessibility

The experimental environment was designed to account for participants with accessibility needs. Following the WCAG 2.0 AA accessibility standard from the W3C (World Wide Web Consortium 2021), the software underwent an accessibility audit to identify potential issues, which were then remediated before the platform was released to learners. Accessibility compliance was documented using a publicly available Voluntary Product Accessibility Template (Riff Analytics 2021). Though no students let us know that they had specific accessibility needs, we were prepared for that eventuality.

Experiment design

To show the effectiveness of our video conferencing environment and our nudging tool, the Meeting Mediator (MM), we conducted an experiment with a major client of Riff Learning Inc. in Canada. In this experiment, test subjects were recruited to participate in an online course called “AI Strategy and Application” from a government-subsidized incubator of tech companies headquartered in Toronto, Ontario, Canada. Students elected to take the course as a professional development experience, driven by interest in learning how to start an AI initiative as either an intrapreneurial or entrepreneurial activity. In most cases, the learners paid for the course themselves; in other cases, the course was paid for by their employer or they were given free access. Learners were either students in advanced degree programs or full-time professionals taking the course as a supplemental learning experience.

The eight-week course was delivered on the Open edX platform with the Riff Platform serving as the communications and collaboration platform for instructor and learner interactions, both structured and ad-hoc. Learners had opportunities to view videos and read original materials on how AI is applied to business problems, tackle hands-on coding exercises to learn basic machine learning techniques in a sandbox environment, and take assessments at the end of each learning unit.

The participants were divided into groups throughout the course to facilitate the following two types of activities:

  • Brainstorming activities (in pre-set groups of four to six people)—Each week during the first four weeks of the course, people met on video in their Peer Learning Groups (PLG) to collaborate on the assignment, and then create a shared submission.
  • Capstone project collaboration (in self-selected groups of four to six people, plus a mentor)—Each week during the last four weeks of the course, people met on video in their Capstone Groups to collaborate on developing a venture plan based on the winning pitches submitted by their peers in the course.

Learners were left to schedule video meetings with their group members on their own, and the course was time-released (one week of material at a time) but self-paced. However, the course support staff did participate in the first meeting of each group to ensure that people had no technical issues and could hold the video chats without difficulty.

The main goal of our experiment was to analyze the relationship between using video and text chat, and various outcomes among learners in the course. Furthermore, we aimed to analyze differences, if any, between early users (those who completed at least the first half of the course) versus the rest, to understand the effects of early usage on retention and grades.

Note that in this experiment, the Meeting Mediator (MM) was always present for video chats. Our rationale for that choice was that if, as we hypothesized, video with the MM is positively correlated with performance, persistence, or satisfaction in online courses, then having it present for some students and not for others in a paid, graded learning experience might disadvantage the control population and be deemed inequitable.

In our experiment, we set out to test a number of hypotheses and hence show the (positive) relationship, or lack thereof, between the use of video and various outcomes. Specifically, we explored the following questions:

  1. Did students who used Riff Video Chat (and Meeting Mediator) more often receive higher grades in the course?
  2. Did students who used Riff Video Chat (and Meeting Mediator) more often complete the course at a higher rate?
  3. Is there a strong positive association between participating in additional Riff Video Chats during the first four weeks of the course and earning a certificate?
  4. Is there a strong positive association between participating in additional Riff Video Chats during the first four weeks of the course and receiving higher grades at the end of the course?
  5. Is there a strong positive association between participating in additional Riff Video Chats during the first four weeks of the course and completing more of the optional course assignments?
  6. Is there a strong positive association between participating in additional Riff Video Chats during the first four weeks of the course and pitching a capstone project topic?

The first two research questions are explored for students who completed the entire course (n=62); we removed the chat records and performance results for students who dropped out partway through. The remaining questions are explored for the entire cohort of n=83 students. These students all started the course, but some of them dropped out and, hence, did not receive a final grade for completion or a certificate.

In our analysis of the experiment results, we used the following three constructs for reporting and interpreting our figures: correlations, odds ratios, and significance. For definitions of these metrics and their implications for our study, we refer the reader to Appendix B. With respect to correlation analysis, we further note that while one may observe the degree to which people who do one thing more (e.g., use video chat) also tend to do something else more (e.g., receive higher grades), such a relationship cannot be used to imply causality; correlations merely indicate whether two phenomena occur together (in a positive or negative relationship).

Finally, we conducted an exit survey for all students who completed the course and a separate survey for those who failed to complete it. While the results of this survey are not directly related to the quantitative analysis we report in the next section, they do provide some qualitative feedback from users on the efficacy of the Riff Platform. We have included the list of survey questions and some key results we collected from these surveys in the Appendix.

Results

In this section, we report the results of our experimental study separately for students who participated in the course for its entire duration (questions 1–2, n=62) and for all students who started the course, even if they later dropped out (questions 3–6, n=83).

Results for questions 1–2 for students who completed the course

To answer the first question in our study (“Did students who used Riff Video Chat more often receive higher grades in the course?”), we considered the following output variables:

  • Final grade earned
  • Coding exercise grade
  • Capstone exercise grade
  • Collaboration exercise grade
  • Pitch video completion

For the second question (“Did students who used Riff Video Chat more often complete the course at a higher rate?”), we simply considered whether or not the student earned a certificate of completion in the course.

The following table, Table 1, shows the correlation between each output variable listed above and the input variable of “video calls made,” which is the number of times a student connected to the video chat. The p-values were corrected using Holm’s method to maintain a family-wise error rate (FWER) of 0.05. We find that all p-values are significant at the 95% confidence level or better, and the correlation values indicate reasonably strong relationships with most of the output variables.

Attribute | Correlation to # Riff Calls Made | n | p | Significance
Final Grades | 0.50 | 62 | 1.56e-04 | ***
Coding Exercise Grades | 0.41 | 62 | 2.54e-03 | **
Capstone Exercise Grades | 0.49 | 62 | 1.56e-04 | ***
Collaborative Exercise Grades | 0.27 | 62 | 2.94e-02 | *
Pitch Video Completion | 0.37 | 62 | 5.25e-03 | **
Certificate Earned | 0.50 | 62 | 1.56e-04 | ***
Table 1. Correlation between Riff Video usage and various performance variables.
*p < 0.05, **p < 0.01, ***p < 0.001
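For readers who want to reproduce this style of analysis, the following is a minimal sketch, not the authors’ analysis code, of computing Pearson correlations and Holm-corrected p-values; the variable names and toy data are placeholders.

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

# Placeholder data: per-student call counts and two toy outcome variables
rng = np.random.default_rng(0)
calls = rng.poisson(6, size=62)                         # number of Riff calls made
outcomes = {
    "final_grade": 3 * calls + rng.normal(0, 10, 62),   # correlated by construction
    "coding_grade": 2 * calls + rng.normal(0, 12, 62),
}

results = {name: pearsonr(calls, y) for name, y in outcomes.items()}
pvals = [res[1] for res in results.values()]

# Holm's step-down method controls the family-wise error rate (FWER) at 0.05
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")

for (name, res), p, rej in zip(results.items(), p_adj, reject):
    print(f"{name}: r = {res[0]:.2f}, Holm-adjusted p = {p:.2e}, significant = {rej}")
```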

We further calculated the odds ratio for two of these variables, as shown in Table 2. Here, in addition to the “Certificate Earned” binary variable, the “Grades” variable was created as another binary variable indicating whether the student received a passing grade based on their final grade.

Attribute | Odds Ratio | n | p | Significance
Grades | 1.23 | 62 | 16.92e-03 | **
Certificate Earned | 1.35 | 62 | 3.964e-04 | ***
Table 2. Odds ratios for the effect of attending one additional Riff Video call on final grade and certificate attainment obtained by fitting a logistic regression model.
*p < 0.05, **p < 0.01, ***p < 0.001

The odds ratios greater than 1.0 in Table 2 suggest that spending additional time with video chat increases the likelihood of earning a passing grade and a final certificate of completion (see the Discussion section for more details).
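Odds ratios of this kind come from exponentiating the usage coefficient of a logistic regression fit to a binary outcome. A minimal sketch with placeholder data, not the authors’ model code, follows.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
calls = rng.poisson(6, size=62).astype(float)            # video calls made (placeholder)
# Placeholder binary outcome whose probability rises with usage
p_pass = 1.0 / (1.0 + np.exp(-(0.3 * calls - 2.0)))
passed = (rng.random(62) < p_pass).astype(int)

X = sm.add_constant(calls)                               # intercept + usage
fit = sm.Logit(passed, X).fit(disp=False)

odds_ratio = np.exp(fit.params[1])                       # e^beta: odds multiplier per extra call
print(f"odds ratio per additional call: {odds_ratio:.2f} (p = {fit.pvalues[1]:.3g})")
```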

We further illustrate the relationship between these two variables and our input variable (“video calls made”) in Figures 6 and 7. The figures display positively sloped trends which confirm the positive correlations reported in Table 1. That is, as the learners spend more time with video chat, they are more likely to earn higher grades and a certificate of completion.

Figure 6. Relationship between Riff Video usage and students’ final grades (scatter plot with a positively sloped regression line).
Figure 7. Relationship between Riff Video usage and certificate achievement (scatter plot with a positively sloped regression line).

Results for questions 3–6 for all students

We explored the remaining research questions regarding the early usage (first four weeks) of Riff Video using data we collected from all students in this course. We note again that some students completed the course only partially and dropped out; they did, however, complete the first four weeks of the course. Here we used the start of the Capstone Project as the cutoff date for tracking how often the students were present in channels in which video chats were offered. We correlated this variable with the same performance variables considered in the previous section to explore research questions 3–6.

Tables 3 and 4 show the correlation values between the variables, presented in the same way as in the previous section. All correlations were computed with Pearson’s method, and p-values were again corrected using Holm’s method to maintain a FWER of 0.05. We find that not only are the correlation values in Table 3 generally higher than those in Table 1, but the p-value significance levels are also stronger. Furthermore, the higher odds ratios in Table 4 suggest that early video chat usage is an even stronger predictor of successful performance in the course.

Attribute | Correlation to # Riff Calls Made | n | p | Significance
Final Grades | 0.54 | 83 | 5.40e-07 | ****
Coding Exercise Grades | 0.42 | 83 | 5.22e-05 | ****
Capstone Exercise Grades | 0.50 | 83 | 3.89e-06 | ****
Collaboration Exercise Grades | 0.52 | 83 | 2.07e-06 | ****
Pitch Video Completion | 0.45 | 83 | 3.95e-05 | ****
Certificate Earned | 0.50 | 83 | 3.89e-06 | ****
Table 3. Correlation between early Riff Video usage and various performance variables.
*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001

Attribute | Odds Ratio | n | p | Significance
Grades | 1.79 | 83 | 1.03e-04 | ***
Certificate Earned | 2.00 | 83 | 1.96e-05 | ****
Table 4. Odds ratios for the effect of attending one additional Riff Video call on final grade and certificate attainment obtained by fitting a logistic regression model.
*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001

The relationship between these two variables and our input variable (“Early video usage”) is further illustrated in Figures 8 and 9. Red points in these figures indicate students who dropped the course. Note that in one case, a student indicated they were dropping the course, but then completed all work and earned a certificate. Similar to Figures 6 and 7, we also find that as the early video chat usage increases, the likelihood of earning higher grades and a certificate of completion also increases.

Figure 8. Relationship between early Riff Video usage and students’ final grades (scatter plot with a positively sloped regression line).
Figure 9. Relationship between early Riff Video usage and certificate achievement (scatter plot with a positively sloped regression line).

Summary, Discussion, and Conclusions

Summary

The results of the experiment may be summarized as follows:

  • Each additional Riff Video Chat made during the first four weeks of the course predicts a doubling of the odds that the student will receive a certificate of course completion. (Fig. 9)
  • Each additional Riff Video Chat made during the first four weeks of the course predicts an increase in the odds of receiving a high grade by 79%. (Fig. 8)
  • Most benefits are accrued after participating in the first four to five video chats. Students who participated in more than four calls (an average of one per week) received final grades 80% higher than those who did not and were twice as likely to earn a certificate. (Tables 3–4 and Fig. 8–9)
  • We find significantly stronger evidence linking the measured metrics to user performance, as reported in Tables 1–4 and Figures 6–9, than that reported in the literature by Kim et al. (2008).

Discussion and Conclusions

In our analysis of experiment results, for the first group that completed the entire course, we found strong correlations between the time (in minutes) spent using video to communicate with peers and every outcome of interest, as shown in Table 1. Particularly strong correlations were found with students’ final grades and attainment of course certificates, which has significant commercial implications. Historically, open online courses have had very low completion rates, around 5% (Lederman 2019); hence, the use of novel tools within online courses to increase engagement, raise rates of persistence, and improve outcomes is of particular value. Because this is an observational design, these results do not establish a causal relationship between usage of the Riff Platform and these outcomes, but they suggest that such a relationship may exist. We found that the relationship between video usage and both of our response variables (grades and course completion) was well fit by a logistic curve, as shown in Table 2 and Figure 7. The upward pattern observed in Figures 6–7 indicates that participants are more likely to achieve higher grades and earn a certificate of completion if they use Riff Video chat more frequently.

The odds ratios extracted from the coefficients of the fitted logistic regression model (reported in Table 2) are statistically significant. The reported values correspond to an increase in the odds of a student receiving higher grades of 23% for each additional hour spent using Riff Video chat, and an increase in the odds of passing the course of 35%. The analysis suggests that potential improvements in outcomes are realized with a total exposure of no more than approximately 15 hours over the course.

For the second group of students, those who remained enrolled during at least the first quarter of the course, the results summarized in Tables 3 and 4 demonstrate that early usage of the Riff Platform has an even stronger relationship to the outcomes of interest than usage over the course as a whole. Particularly notable is the effect size for each additional hour of exposure to Riff Video during the first half of the course, which is associated with a doubling of the odds that the student completes the course and earns a certificate. Although this is an observational experiment, it again suggests that the Riff Platform may be having a significant impact on students’ course completion rates and participation. As in Figures 6 and 7, we also observe an upward pattern in Figures 8 and 9, indicating that the students in this second group are also more likely to achieve higher grades in the course and earn a certificate of completion if they use video more frequently. We further note that the high positive correlation values relating the measured metrics to user performance in the course are significantly better than those reported in the literature by Kim et al. (2008).

These results suggest that the platform and its nudging mechanisms ultimately benefit student learning as assessed via several exercises completed by the students. The high correlations described above suggest that there is a statistically significant difference in student performance when nudging features such as the MM are used versus not. We also contend that the Riff Platform helps students with self and situational awareness, as it presents the student with live as well as post-session information about their individual and group performance. Through this real-time feedback loop, students can balance active contribution against listening to others and adjust their level of engagement when the course requires engagement to meet its learning goals.

These experiments and the corresponding results, though they signal strong correlations between usage of the Riff Platform and all measured output variables, are not without limitations. The results are based only on this set of experiments with this cohort of students in a single course. One could repeat these experiments in a variety of settings (using different cohorts, different countries, and different learning environments and tasks) to see if the results generalize. Secondly, we did not have any control variables (such as age, gender, or job experience, or other variables like prior online learning experience or performance in similar online or offline courses) for the subjects from whom we collected data. It is possible that students who are successful using the Riff Platform would have been successful anyway because they have prior experience using online learning platforms or are already high-performing students regardless of the learning environment or tools. Some students may also devote extra time outside of class to review and digest the material, explore additional resources, and/or complete additional exercises, while others may not. These effects should be included in a more detailed future experimental study. This line of research would benefit greatly from an A/B testing approach (e.g., using randomized controlled trials) in which one would control for such factors. Such a design would require a learning environment in which participants in a legitimate online course, who expect to be provided with equal opportunities for learning, are not disadvantaged by the parameters of the experiment.

Another limitation is the fact that the platform only keeps track of audio signals, not the actual content of what is spoken, and it classifies them into three categories: influences, interruptions, or affirmations. It is possible that these classifications may not always be accurate, despite the rules we apply, due to backchannel noise. Future work could refine the current techniques for identifying the three conversation events with additional hand-labeling, which could then be used to confirm the validity of the rules currently used in the platform.

To capture the content being exchanged by participants, there is a further opportunity to process spoken words using natural language processing libraries and collect information on the scope and “quality” of the content (e.g., new topics or ideas introduced; sentiment of the words used). The platform could then report not only the frequency or duration of one’s verbal participation, but also its quality.

One may also be concerned about the cost of using such a platform or integrating it with an existing learning management system (LMS). We note that because the platform comes as a complete solution, there is no additional cost to integrate it with an existing LMS. Users simply log in and use the features of the platform as needed. Some non-financial “costs” may exist, but they are limited to the time and effort needed to learn how to use the system and understand its features, such as the MM and Riff Metrics. We believe these costs are quite reasonable, as the user interface has been designed with usability in mind.

Finally, though this experiment was within a course for adults interested in expanding their understanding of AI, the participants came from a variety of backgrounds including undergraduate and graduate students, early and mid-career professionals, and government employees. We see potential for this technology to address engagement in virtual settings for any type of collaborative work and any age group. (In all cases, it is important for people meeting in small groups to have an appropriate motivation for collaborating, such as brainstorming or problem solving.)

Even with the limitations noted, our results are a solid first step towards showing that online learning platforms with the right user-facing components providing relevant and prompt feedback to participants are indeed likely to be effective in enhancing the learning experience. This is important especially in the wake of growing demand for online learning platforms due to the COVID-19 pandemic. We believe tools like the Riff Platform and the Meeting Mediator will play an increasingly important role in educational and professional settings where online platforms foster more and more interactions.

Bibliography

Allen, Isabel Elaine, and Jeff Seaman. 2015. Tracking Online Education in the United States. Babson Survey Research Group. https://onlinelearningconsortium.org/read/online-report-card-tracking-online-education-united-states-2015/

Bos, Nathan, Judy Olson, Darren Gergle, Gary Olson, and Zach Wright. 2002. “Effects of Four Computer-Mediated Communications Channels on Trust Development.” Proceedings of CHI ‘02, SIGCHI Conference on Human Factors in Computing Systems: 135–40.

Bryson, Colin. 2016. Review of Engagement through Partnership: Students as Partners in Learning and Teaching in Higher Education, by Mick Healey, Abbi Flint and Kathy Harrington. International Journal for Academic Development 21, no. 1: 84–86.

Dong, Wen, and Alex “Sandy” Pentland. 2010. “Quantifying Group Problem Solving with Stochastic Analysis.” International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI ’10), Article 40: 1–4.

Dong, Wen, Bruno Lepri, and Alex “Sandy” Pentland. 2012a. “Automatic Prediction of Small Group Performance in Information Sharing Tasks.” Proceedings of Collective Intelligence Conference, arXiv: 1204.2991.

Dong, Wen, Bruno Lepri, Taemie Kim, Fabio Pianesi, and Alex “Sandy” Pentland. 2012b. “Modeling Conversational Dynamics and Performance in a Social Dilemma Task.” Proceedings of the 5th International Symposium on Communications Control and Signal Processing, 1–4.

Dong, Wen, Bruno Lepri, Fabio Pianesi, and Alex “Sandy” Pentland. 2013. “Modeling Functional Roles Dynamics in Small Group Interactions.” IEEE Transactions on Multimedia 15, no. 1: 83–95.

Garrison, Donn Randy, and Martha Cleveland-Innes. 2005. “Facilitating Cognitive Presence in Online Learning: Interaction Is Not Enough.” American Journal of Distance Education 19, no. 3: 133–148.

Greene, Jeffrey A., Christopher A. Oswald, and Jeffrey Pomerantz. 2015. “Predictors of Retention and Achievement in a Massive Open Online Course.” American Educational Research Journal 52, no. 5: 925–955.

Jayagopi, Dinesh Babu, Taemie Kim, Alex “Sandy” Pentland, and Daniel Gatica-Perez. 2010. “Recognizing Conversational Context in Group Interaction using Privacy-Sensitive Mobile Sensors.” Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia (MUM ’10), Article 8: 1–4.

Johnson, Genevieve Marie. 2006. “Synchronous and Asynchronous Text-Based CMC in Educational Contexts: A Review of Recent Research.” TechTrends 50, no. 4: 46–53.

Kim, Taemie, Agnes Chang, Lindsey Holland, and Alex “Sandy” Pentland. 2008. “Meeting Mediator: Enhancing Group Collaboration using Sociometric Feedback.” Proceedings of the 2008 ACM conference on Computer Supported Cooperative Work: 457–466.

Kizilcec, René F., and Sherif Halawa. 2015. “Attrition and Achievement Gaps in Online Learning.” Proceedings of the Second (2015) ACM Conference on Learning @ Scale – L@S ’15: 57–66.

Kuyath, Stephen James. 2008. “The Social Presence of Instant Messaging: Effects on Student Satisfaction, Perceived Learning, and Performance in Distance Education.” PhD diss., The University of North Carolina at Charlotte.

Lederman, Doug. 2019. “Why MOOCs Didn’t Work, in 3 Data Points.” Inside HigherEd, Jan. 16, 2019. Accessed March 15, 2021. https://www.insidehighered.com/digital-learning/article/2019/01/16/study-offers-data-show-moocs-didnt-achieve-their-goals.

Pentland, Alex “Sandy,” Anmol Madan, and Jon Gips. 2006. “Perception of Social Interest: A Computational Model of Social Signaling.” Proceedings of the 18th International Conference on Pattern Recognition: 1080–1083.

Richardson, Jennifer C., and Karen Swan. 2003. “Examining Social Presence in Online Courses in Relation to Students’ Perceived Learning and Satisfaction.” Journal of Asynchronous Learning Network 7, no. 1: 68–88.

Riff Analytics. 2021. “Riff Learning Accessibility Conformance Report.” Accessed March 15, 2021. https://riffanalytics.ai/riff-edu-vpats-3/.

Sun, Huaiyi “Cici,” Dan Calacci, and Alex “Sandy” Pentland. 2018. “Patterns of Interaction Predict Social Role in Online Learning.” 2018 International Conference on Social Computing, Behavioral-Cultural Modeling & Prediction and Behavior Representation in Modeling and Simulation. Accessed April 21, 2021. http://sbp-brims.org/2018/proceedings/papers/latebreaking_papers/LB_11.pdf.

Woolley, Anita Williams, Christopher F. Chabris, Alex “Sandy” Pentland, Nada Hashmi, and Thomas W. Malone. 2010. “Evidence for a Collective Intelligence Factor in the Performance of Human Groups.” Science 330, no. 6004: 686–688.

World Wide Web Consortium. 2021. “Web Content Accessibility Guidelines (WCAG) 2 Level AA Conformance.” Accessed March 15, 2021. https://www.w3.org/WAI/WCAG2AA-Conformance.

Yamada, Masanori. 2009. “The Role of Social Presence in Learner-Centered Communicative Language Learning using Synchronous Computer-Mediated Communication: Experimental Study.” Computers & Education 52, no. 4: 820–833.

Appendix A—Exit Survey Questions and Key Results

A1. Exit Survey Questions

  1. How well did you understand the lessons at the start of the course?
  2. How well did you understand the subject matter at the midpoint of the course?
  3. How well did you understand the subject matter at the end of the course?
  4. How satisfied were you with what you learned throughout the course?
  5. How well did the instructional materials convey course expectations?
  6. How well did course assignments test your understanding of course materials?
  7. How effectively did the activities and assignments used in this course aid your learning?
  8. Would you recommend this course to other students?
  9. How well did course staff support your learning throughout the course?
  10. How well were your technical questions addressed?
  11. How well were your questions about the coding assignments addressed?
  12. How well were your questions about the business assignments addressed?
  13. How effectively did course staff direct you toward resources?
  14. How well did the course staff support you overall?
  15. How useful did you find the real-time feedback during video calls?
  16. How clear was the real-time feedback during video calls?
  17. Thinking about the video calls for your Peer Learning Group and your Capstone team, to what degree do you agree with the following statements?
    • The feedback caused me to participate more.
    • The feedback caused me to give other team members more of an opportunity to speak.
    • The feedback caused me to participate less.
    • The feedback caused the other team members to talk more.
    • The feedback caused the other team members to give other team members more opportunities to speak.
    • The feedback caused the other team members to talk less.
  18. On average, approximately what percentage of time were you speaking during meetings?
  19. How would you describe how the conversations played out?
  20. After using the video tool and seeing the real-time feedback, to what degree do you agree with the following statements?
    • I found this to be a useful tool.
    • I will try to talk more in similar activities in the future.
    • I will try to make sure other people have an equal opportunity to talk.
    • I will try to talk less in similar activities in the future.
    • I would like to use this tool in the future (for learning or work).
  21. Do you have any suggestions for how we can improve the real-time feedback?
  22. How useful was the Riff Stats feedback?
  23. How easy was it to interpret the Riff Stats feedback?
  24. How did Riff Stats feedback change your behavior?
  25. How useful was the Riff messaging feature (the channels where you connected with other team members)?
  26. Considering all the aspects of the Riff Platform (video calling, messaging, real-time feedback, and post-meeting feedback), to what degree do you agree with these statements?
    • The Riff Platform helped me connect with my team members.
    • The Riff Platform helped me connect with other people in the course.
    • The Riff Platform helped me connect with course staff.
    • The Riff Platform enabled my team to work effectively together.
    • The Riff Platform enabled me to be more successful in the course.
    • The Riff Platform was easy to use.
    • The Riff Platform did not present technical challenges.
    • The course environment was easy to use.
    • The course environment did not present technical challenges.
    • The coding environment was easy to use.
    • The coding environment did not present technical challenges.
    • I found it easy to navigate between the course environment, the coding environment, and the Riff Platform.
    • I was able to use all the tools in the course without any difficulty.

A2. Key Results

General

  • 73% of course close survey respondents would recommend the course to a friend
  • 89% reported being Satisfied or Very Satisfied with what they learned in the course
  • 63% felt that activities and assignments aided their learning
  • 92% felt well or very well supported by course staff and mentors
  • Net Promoter Score (NPS) = 62

Collaborative exercises

  • 85% of survey respondents reported that the exercises helped them connect with peers
  • 65% of survey respondents reported that the exercises aided their business understanding

Appendix B—Definitions

Correlations: A correlation value quantifies the relationship between the values of two numeric variables. It ranges between -1 and +1, indicating negative and positive relationships, respectively. A correlation of 0 (or near zero) means there is no discernable relationship between the two outcomes that are measured. As an example, if the correlation between Riff Video usage and passing a course is 0.70, that means the more a participant uses Riff Video, the more likely it is that this person will pass the course.

Odds Ratios: The odds ratio ranges from 0 to infinitely large. A ratio above 1 means that the odds improve when the change is made. A ratio below 1 means the odds get worse. A ratio of 1 means the odds didn’t change at all. As an example, if the odds of passing the course for a student who never used the video were 1 to 5, and the odds of passing among students who used the video exactly once were 3 to 7, then the odds ratio associated with exposing a student to one additional video call is [(3/7):(1/5)] = 15/7, which is a little over 2. This would mean that the odds of the student passing the course roughly double when they are exposed to one additional video call. Notice that this is not the same as saying they are twice as likely to pass the course. It says the odds got twice as good.
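Written out as a formula, the arithmetic in this example is:

\[
\text{odds ratio} = \frac{3/7}{1/5} = \frac{3}{7}\times\frac{5}{1} = \frac{15}{7} \approx 2.14
\]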

Significance: The “p-value” is used to quantify the significance of a relationship. The smaller the p-value, the stronger the evidence that some null hypothesis (stating that a relationship between two variables does not exist) can be rejected. Typical threshold p-values (to reject a null hypothesis) are 0.001, 0.01, and 0.05. The strength of the relationship can be determined based on which threshold the p-value falls under.

Acknowledgments

This research is based upon work supported by the National Science Foundation under Grant No. 1843391, SBIR Phase I: Positive Effects of Feedback and Intervention for Engagement in Online Learning.

About the Authors

Beth Porter is the CEO of Riff Analytics. Formerly, Beth was the VP of technical platform at Pearson Education and VP of product engineering at edX. Beth has led research and development of products that seek to transform online teaching and learning, including driving the Open edX initiative at edX and architecting the original Texas OnCourse program. Her engagements in higher education include serving as a lecturer at the Boston University Questrom School of Business.

John A. Doucette is a research scientist. He was an employee of Riff Analytics at the time this research was conducted. Prior to working at Riff Analytics, John was an assistant professor at New College of Florida. He holds degrees from the University of Waterloo and Dalhousie University.

Andrew Reilly is a data scientist at Riff Analytics, where he drives data-centric product development. He is currently a student at New College of Florida completing a Master’s degree in Data Science, and holds a Bachelor’s degree in Psychology from Central Washington University. Prior to working with Riff, he worked as a graduate intern providing business analysis to the Economic Development Corporation of Sarasota County.

Dan Calacci is a PhD student at the MIT Media Lab studying how data and algorithms impact community behavior and governance. Their recent work involves studying how new sources of data and its stewardship can help communities and worker collectives advocate for a more just future. Their artwork and research have been exhibited and presented around the globe.

Burcin Bozkaya is a professor and director of data science at New College of Florida. Prior to joining New College, he worked as a professor of business analytics and director of the Behavioral Analytics and Visualization Lab at Sabanci University and a visiting professor at the MIT Media Lab. His main research interests include behavioral analytics with big data, computational social science, spatio-temporal modeling and analysis, and vehicle routing and logistics.

Alex Pentland, professor at MIT, directs the MIT Connection Science initiative and previously co-created the MIT Media Lab. He is one of the most-cited computational scientists in the world, a member of the National Academy of Engineering, and co-leads the IEEE Council for Extended Intelligence. He is a co-founder of over a dozen companies, including Riff Analytics.

Attribution-NonCommercial-ShareAlike 4.0 International

This entry is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.
