Email, Skype, or Facebook: The Effects of Communication Tool Choice on Team Dynamics and Performance in GVTs
The rapid development of online communication and collaboration tools offers a variety of choices to GVT members. The members can rely on text-only communication channels such as email, add voice using telephone or VoIP tools, or use full audio-and-video tools like Skype. Additionally, a variety of instant messaging, document sharing and co-editing, and other tools are available to aid online collaboration.
The present study seeks to address the following research questions:
Q1: Does the team’s choice of communication channel and collaboration tools affect team dynamics and performance?
Q2: Is the effect (relationship) moderated by team characteristics, communication frequency, or other factors?
Q3: If there is an effect, how exactly does it work (mechanism)?
Q0: What predicts a team’s choice of the communication/collaboration tool? (This can be a “pre-question” in the main study, a separate question in the main study, or maybe a question for a whole new study. It can also be asked first, or last. If we eventually decide to use SEM, it may also be one of the components in the larger model.)
If the team’s choice of the communication/collaboration tools affects team performance, it is useful to know why these differences occur. Are more experienced teams more likely to use more sophisticated online collaboration tools? Or is it more about team size? Geographic dispersion? Or something else? The answer would have implications for team member selection and team composition, as well as for team training and development.
Also, we train people in how to use various online collaboration tools. If we find that more sophisticated collaboration tools indeed improve team dynamics and performance, it’d be important to see if the training helped here. We can check this by testing the predictive power of the readiness test scores on the online collaboration tool choice.
Threat to validity: Causality.
It will not be certain if the teams that choose more sophisticated online collaboration tools are “better” (perform better, have fewer conflicts) or if “better” teams are more likely to choose “better” communication tools. The only way to test this is via random manipulation of the communication tool, which we don’t do. We allow teams to choose the communication tool they prefer. Something for the “limitations and future research” section. Still, think about whether there is a way to establish causality here.
Significance and implications of the studies:
If it turns out the communication and collaboration tools affect team dynamics, this would have important implications for team management and training.
Communication / Collaboration Tool Hierarchy
- Media richness:
- Poor and delayed: Text only, delayed reply – email, Facebook group
- Poor but instant: Text only, instant reply – instant messages via Facebook chat, Viber, WhatsApp, Skype text only
- Medium: Voice: Viber, Skype w/o video, phone
- Rich: Voice and Video: Skype, Facetime, etc.
- Collaboration sophistication:
- Poor: Sequential collaboration, multiple copies: email attachments
- Medium: One copy, but sequential operation, easy review of changes: Dropbox, Basecamp, Track Changes in Word
- Rich: Simultaneous collaboration, single copy, voice support, easy review: Google Docs
Controls, possibly mediators, or maybe even predictors in their own right:
- Communications frequency: how often
- Communication duration: how long (minutes)
- Variety: used only one or multiple channels
Plus, Antecedents of the Team’s Choice/Preference for Communication Tools
- Training: are those who did better on the readiness test more likely to choose “better” communication tools?
- Prior experience?
If we really want to get sophisticated here: Asymmetries.
- We ask each team member individually how frequently/how much they communicated and what tools they used. Presumably, each team member tells us the same story. However, the reality is such that different team members “see” or “remember” things differently. These asymmetries in perceptions of team experiences could be an interesting predictor, outcome or moderator.
- It may be a good idea to dig deeper and see if, for example, teams where different team members paint a different picture differ in terms of their team dynamics and performance. If some team members say they communicated daily and for a long time, and others say they didn’t communicate much, does this gap in the reports mean anything? Could it be indicative of a different level of team member involvement and of conflicts in the team? Anything else?
- Do team members who remember communicating less/more perform better/worse?
- Do team members who remember mainly using text-based communication tools (email) perform better/worse than those who remember more rich communication (video)?
Let’s do some more data mining and exploration to identify the most interesting/publishable stories.
Here are the most promising venues I see (feel free to wander in other directions, too):
- Communication and Free-Riding: Does the frequency/duration/mode of communication affect free-riding/social loafing/effort/motivation in GVTs?
This would be a separate paper, but also fits tightly with the research stream I am working on. This was not part of the original proposal, but it would be a great addition.
Could you please make this one your priority?
- Channel media richness and performance: Do teams that use media-rich channels (video, audio) perform better than teams that use media-poor channels (text only)?
This would be a separate paper that ties into a huge body of literature on communication richness. This topic has been around for decades. Presumably, more media-rich channels (video-audio) carry more information and thereby improve understanding and idea exchange, and as a result the quality of work.
The beauty of it is that we don’t have to worry about the support/no support for the hypothesis affecting the chances of publication.
If we confirm that indeed teams that use more media-rich communication channels perform better, our paper is publishable b/c we confirmed yet again that the hypothesis is correct, but our dataset is the largest so far and we confirmed this in the context of GVTs.
But if we find no effect, that is still very good. We show that this thing everybody believed in may not be important after all, at least not in the GVT context with the younger population.
- More fine-grained look: It may be a good idea to go one more time over all possible communication variables (frequency, duration, type of channel, etc.) and outcomes:
- Effort/Helpfulness: the first dimension of the peer evaluations
- Friendliness/Positive attitude: the last dimension of the peer evaluations
- Motivation: there is a weekly survey question that says “How motivated are you to continue working with your team?” – this one is particularly relevant to the peer evaluations study, in addition to the peer evaluations. “a” tells us what others think about your motivation and effort, “c” shows what you personally feel.
- Confidence: same as “c”, but the question is “How confident are you that your team can accomplish the task on time and at good quality?”
- Productivity: “How active was your team at the different time periods of the project?” – could be interesting to either use an overall average, or better yet weekly productivity indicators to see if teams with some tools/communication frequency are more active early. We literally could take your 3 types of teams (FB, Email, Other) and plot for each type the “activity chart” (active in weeks 1-2, 3-4, 5-6, 7-8). It’d be interesting to see how those charts differ for the different types of teams.
- Conflicts: This can be taken from the “Challenges” part of the data, but we also had a question every week “How many unpleasant and conflict situations have you experienced this week?” (we can take the overall average, or plot these data for each type of communication over time to see if the dynamic profiles of the teams vary here).
- Satisfaction: with the team in general, with the project, with the work completed by the team
- Quality of the report
- Prejudice: We have two questions there (How similar/different do you feel the cultures represented on your team are? How easy/difficult do you think it is to work with the people from the countries represented on your team?). We found in a different study that people tend to reduce their prejudice from before to after the project. The differences they perceive at the beginning of the project get smaller towards the end of the project. People don’t see themselves as different at the end as they did in the beginning. An important question is whether the communication frequency/tools affect those inter-cultural difference/collaboration difficulty perceptions. We could check in general whether those who communicate more, or use FB vs. email, see more differences/difficulties. And we could see if the before-to-after drop in prejudice is moderated by communication (the drop in prejudice is greater when there is more communication). In fact, I won’t be surprised if too little communication (extreme cases of almost no communication) actually increases prejudice, while much communication results in a big drop in prejudice from before to after.
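The “activity chart” idea for the FB/Email/Other team types can be sketched quite directly. A minimal sketch with made-up teams and ratings (the field names and the 1–5 activity scale are hypothetical, not our actual variable names); it groups teams by their main tool and averages the weekly activity ratings per period:

```python
from statistics import mean

# Hypothetical weekly activity ratings (1-5) per team, grouped by the
# communication tool the team mainly used; the four periods correspond to
# the weeks 1-2, 3-4, 5-6, 7-8 windows mentioned above.
teams = [
    {"tool": "FB",    "activity": [2, 3, 4, 5]},
    {"tool": "FB",    "activity": [3, 3, 4, 4]},
    {"tool": "Email", "activity": [1, 2, 2, 4]},
    {"tool": "Email", "activity": [2, 2, 3, 5]},
    {"tool": "Other", "activity": [2, 3, 3, 4]},
]

def activity_profile(teams, tool):
    """Mean activity per period for teams of one type -> the 'activity chart'."""
    rows = [t["activity"] for t in teams if t["tool"] == tool]
    return [round(mean(period), 2) for period in zip(*rows)]

for tool in ("FB", "Email", "Other"):
    print(tool, activity_profile(teams, tool))
```

Plotting one line per team type from these profiles would give exactly the comparison charts described above.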
- Collaboration vs. Communication: It is important to separate “collaboration” into a separate block. Dropbox/Google Docs/Base Camp vs. just email attachments. This is distinct from “communication”. This is a look at the effects of these new tools. This could be a separate paper.
- Semester: Given the substantial work design differences across semesters, it would be a very good idea to conduct these tests separately for different semesters. The design of the project in the different semesters can greatly affect our outcomes (see #3). So it makes sense to analyze those data separately.
- Satisfaction (can be measured at both individual and team levels):
- team commitment and identification
- team member satisfaction,
- peer evaluations overall
- peer evaluations “friendliness and collegiality”
- team chemistry (it’s a tricky variable and can actually be a mediator; it captures whether the team members talk about things other than the task itself, making the work more “human”, more personal)
- peer evals overall
- peer evaluations in the part of “effort”
- Quality of the report
- Plagiarism (similarity rate)
- Progress (we can use the difference b/w the # of pages in the draft vs. final report to see how much was done at the time drafts were due)
- Team productivity (how active was the team at different points in time). This one is tricky, b/c technically we’re measuring perceptions within one team. So the reference points are the different time periods for one team, not the different teams. Still, may be worth checking.
- Output rate: we now ask the teams to submit weekly deliverables (starting 2016). How much they submit every week may be a good indicator of output rate. We can literally count the number of words and see if some communication channels lead to more output.
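The output-rate measure really is just literal word counting. A minimal sketch with made-up weekly submissions (in the real data these would be the deliverables collected starting 2016):

```python
# Hypothetical weekly deliverables for one team.
weekly_deliverables = [
    "Week one draft of the market analysis section.",
    "Expanded market analysis plus a first pass at the competitor review.",
    "Final competitor review and the beginning of the recommendations chapter.",
]

def weekly_word_counts(texts):
    """Word count per weekly submission -> a simple output-rate series."""
    return [len(t.split()) for t in texts]

print(weekly_word_counts(weekly_deliverables))
```

The resulting series per team could then be compared across communication channels, or plotted over time like the activity charts.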
Level of analysis:
- It may be a good idea to do all analyses at the team level. After all, communication is an inherently team-level construct.
- However, we have a bunch of individual-level outcomes, such as personal satisfaction, team identification and commitment, satisfaction with the project, individual effort, etc. Thus, HLM may be more suitable for some analyses.
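One common way to decide between team-level aggregation and HLM is to check how strongly individual responses cluster within teams. A minimal sketch, with made-up satisfaction scores and equal team sizes, computes the one-way ANOVA ICC(1); a sizable ICC would argue for team-level aggregation or multilevel modeling rather than pooling individuals:

```python
from statistics import mean

# Hypothetical individual satisfaction scores nested in teams (3 members each).
scores = {
    "team_a": [4, 5, 4],
    "team_b": [2, 2, 3],
    "team_c": [5, 4, 5],
}

def icc1(groups):
    """One-way ANOVA ICC(1) for equal-sized groups:
    (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    k = len(next(iter(groups.values())))   # members per team
    n = len(groups)                        # number of teams
    grand = mean(x for g in groups.values() for x in g)
    msb = k * sum((mean(g) - grand) ** 2 for g in groups.values()) / (n - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups.values() for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

print(round(icc1(scores), 2))
```

In the real data, team sizes vary, so an HLM package (or the unequal-group ICC formula) would be used instead; this only illustrates the logic of the decision.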
RQ1: Does the choice of media-poor vs. media-rich communication channels and more advanced collaboration tools affect team dynamics and performance, and if so, how?
- Richer media channel > more chemistry, less conflict, greater effort, better performance, more satisfaction (better “social” side and eventually performance)
- More sophisticated collaboration > more productive, faster performance, better quality (better performance)
- Richness > frequency and duration. Does the channel determine the frequency and duration of communication? Presumably richer channels are more personal, so people enjoy them more and thus communicate more.
More sophisticated collaboration > less plagiarism (presumably members check each other’s work and don’t let plagiarized work sneak in)
More sophisticated collaboration > less spread in the individual report quality evaluations within a team (presumably members check each other’s work and won’t let poor-quality chapters sneak in)
More sophisticated collaboration allows team members to be more informed about the true quality of work, so when we look at self-evaluations (both the self-evaluations within the peer evaluations and the self-evaluations of report quality) we’ll see less within-team variance, presumably because people know better what is really going on, so their evaluations are more accurate and representative of the truth.
There may even be less difference between peer evaluations and self-evaluations, as more sophisticated collaboration tools make it easier to see who really does what, so there is less gap between perception and truth.
This last block may even develop into a separate paper.
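The within-team spread and self-vs-peer gap can be operationalized very directly. A minimal sketch with made-up ratings for one team (the 1–7 scale and field names are assumptions): the first statistic is the spread that more sophisticated collaboration should shrink, the second is the perception-vs-truth gap.

```python
from statistics import mean, pstdev

# Hypothetical evaluation data for one team: each member's self-rating of
# report quality and the average rating received from peers (1-7 scale).
evals = [
    {"member": "A", "self": 6, "peer": 5},
    {"member": "B", "self": 5, "peer": 5},
    {"member": "C", "self": 7, "peer": 4},
]

def quality_spread(evals):
    """Within-team SD of peer ratings -- the 'spread' predicted to shrink."""
    return pstdev(e["peer"] for e in evals)

def self_peer_gap(evals):
    """Mean absolute self-vs-peer gap -- the perception/truth distance."""
    return mean(abs(e["self"] - e["peer"]) for e in evals)

print(round(quality_spread(evals), 3), round(self_peer_gap(evals), 3))
```

Computed per team, these two numbers become team-level outcomes that can be regressed on collaboration tool sophistication.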
RQ2: Are the effects moderated by the frequency of the tool use, team size, or other team characteristics?
- Do frequency and duration moderate the relationship? Probably the greater the frequency and duration, the stronger the effect of the communication channel on outcomes. Though we may see otherwise: maybe richer channels make it unnecessary to communicate frequently.
- Do these effects depend on the team size, diversity and time zone dispersion? Presumably, the communication channel becomes really important when the team is larger and more diverse and dispersed.
- Are the team’s cultural intelligence, emotional intelligence, and technical skills at play here?
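A full moderation test would put an interaction term (e.g., richness × team size) into a regression; as a rough first probe, we could also compare the richness–performance slope in small vs. large teams. A minimal sketch with made-up team records (field names and scales are hypothetical):

```python
from statistics import mean

# Hypothetical team-level records: media richness score (1-4), team size,
# and performance (report grade, 0-100).
teams = [
    {"richness": 1, "size": 4, "perf": 62},
    {"richness": 2, "size": 4, "perf": 66},
    {"richness": 3, "size": 4, "perf": 67},
    {"richness": 4, "size": 4, "perf": 70},
    {"richness": 1, "size": 8, "perf": 55},
    {"richness": 2, "size": 8, "perf": 63},
    {"richness": 3, "size": 8, "perf": 72},
    {"richness": 4, "size": 8, "perf": 80},
]

def slope(pairs):
    """OLS slope of y on x: cov(x, y) / var(x)."""
    xs, ys = zip(*pairs)
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x in xs)

def simple_slopes(teams, cut):
    """Richness -> performance slope within small vs. large teams."""
    small = [(t["richness"], t["perf"]) for t in teams if t["size"] <= cut]
    large = [(t["richness"], t["perf"]) for t in teams if t["size"] > cut]
    return slope(small), slope(large)

print(simple_slopes(teams, cut=5))
```

A steeper slope in large teams would be consistent with the prediction that the channel matters more as teams grow; the real analysis should still use the interaction-term regression rather than a median split.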
RQ3: What is the mechanism through which the communication tool choice affects team dynamics and performance? (possible mediators: information exchange, informal communication when more media-rich communication channels are used, better meta-knowledge about other team members’ expertise, etc.).
- Channel > More team chemistry, fewer conflicts, more satisfaction, greater effort, etc. > performance
All kinds of combinations are possible here, but this is the basic idea.
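The Channel > Chemistry > Performance chain above is a classic mediation setup. A minimal sketch with made-up team-level data estimates the indirect effect as the product of path a (richness → chemistry) and path b (chemistry → performance, net of richness, obtained here via residualization):

```python
from statistics import mean

# Hypothetical team-level data: channel richness (X), team chemistry (M),
# report grade (Y).
data = [
    (1, 2.0, 60), (1, 2.5, 62), (2, 3.0, 65), (2, 2.5, 63),
    (3, 3.5, 70), (3, 4.0, 74), (4, 4.5, 78), (4, 4.0, 76),
]

def slope(xs, ys):
    """OLS slope of ys on xs."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

def residuals(xs, ys):
    """Residuals of ys after regressing out xs."""
    b, mx, my = slope(xs, ys), mean(xs), mean(ys)
    return [y - (my + b * (x - mx)) for x, y in zip(xs, ys)]

x = [d[0] for d in data]
m = [d[1] for d in data]
y = [d[2] for d in data]

a = slope(x, m)                               # path a: richness -> chemistry
b = slope(residuals(x, m), residuals(x, y))   # path b: chemistry -> performance, net of X
print("indirect effect a*b =", round(a * b, 3))
```

In the real analysis this would be done in an SEM or with bootstrapped indirect effects; the sketch only shows the product-of-coefficients logic.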
Contact person: Dr. Vas Taras, email@example.com