Friday, April 18, 2014

Grounded Theory Building for the Workplace

My new article, titled Grounded Theory Building for the Workplace, has been published. It appears in the current issue of Performance Improvement. Below you will find the abstract for the article, a diagram outlining the grounded theory building process, and the reference for the article.

Abstract:
Developing and utilizing theories in the workplace are critical for managers and practitioners to make sense of real-world issues. Grounded theory building is a viable research methodology that can be utilized in the workplace to help managers and practitioners develop theories, making better sense of workplace issues. This article looks at a general model of grounded theory building and introduces some of the key components involved in conducting grounded theory building.

The following diagram is provided in the published article. It outlines one process for the grounded theory building method that managers and practitioners can use in the workplace.



Reference:
Turner, J. R. (2014). Grounded theory building for the workplace. Performance Improvement, 53(3), 31-38. doi:10.1002/pfi.21401



Thursday, February 20, 2014

Team Conflict: Cognition Conflict as a New Construct

Below is the link to my presentation slides introducing a new team conflict construct to the literature, cognition conflict. This presentation was given at the AHRD International Conference in the Americas in Houston, TX.

http://www.slideshare.net/JohnTurner5/turner-team-cognconflictpresentation

Friday, February 14, 2014

Emergent Constructs

In the literature you often find researchers measuring individuals and aggregating their scores for analysis at the group level. In organizational research, aggregating individual scores to a group or departmental level opens up countless opportunities for making better sense of the workplace compared to traditional single-level research efforts. Practitioners could use this method to provide better analyses to their customers. This practice, in most cases, can be justified. However, improper aggregation can lead to a misspecified model.

Prior to aggregating data from a lower level to a higher level, one needs to determine what type of emergence the construct or variable exhibits. Emergence can best be thought of as a transformational process. The question to ask is: when aggregating a lower-level construct to a higher-level construct, do the characteristics or meaning of the data change? Emergence can be characterized by two qualitative types: composition and compilation (Kozlowski & Klein, 2000). Kozlowski and Klein (2000) describe composition as being isomorphic, in which the lower-level phenomenon is essentially unchanged as it is aggregated to a higher-level phenomenon. Alternatively, compilation describes phenomena that comprise a "common domain but are distinctively different as they emerge across levels" (Kozlowski & Klein, 2000, p. 16).

A simple example that can help distinguish composition from compilation is classroom learning. If you have a classroom of 10 grade-school children and you teach each individual student simple addition for the numbers 0 to 10, you would expect each student to learn how to add numbers from 0 to 10. By assessing the classroom's average grade on a test of addition (0-10), without allowing any students to interact, you would expect to get a general sense of how much each student learned. In this case, individual learning reflects the classroom's learning. This example reflects the emergence concept of composition, since individual learning is well represented by the classroom's average grade.

Alternatively, take the same 10 grade-school children and teach the first student addition only with the number 0, the second student only with the number 1, and so forth. Then allow the students to interact and share what they have learned before testing the classroom. Would you get a similar grade? The average grade would now reflect each student's individual learning plus the learning gained through interaction with classmates. Individual learning, in this case, does not reflect the classroom's learning. The mediating factor, or catalyst, is the students' interaction, in which they were allowed to share their learning and experiences with one another. This example reflects compilation, where the lower level (individual student learning) is similar to but distinctively different from the higher level (classroom learning). The point is not to compare the effectiveness of the first example to the second; it is only to contrast composition with compilation. A toy numerical sketch of the two cases follows.
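The sketch below is my own illustration of the classroom example, with made-up numbers. In the composition case the class mean simply mirrors individual learning; in the compilation case the class score also depends on a hypothetical gain from students sharing what they know, so the mean of individual scores alone understates what the classroom learned.

```python
import numpy as np

# Composition: every student was taught the same material (addition 0-10),
# so the class average is an adequate summary of individual learning.
individual_scores_composition = np.array([8, 9, 7, 10, 8, 9, 7, 8, 9, 10])
class_score_composition = individual_scores_composition.mean()

# Compilation: each student was taught only one piece of the material.
# Individually they know little, but interaction lets them pool knowledge.
individual_scores_compilation = np.array([1, 1, 2, 1, 1, 2, 1, 1, 1, 1])
interaction_gain = 6  # hypothetical boost from sharing across classmates
class_score_compilation = individual_scores_compilation.mean() + interaction_gain

print(class_score_composition)   # 8.5: mirrors individual learning
print(class_score_compilation)   # 7.2: not recoverable from individual means alone
```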

Composition and compilation need to be considered during the initial design of a research project, prior to collecting data. If your level of measurement is at a lower level (e.g., the individual level) and your level of analysis is at a higher level (e.g., the team level), then you need to utilize measures that meet compositional emergence criteria. If you are utilizing compilation constructs, then your level of measurement needs to be at the same level as your level of analysis (e.g., team level and team level). Compilation constructs change meaning when they are aggregated, which leads to a misspecified model.

Aggregation can be useful for both single-level and multilevel research. Careful planning of each construct, its level of measurement, and its level of analysis is needed. Klein and Kozlowski (2000) described the importance of a priori planning: "Rigorous multilevel research rests… on the careful definition, justification, and explication of the level of each focal construct in the model" (p. 214). I would add that this applies to single-level research as well, especially when aggregated constructs are being used.

References:

Klein, K. J., & Kozlowski, S. W. J. (2000). From micro to meso: Critical steps in conceptualizing and conducting multilevel research. Organizational Research Methods, 3, 211-236. doi:10.1177/109442810033001

Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent processes. In K. J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 3-90). San Francisco, CA: Jossey-Bass.


Saturday, December 21, 2013

Multilevel Units for Organizational Research - Beware of Misspecification Errors

Some common errors in organizational research include misspecification errors:
  • blind aggregation of individual-level measures to represent unit-level constructs,
  • use of unit-level measures to infer lower-level relations (the well-known problems of aggregation bias and ecological fallacies),
  • and use of informants who lack unique knowledge or experience to assess unit-level constructs (Kozlowski & Klein, 2000).

In the past, organizational studies have primarily concentrated on single-level analysis. However, with advancements in statistical software and techniques, conducting only a simple single-level analysis is becoming harder to justify. Single-level research studies are being replaced today with more complex multilevel analysis techniques. In hierarchically nested systems, such as organizations, a change made in one part of the system also affects each adjoining system, changing the whole system: the organization. By concentrating only on a single level, the researcher ignores the surrounding environment, the effect that the individual has on the group and organization, and, conversely, the effect that changes in the organization have on the team and on the individual.

Klein and Kozlowski (2000) highlighted a key benefit of addressing organizational research with multilevel analysis: the ability to better understand the complexity of phenomena that take place across levels in organizations.
"Organizations are hierarchically nested systems. To neglect these systems' structure in our conceptualization and research designs is to develop incomplete and misspecified models" (p. 232).

Misspecification occurs when measures taken at one level, say the individual level, are used to make generalizations or inferences at a different level, say the team level. A properly specified model starts with the level of analysis the researcher is interested in: "the outcome variable is measured at the lowest level of interest to the researcher" (Hofmann, Griffin, & Gavin, 2000, p. 489). The dependent variable(s) should be measured at the level the researcher is interested in. Hence, if the researcher is interested in how team constructs affect individual team members, then the dependent variable needs to be an individual measure. This results in a two-level study with the dependent variable at the individual level, measures representing the individual team members as level-1 measures, and team constructs represented as level-2 measures. Hypotheses can then test any proposed interaction that may take place between levels. Klein and Kozlowski (2000) identified:
"Hypotheses in multilevel research are level-specific. Thus, hypotheses describe not simply the direction - positive or negative - of the relationship between constructs but also the level or levels of each predicted relationship: single, cross-level direct, cross-level moderating, or multilevel homologous" (p. 233).

Unit-level constructs need to be clearly defined in the preliminary stages of specifying any model. Kozlowski and Klein (2000) identified three basic types of unit-level constructs: global, shared, and configural unit properties. Global unit properties are constructs that are measured at the unit level and do not originate at any lower level; group size and group type are two examples (Kozlowski & Klein, 2000). Shared unit properties originate at one level and have a similar (isomorphic) meaning at the next level. Examples of shared unit properties include team performance (Kozlowski & Klein, 2000), team cohesion, team norms, team climate, and team mental models (Klein & Kozlowski, 2000). Individual performance, for example, can be aggregated to represent team performance, an isomorphic construct. Configural unit properties also originate at the lower level, as shared unit properties do, but the upper-level construct is dissimilar (non-isomorphic) to the lower-level construct. Examples include diversity (Kozlowski & Klein, 2000), team personality composition, team interpersonal network density (Klein & Kozlowski, 2000), and team culture. Each of these constructs can take on different properties at the individual level compared to the team or organizational level. Configural unit properties cannot simply be aggregated, or summed, since they take on different meanings at different levels.
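To make the shared-versus-configural distinction concrete, here is a small illustration of my own (the data and column names are made up): a shared property such as team performance can be represented by the team mean of individual scores, whereas a configural property such as age diversity is commonly indexed by within-team dispersion rather than a mean.

```python
import pandas as pd

df = pd.DataFrame({
    "team_id":     [1, 1, 1, 2, 2, 2],
    "performance": [78, 82, 80, 65, 90, 70],  # individual performance scores
    "age":         [29, 31, 30, 24, 58, 41],  # individual attribute
})

team_level = df.groupby("team_id").agg(
    team_performance=("performance", "mean"),  # shared (isomorphic) property
    age_diversity=("age", "std"),              # configural property via dispersion
)
print(team_level)
```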

Each measure representing a construct in the model needs to be identified with its unit properties correctly specified. Before aggregating a measure from the individual level to the team level, for example, the construct must be specified as a shared unit property; a configural unit property cannot be aggregated in this way, as doing so would lead to model misspecification. Prior to aggregating shared units, correct statistical procedures need to be followed. Klein and Kozlowski (2000) provide methods and guidelines for aggregating measures from one level to the next. These guidelines include the rwg, rwg(j), ICC(1), ICC(2), and WABA agreement and reliability measures. While no single measure covers all possible scenarios, it is recommended that more than one be calculated. I typically prefer to calculate either rwg or rwg(j), followed by ICC(1) and ICC(2). More details on each of these measures will be provided in future blog posts.
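As a preview, the sketch below shows one common way to compute rwg for a single item within one group and ICC(1)/ICC(2) from a one-way ANOVA. It is my own illustration, assuming a 5-point response scale and a uniform null distribution for rwg; it is not code from the cited chapters.

```python
import numpy as np
import pandas as pd

def rwg(ratings, n_options=5):
    """Within-group agreement for a single item:
    1 - (observed variance / variance expected under a uniform null)."""
    expected_var = (n_options ** 2 - 1) / 12.0
    return 1 - (np.var(ratings, ddof=1) / expected_var)

def icc1_icc2(df, group_col, score_col):
    """ICC(1) and ICC(2) from a one-way random-effects ANOVA."""
    groups = [g[score_col].values for _, g in df.groupby(group_col)]
    n_total, n_groups = len(df), len(groups)
    k = np.mean([len(g) for g in groups])          # average group size
    grand_mean = df[score_col].mean()
    ms_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (n_groups - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - n_groups)
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc2 = (ms_between - ms_within) / ms_between
    return icc1, icc2

# Example with made-up climate ratings from two teams
df = pd.DataFrame({"team": [1, 1, 1, 2, 2, 2],
                   "climate": [4, 4, 5, 2, 3, 2]})
print(rwg(df.loc[df["team"] == 1, "climate"]))
print(icc1_icc2(df, "team", "climate"))
```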

References:
Hofmann, D. A., Griffin, M. A., & Gavin, M. B. (2000). The application of hierarchical linear modeling to organizational research. In K. J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 467-511). San Francisco, CA: Jossey-Bass.

Klein, K. J., & Kozlowski, S. W. J. (2000). From micro to meso: Critical steps in conceptualizing and conducting multilevel research. Organizational Research Methods, 3, 211-236. doi:10.1177/109442810033001

Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent processes. In K. J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 3-90). San Francisco, CA: Jossey-Bass.

Thursday, November 28, 2013

Team Cognition Conflict

I will be attending the 2014 AHRD International Conference in the Americas. This AHRD conference will take place in Houston, TX, from 2/19/2014 to 2/22/2014. 
I will be introducing a new construct to the team conflict literature. The current literature identifies team conflict as being multidimensional, consisting of task, relationship, and process conflict (Behfar, Mannix, Peterson, & Trochim, 2011; Greer, Jehn, & Mannix, 2008; Jehn & Chatman, 2000; Song, Dyer, & Thieme, 2006). Task conflict concerns primarily work-related issues, relationship conflict concerns personal or social issues not related to the work, and process conflict relates to procedural issues.
In one of my areas of interest and study, team cognition, there have been many advances in the literature identifying the different cognitive processes that take place in teams and small groups. From these advances I feel that a new construct, team cognition conflict, should be incorporated into the team conflict literature. Cognition conflict is separate from the constructs previously identified in the literature, giving team conflict four main sub-dimensions: task, relationship, process, and cognition conflict. By further differentiating team conflict into better-defined dimensions, researchers will be able to identify team conflict more clearly, providing better predictive measures for team performance and decision-making abilities. This addition to the team conflict literature also responds to Song, Dyer, and Thieme's (2006) call for further research identifying different types of team conflict.
The model presented below introduces the outline of the team conflict theoretical framework that will be presented at the AHRD conference in a roundtable format.

[Figure 1: Team conflict theoretical framework (Turner, J. R., 2013)]

References:
Behfar, K. J., Mannix, E. A., Peterson, R. S., & Trochim, W. M. (2011). Conflict in small groups: The meaning and consequences of process conflict. Small Group Research, 42, 127-176. doi:10.1177/1046496410389194
Greer, L. L., Jehn, K. A., & Mannix, E. A. (2008). Conflict transformation: A longitudinal investigation of the relationships between different types of intragroup conflict and the moderating role of conflict resolution. Small Group Research, 39, 278-302. doi:10.1177/1046496408317793
Jehn, K. A., & Chatman, J. A. (2000). The influence of proportional and perceptual conflict composition on team performance. The International Journal of Conflict Management, 11, 56-73. doi:10.1108/eb022835
Song, M., Dyer, B., & Thieme, J. R. (2006). Conflict management and innovation performance: An integrative contingency perspective. Journal of the Academy of Marketing Science, 34, 341-356. doi:10.1177/00092070306286705
Turner, J. R. (2014). Team cognition conflict: A conceptual review identifying cognition conflict as a new team conflict construct. Paper to be presented at the 2014 AHRD International Conference in the Americas, Houston, TX, February 2014.

Team Shared Cognition Constructs - New Publication


Final approval has just been received to publish my recent article, titled "Team Shared Cognitive Constructs: A Meta-Analysis Exploring the Effects of Shared Cognitive Constructs on Team Performance."
This has been a long process, from conference proceedings introducing the meta-analysis techniques, through the peer review process, to, ultimately, final approval to publish.
This article will be published in Performance Improvement Quarterly (PIQ), the flagship publication of the International Society for Performance Improvement (ISPI). The reference information is provided below (no volume, issue, or page numbers are available at this time):
Turner, J. R., Chen, Q., & Danks, S. (2014). Team shared cognitive constructs: A meta-analysis exploring the effects of shared cognitive constructs on team performance. Performance Improvement Quarterly. Manuscript accepted for publication.
These emerging shared cognition constructs are beginning to be identified as critical to the success of team and small-group performance and problem-solving efforts. As identified in the article, more study is needed in these areas.
In a previous post I presented the conference proceedings introducing the meta-analysis techniques used.
That post also included the presentation slides that were used during the conference.
The original presentation was designed for two purposes: 1) to introduce the emerging constructs of team shared cognition, and 2) to present the steps required to conduct a comparative meta-analysis study. In summary, the team shared cognition constructs that were compared are provided in the table below, titled 'Shared Cognitive Constructs'.

[Table: Shared Cognitive Constructs]
In conclusion, the results from the meta-analysis are provided in the slide below, titled 'Conclusion'.

[Slide: Conclusion]
Limitations: 
As identified in the manuscript, the sample size for this meta-analysis was small. The small sample size prevented making any type of inference from the results. However, the main purposes of this study were to 1) identify the different constructs currently being studied across various disciplines, and 2) compare these constructs to shed some light on which ones were associated with better performance outcomes. Because these shared cognition constructs are still emerging, meaning they are newly developing constructs, there is not a lot of research available to begin with. Thus, a secondary purpose of this study was to call on researchers to contribute further research on these emerging constructs, beginning with those identified in this meta-analysis as being potentially better predictors of performance: information sharing, cognitive consensus, and shared mental memory.

Sunday, September 22, 2013

Why Theories Are Important



Theories are needed: 

"to satisfy a very human 'need' to order the experienced world. The only instrument employed in the ordering process is the human mind and the 'magic' of human perception and thought" (Dubin, 1978, p. 7)

A theory's purpose is to either predict or explain the phenomenon being studied (Creswell, 2014; Dubin, 1978). Theories are conceptual models identifying the relationships between concepts, constructs, variables, and events, structured around a predefined set of boundaries (limitations). Jaccard and Jacoby (2010) reflect this in their definition of a theory: "an explanation of relationships among concepts or events within a set of boundary conditions" (p. 112).

A theory remains a conceptual model up to the point that the researcher tests it; at that point, the theoretical model becomes a scientific model (Dubin, 1978). It is through testing that a theoretical model is either accepted or rejected. Theoretical models are accepted when they have been subjected to empirical testing and shown to be useful (Jaccard & Jacoby, 2010); likewise, they are not accepted when empirical testing has not shown them to be useful. A theoretical model is deemed valid through empirical testing, and is deemed useful or not useful (utility) by peers in academia and by those in practice (consensual evaluation; Jaccard & Jacoby, 2010). To be considered scientific, Jaccard and Jacoby (2010) identified that theoretical models must achieve empirical verification or falsification. This is done through testing the theoretical model.

Additionally, empirical research requires theoretical or conceptual models to identify the connections and relatedness of the variables being tested. The theoretical model provides the foundation for the hypotheses being tested in empirical research. The theoretical model also makes it easier for other scientists to replicate a study.

Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage.
Dubin, R. (1978). Theory building (Revised ed.). New York, NY: The Free Press.
Jaccard, J., & Jacoby, J. (2010). Theory construction and model-building skills: A practical guide for social scientists [Kindle]. Retrieved from Amazon.com