Tuesday, July 8, 2014

Scientific Value / Theoretical Contribution

When planning a research study or developing a theory to explain a particular phenomenon, a contribution of new knowledge to a field is judged partly on the scientific value that the research or theory provides to that field. The scientific value of a contribution is evaluated, according to Polanyi (2009), by three factors:
  • its exactitude
  • its systematic importance, and
  • the intrinsic interest of the subject matter (p. 66).

Exactitude relates to the accuracy of the contribution, often reflected in the type of methodology used. The analysis for a research study needs to be conducted using the proper statistical methodology, and the assumptions of that methodology need to be stated and met. Although theory is not a direct function of the exactitude factor, theory provides the foundation for the research in which statistical analyses are made possible.

Theory plays more of a role in the latter two factors. In the systematic importance factor, systematic relates to the constructs or variables that define the components of the presented theory, along with any interactions between those constructs and variables. The systematic importance factor helps provide a plausible explanation of how the parts of the theory combine into a composite (whole) theory that addresses the phenomenon or problem in question.

Lastly, the intrinsic interest factor relates to the researcher addressing a new phenomenon or problem that a particular field of study is interested in pursuing. Addressing a known problem that has been researched previously and already has a number of potential solutions will not be very interesting to those in the field who review the article for acceptance, nor to the readers in that field. Identifying a new or novel idea, or addressing a new problem that the field of study is faced with, will be of more interest and value, not only to those reviewing the article but also to those reading it once published. Capturing the audience is important (interest), and identifying what matters to the field you are writing for (intrinsic) addresses the concerns of that field (intrinsic interest).


Polanyi, M. (2009). The tacit dimension. Chicago, IL: The University of Chicago Press.

Monday, June 30, 2014

Research or evaluation: Companies and institutions reluctant to take the step, or researchers unable to provide benefits to the company for participating

In academic research, some caution that research studies are biased toward adolescents and students, since the majority of studies use students as subjects or participants. This point has been highlighted by Brookshire (2014): “Sixty-seven percent of American psychology studies use college students….This means that many or even most of the subjects are teenagers” (para. 4). This bias toward students as research subjects exists primarily because such a sample is “the epitome of a convenience sample, they have become the basis for what some critics call the science of the sophomore” (The Numbers Guy, 2014, para. 4).

Granted, for the purposes of students learning how to conduct research, using fellow students as subjects is a good pedagogic exercise. However, when researchers, or the readers of these research articles, try to generalize the findings to other populations, problems can occur. So the question arises: Why isn’t more research conducted with samples from the workplace (rather than students)?

Over the past six months I have been trying to find a company, or a few companies, to participate in my research study, which is for my dissertation. All that is required of the participating company is for the selected employees to complete an online survey, taking approximately 20 to 25 minutes of their time. The data would provide the participating company with information on where knowledge sharing is taking place, as well as identify what barriers are preventing knowledge sharing from occurring. Once a company knows what barriers to knowledge sharing its teams/groups are experiencing, training could easily be selected to address these issues. Successful completion of this training would result in better knowledge sharing across teams/groups and, in turn, better decision-making and problem-solving processes for those teams/groups.

From my experience thus far, I offer the following possible answers as to why more research on companies/institutions is not being conducted by academia.
(a) One simple answer is that it is hard work and time consuming. (b) A second answer is that companies/institutions do not want to take the time to support external research projects. (c) And thirdly, companies do not want people from outside the company to be able to evaluate them.

(a) Yes, some researchers may find it easier to sample college students since they are already available to the researcher. However, in the social science fields (e.g., psychology, sociology, human resources) there are populations other than those between the ages of 17 and 22 years. The Numbers Guy (2014) provided the following quote (from Prof. Nosek) highlighting this same point: “‘The scientific reward structure does not benefit someone who puts in the enormous effort’ to create a representative research sample” (para. 14). Perhaps this bias toward sampling students stems from companies’ and institutions’ resistance to participating in external studies, which brings us to the second point.

(b) Companies may not see the need or the benefit, or may feel that they do not have the time to entertain external researchers so that those researchers can benefit themselves rather than the participating companies. Companies may feel that researchers are pushing for personal gain rather than trying to benefit the company. Regardless of companies’ perception of external research, companies should be more willing to review research requests to see whether there is any benefit the company could gain from participating in the study. The researcher should provide a well-presented summary of the study and the benefits the participating company could gain from its participation. By providing a list of benefits, the researcher has a better chance of companies agreeing to participate than by providing none. Offer to co-author the paper with representatives from the company. Some companies may wish to get noticed in the literature as much as the researcher does. Other companies, however, want to avoid getting noticed. In that case the researcher has to work on the final report with representatives from the company, editing it until the company feels that it is being protected. Then, and only then, can the researcher submit for publication. In either case, the researcher needs to work with the participating company when publishing data and results relating to that company.

(c) I asked an acquaintance for permission to collect data from the employees this person was in charge of. After reviewing the survey items (questions), this person declined for the following reasons: the questions contained negative items, and there were no issues at their institution. Rather than stating that they were not interested, they made excuses revealing their lack of expertise in research methodologies. Either way, you have to respect their decision and move on to the next company. The point here is that the decision maker for this institution was not interested in evaluating their organization. Claiming that they had no issues at their place of work, with no evaluation measures to support such a statement, leads one to believe that the culture is: don’t measure what you don’t want to know.

Overall, practitioner-researcher relationships need to be built up to increase both the willingness of companies/institutions to allow external research projects to be conducted and the ability of researchers to provide real-world solutions that benefit the customer. Van de Ven (2007) calls this relationship engaged scholarship, referring to a “participative form of research” (p. 9). This participative form engages both the practitioner and the researcher to conduct research that is pragmatic and worthwhile to the organization, while allowing the researcher to meet their need of contributing new knowledge to their field of study through publication.


Brookshire, B. (2014). Psychology is WEIRD. Retrieved from http://www.slate.com/articles/health_and_science/science/2013/05/weird_psychology_social_science_researchers_rely_too_much_on_western_college.html

Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44, 350-383. doi:10.2307/2667000

The Numbers Guy. (2014, August 10). Too many studies use college students as their guinea pigs. The Wall Street Journal. Retrieved from http://online.wsj.com/news/articles/SB118670089203393577


Van de Ven, A. H. (2007). Engaged scholarship: A guide for organizational and social research. New York, NY: Oxford University Press.

Saturday, May 3, 2014

Case Study Research: Making Sense in the Workplace

In keeping with the theme of conducting research in the workplace and making better sense of workplace issues, I have a new publication titled Case Study Research: A Valuable Learning Tool for Performance Improvement Professionals. This publication provides general information on case study research for managers and practitioners alike, and coincides with my previous publication on grounded theory building. Both of these articles provide tools to help the manager and practitioner make better sense of problems they may encounter in the workplace.

This article was published in Performance Improvement Journal (PI) with the support of one of my peers, S. Danks. The abstract for the article is provided below, along with the APA reference.

Abstract:

Although it is sometimes recommended that performance improvement (PI) professionals include experimental research designs in their repertoire of PI tools and methods, it has been long understood that experimental designs can be difficult to implement due to impediments resulting from the complex nature of the organizational settings. However, the utilization of case study research has proven to be an effective alternative to aid in the identification of strengths and opportunities for the improvement of organizational procedures, policies, processes, or programs. Case study research helps managers and practitioners make sense of real world problems. This article presents a summary of steps in the design of case study research and provides examples of how these methods have been used within organizational settings. Implications for PI practitioners are provided.

Turner, J. R., & Danks, S. (2014). Case study research: A valuable learning tool for performance improvement professionals. Performance Improvement, 53, 24-31. doi:10.1002/pfi.21406

Friday, April 18, 2014

Grounded Theory Building for the Workplace

My new article is out, titled Grounded Theory Building for the Workplace. It appears in the most recent issue of the Performance Improvement journal. Below you will find the abstract for this article, a diagram outlining the grounded theory building process, and the reference for the article.

Abstract:
Developing and utilizing theories in the workplace are critical for managers and practitioners to make sense of real-world issues. Grounded theory building is a viable research methodology that can be utilized in the workplace to help managers and practitioners develop theories, making better sense of workplace issues. This article looks at a general model of grounded theory building and introduces some of the key components involved in conducting grounded theory building.

The following diagram is provided in the published article. This diagram explains one process for the grounded theory building method that could be used by those in the workplace and by practitioners.



Reference:
Turner, J. R. (2014). Grounded theory building for the workplace. Performance Improvement, 53(3), 31-38. doi:10.1002/pfi.21401



Thursday, February 20, 2014

Team Conflict: Cognition Conflict as a New Construct

Attached is the link for my presentation slides introducing a new team conflict construct to the literature, cognition conflict. This presentation was made at the AHRD International Conference in the Americas, in Houston, TX. 

http://www.slideshare.net/JohnTurner5/turner-team-cognconflictpresentation

Friday, February 14, 2014

Emergent Constructs

In the literature you often find researchers measuring individuals and aggregating their scores for analysis at the group level. In organizational research, aggregating individual scores to the group or departmental level opens up countless opportunities for making better sense of the workplace compared to traditional single-level research efforts. Practitioners could use this method to provide better analyses to their customers. This practice, in most cases, can be justified. However, improper aggregation can lead to a misspecified model.

Prior to aggregating data from a lower level to a higher level, one needs to determine what type of emergence the construct or variable exhibits. Emergence can best be thought of as a transformational process. The question to ask is: when aggregating a lower-level construct to a higher-level construct, do the characteristics or meaning of the data change? Emergence can be characterized by two qualitative types: composition and compilation (Kozlowski & Klein, 2000). Kozlowski and Klein (2000) describe composition as being isomorphic, in which the lower-level phenomenon is essentially unchanged as it is aggregated to a higher-level phenomenon. Alternatively, compilation describes phenomena that share a “common domain but are distinctively different as they emerge across levels” (Kozlowski & Klein, 2000, p. 16).

A simple example that helps distinguish composition from compilation is classroom learning. If you have a classroom of 10 grade-school children and you teach each individual student simple addition for the numbers 0 to 10, you would expect each student to learn how to add numbers from 0 to 10. By assessing the classroom’s average grade on a test of addition (0-10), without allowing any students to interact, you would have a general sense of how much each student learned. In this case, individual learning reflects the classroom’s learning. This example reflects the emergence concept of composition, since individual learning is well represented by the classroom’s average grade.

Alternatively, take the same 10 grade-school children and teach one student addition with the number 0, teach the second student addition with the number 1, and so forth. Then allow the students to interact and share what they have learned, and test the classroom: would you get a similar grade? The average grade would now reflect each individual student’s learning plus the learning gained from interacting with classmates. Individual learning, in this case, does not reflect the classroom’s learning. The mediating factor, or catalyst, is the students’ interaction, in which they were allowed to share their learning and experiences with one another. This example reflects compilation, where the individual level (individual student learning) is similar to but distinctively different from the higher level (classroom learning). The point is not to compare the effectiveness of the first example to the second; it is only to contrast composition with compilation.

Composition and compilation need to be considered during the initial design of a research project, prior to collecting data. If your level of measurement is at a lower level (e.g., the individual level) and your level of analysis is at a higher level (e.g., the team level), then you need to utilize measures that meet the criteria of compositional emergence. If you are utilizing compilation constructs, then your level of measurement needs to be at the same level as your level of analysis (e.g., team level and team level). Compilation constructs change meaning when they are aggregated, which leads to a misspecified model.
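To make this distinction concrete, a composition construct can be aggregated with a simple mean, while a compilation-type (configural) construct needs an index that preserves the pattern across members, such as within-team dispersion. The sketch below uses hypothetical conscientiousness scores on a 1-7 scale; the team names and values are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical individual conscientiousness scores (1-7 scale) per team
teams = {
    "Team A": [4, 4, 5],   # members are similar to one another
    "Team B": [1, 4, 7],   # members are highly dissimilar
}

for name, scores in teams.items():
    # Composition: a team mean assumes the construct is isomorphic across levels
    team_mean = mean(scores)
    # Configural pattern: a dispersion index (sample standard deviation)
    # captures what the mean hides
    team_sd = stdev(scores)
    print(f"{name}: mean = {team_mean:.2f}, sd = {team_sd:.2f}")
```

Note that the two teams have nearly identical means (4.33 vs. 4.00), so a mean-only aggregation would treat them as equivalent even though their composition patterns differ sharply; that is why a compilation construct cannot be represented by a simple average.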

Aggregation can be useful for both single-level and multilevel research. Careful planning of each construct, its level of measurement, and its level of analysis is needed. Klein and Kozlowski (2000) described the importance of a priori planning: “Rigorous multilevel research rests… on the careful definition, justification, and explication of the level of each focal construct in the model” (p. 214). I would add that this applies to single-level research as well, especially when aggregated constructs are being used.

References:

Klein, K. J., & Kozlowski, S. W. J. (2000). From micro to meso: Critical steps in conceptualizing and conducting multilevel research. Organizational Research Methods, 3, 211-236. doi:10.1177/109442810033001

Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent processes. In K. J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 3-90). San Francisco, CA: Jossey-Bass.


Saturday, December 21, 2013

Multilevel Units for Organizational Research - Beware of Misspecification Errors

Some common errors in organizational research include misspecification errors:
  • blind aggregation of individual-level measures to represent unit-level constructs,
  • use of unit-level measures to infer lower-level relations (the well-known problems of aggregation bias and ecological fallacies),
  • and use of informants who lack unique knowledge or experience to assess unit-level construct (Kozlowski & Klein, 2000).

In the past, organizational studies have primarily concentrated on single-level analysis. However, with advancements in statistical software and techniques, conducting only a simple single-level analysis is becoming harder to justify. Single-level research studies are being replaced by more complex multilevel analysis techniques. In hierarchically nested systems, such as an organization, a change made in one part of the system affects each adjoining system, changing the whole system: the organization. By concentrating only on a single level, the researcher ignores the surrounding environment, the effect that the individual has on the group and the organization, and, conversely, the effect that changes in the organization have on the team and on the individual.

Klein and Kozlowski (2000) highlighted the benefit of addressing organizational research with multilevel analysis: it allows us to better understand the complexity of phenomena that take place across levels in organizations.
"Organizations are hierarchically nested systems. To neglect these systems' structure in our conceptualization and research designs is to develop incomplete and misspecified models" (p. 232).

Misspecification occurs when measures taken at one level, say the individual level, are used to make generalizations or inferences at a different level, say the team level. A properly specified model begins with the level of analysis the researcher is interested in: "the outcome variable is measured at the lowest level of interest to the researcher" (Hofmann, Griffin, & Gavin, 2000, p. 489). The dependent variable(s) should be measured at the level the researcher is interested in. Hence, if the researcher is interested in how team constructs affect individual team members, then the dependent variable needs to be an individual measure. This results in a two-level study with the dependent variable at the individual level, measures representing the individual team members as level-1 measures, and team constructs represented as level-2 measures. Hypotheses can then test any proposed interaction between levels. Klein and Kozlowski (2000) identified:
"Hypotheses in multilevel research are level-specific. Thus, hypotheses describe not simply the direction - positive or negative - of the relationship between constructs but also the level or levels of each predicted relationship: single, cross-level direct, cross-level moderating, or multilevel homologous" (p. 233).
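As a minimal sketch of this two-level data structure (all variable names and values here are hypothetical), each level-1 record holds the individual-level dependent variable, and the level-2 team construct is attached to every member of that team, as required for testing a cross-level direct effect:

```python
# Hypothetical level-2 (team) construct: one score per team,
# e.g., a shared team-climate score
team_climate = {"T1": 3.8, "T2": 2.5}

# Hypothetical level-1 (individual) records: the dependent variable
# is measured at the individual level
individuals = [
    {"id": 1, "team": "T1", "dv_performance": 4.2},
    {"id": 2, "team": "T1", "dv_performance": 3.9},
    {"id": 3, "team": "T2", "dv_performance": 2.8},
]

# Attach the level-2 predictor to each level-1 record; a cross-level
# direct-effect hypothesis relates team_climate to dv_performance
for row in individuals:
    row["team_climate"] = team_climate[row["team"]]

for row in individuals:
    print(row)
```

The point of the sketch is only the data layout: every member of T1 carries the same team-level value, which is what makes the hypothesis level-specific (team construct predicting an individual outcome) rather than a single-level relationship.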

Unit-level constructs need to be clearly defined in the preliminary stages of specifying any model. Kozlowski and Klein (2000) identified three basic types of unit-level constructs: global, shared, and configural unit properties. Global unit properties are constructs that are measured at the unit level and do not originate at any lower level; group size and group type are two examples (Kozlowski & Klein, 2000). Shared unit properties are measures that originate at one level and have a similar (isomorphic) meaning at the next level. Examples of shared unit properties include team performance (Kozlowski & Klein, 2000), team cohesion, team norms, team climate, and team mental models (Klein & Kozlowski, 2000). Individual performance, for example, can be aggregated to represent team performance, an isomorphic construct. Configural unit properties also originate at the lower level, as shared unit properties do, but the upper-level construct is dissimilar (non-isomorphic) to the lower-level construct. Examples include diversity (Kozlowski & Klein, 2000), team personality composition, team interpersonal network density (Klein & Kozlowski, 2000), and team culture. Each of these constructs can take on different properties at the individual level compared with the team or organizational level. Configural unit properties cannot be aggregated, or summed, since they take on different meanings at different levels.

Each measure representing a construct in the model needs to have its unit properties correctly specified. Before aggregating a measure from the individual level to the team level, for example, the construct must be specified as a shared unit property; a configural unit property cannot be aggregated, as this would lead to model misspecification. Prior to aggregating shared units, the correct statistical procedures need to be followed. Klein and Kozlowski (2000) provide methods and guidelines for aggregating measures from one level to the next. These guidelines include the rwg, rwg(j), ICC(1), ICC(2), and WABA reliability measures. While no single reliability measure covers all possible scenarios, it is recommended that more than one be calculated. I typically prefer to calculate either rwg or rwg(j), followed by ICC(1) and ICC(2). More details on each of these reliability measures will be provided in future blog posts.
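As a rough illustration of two of these measures, the sketch below computes single-item rwg (against a uniform null distribution) and ICC(1)/ICC(2) from one-way ANOVA components. The survey scores, group sizes, and 5-point scale are invented for illustration, and the formulas follow the standard textbook definitions rather than any particular software package:

```python
from statistics import mean, variance

# Hypothetical survey responses (5-point scale) from 3 teams of 4 raters each
groups = [[4, 4, 5, 4], [2, 2, 1, 2], [3, 3, 3, 4]]
A = 5                       # number of response options on the scale
sigma_eu = (A**2 - 1) / 12  # expected variance under a uniform (random) null

# rwg per group: 1 - (observed within-group variance / null variance)
rwg = [1 - variance(g) / sigma_eu for g in groups]

# One-way ANOVA components for ICC(1) and ICC(2)
all_scores = [x for g in groups for x in g]
grand = mean(all_scores)
G = len(groups)      # number of groups
N = len(all_scores)  # total observations
k = N / G            # group size (balanced design assumed here)

ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
ms_between = ss_between / (G - 1)
ms_within = ss_within / (N - G)

# ICC(1): proportion of variance attributable to group membership
icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
# ICC(2): reliability of the group means
icc2 = (ms_between - ms_within) / ms_between

print(f"rwg = {[round(r, 3) for r in rwg]}")
print(f"ICC(1) = {icc1:.3f}, ICC(2) = {icc2:.3f}")
```

Rules of thumb vary across sources, but values of rwg and ICC(2) around .70 or higher are commonly cited as support for aggregating individual responses to the unit level.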

References:
Hofmann, D. A., Griffin, M. A., & Gavin, M. B. (2000). The application of hierarchical linear modeling to organizational research. In K. J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and mthods in organizations: Foundations, extensions, and new directions (pp. 467-511). San Francisco: Jossey-Bass.

Klein, K. J., & Kozlowski, S. W. J. (2000). From micro to meso: Critical steps in conceptualizing and conducting multilevel research. Organizational Research Methods, 3, 211-236. doi:10.1177/109442810033001

Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent processes. In K. J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 3-90). San Francisco, CA: Jossey-Bass.