In the research process, certain types of claims and validities are applied. According to Morling (2017), there are four key variations of the big validities – internal, external, construct, and statistical – and three types of claims – frequency, causal, and association. One should take into account that specific validities are suitable for each type of claim. Morling (2017) states that for the frequency claim, statistical validity is appropriate; for the causal claim, internal and external validities are suitable; and for the association claim, construct validity is optimal. At the same time, achieving all four big validities in a single study is uncommon since doing so requires combining different types of research, which is hard to realize in one design.
Conceptual Variable and the Operational Definition of a Variable
The research process is often accompanied by the definition of variables in conceptual and operational terms. In the first case, as Rigdon (2016) notes, an abstract state is implied, for instance, trust or appreciation. The operational definition of a variable, in turn, specifies how data about a particular state or phenomenon are collected and measured for the purpose of comparison. As an example, three conceptual variables can be analyzed – “affection,” “intelligence,” and “stress.” For “affection,” fear and anxiety are related measurable variables. For “intelligence,” the variables of education and cognitive skills may be applied. With regard to “stress,” frustration or aggression can be mentioned.
Ethical Research on Animals
Animal research is an acute ethical issue, and according to Archibald (2018), there is a framework for determining whether a study is ethical: weighing the costs to animals and the costs to humans against the expected utility of the research. If this balance is biased against any of the parties involved, the research is unethical. With regard to the “Three R’s” rule, searching for alternatives is the most important task. Due to advances in biotechnology, the development of appropriate reagents and other artificial markers can be a potentially valuable solution. However, as Archibald (2018) notes, today there is a tendency to prohibit the use of animals rather than to seek alternative means. Equipping laboratories with the necessary technology and strengthening researchers’ accountability can help solve this ethical problem.
Case Study Reflection
The student intern’s ethical responsibilities require complying with confidentiality conditions due to the specific focus of his work. Duncan, Hall, and Knowles (2015) analyze individual cases and note that, in addition to practical skills in working with psychological patients, ethics in interacting with the target audience is a significant aspect of professional activity. The situation under consideration is ambiguous, but in assessing the problem, the student intern may study the proposed files since he has received this instruction from his immediate supervisor. Personal feelings may serve as an additional motive to violate the original confidentiality statement. However, carrying out the supervisor’s instruction is the priority decision, which, in addition, can satisfy his curiosity about the roommate’s strange behavior.
Reliability and Validity of Pop Psychology Tests
Popular psychological tests offer an entertaining opportunity to evaluate specific personal traits. However, the reliability and validity of such instruments are generally low. Sjöberg (2015) examines these tests and argues that, to increase their credibility, evaluation criteria need to be justified to validate a specific measure. Any personality test may be taken as an example, and its common questions can be analyzed. If such a tool were valid, it would not offer a one-to-five rating scale to test the frequency of interaction with other people but would allow selecting specific options for determining the type of social relationships. The reliability of this test would be higher if the assessment scale included flexible personality types rather than unambiguous estimates. In scholarly tests, there is less ambiguity, which, in turn, makes them more credible.
The critique of polling techniques used during elections is justified due to the generalization of survey results. According to Kenett, Pfeffermann, and Steinberg (2018), who study election polls and their disadvantages, predictive analytics is one of the common practices for evaluating results. In other words, based on a small amount of collected data, an overall estimate is extrapolated through statistical correlations. However, as the authors state, the information quality of such a poll is neither high nor valid due to biased data collection (Kenett et al., 2018). The larger the collected sample, the higher the reliability of the statistical correlations obtained. Consequently, the critique of the poll in question is objective.
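The link between sample size and the reliability of a poll estimate can be illustrated with a short simulation (a minimal sketch; the population proportion and sample sizes below are hypothetical, not figures from Kenett et al., 2018):

```python
import random
import statistics

random.seed(42)

TRUE_SUPPORT = 0.52  # hypothetical share of voters backing a candidate


def poll_spread(sample_size: int, repeats: int = 500) -> float:
    """Run repeated simulated polls and return the spread (standard
    deviation) of the estimated support across those polls."""
    estimates = []
    for _ in range(repeats):
        hits = sum(random.random() < TRUE_SUPPORT for _ in range(sample_size))
        estimates.append(hits / sample_size)
    return statistics.stdev(estimates)


small_spread = poll_spread(100)
large_spread = poll_spread(10_000)
# The larger sample produces estimates that cluster far more tightly
# around the true proportion, i.e., a more reliable poll.
print(round(small_spread, 4), round(large_spread, 4))
```

The spread for the 10,000-person polls is roughly a tenth of that for the 100-person polls, which mirrors the point above: larger samples yield more reliable statistical estimates.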
External Validity of a Correlational Study
To assess the external validity of a correlational study, specific questions can be posed. Becker et al. (2016) suggest paying attention to the criterion of generalizability as an important aspect. The question to ask is as follows: how objective is the choice of a sample for a specific study? Becker et al. (2016) also consider the principle of association and argue that correlational studies’ outcomes may depend on the specifically chosen criteria. In this regard, an appropriate question to ask is as follows: are the given parameters for a study unique, or can identical results be obtained with other variables? These questions allow evaluating the external validity of a correlational research design.
An accurate estimate of the number of cell-phone-only households does not require a full-sample method. That practice is potentially erroneous due to insufficient data accuracy and an excessively large target sample. Anderson, Kelley, and Maxwell (2017) highlight an alternative that eliminates bias. Pew Research may apply a strategy of adjusting the sample size based on specific criteria, in particular, age. The active part of the population can be sampled since, according to the organization’s findings, the young audience comprises the key users of cell phones. This method can help reduce the number of contacts with participants and, at the same time, allow researchers to interact with them productively via mobile communication.
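A rough sketch of the adjustment described above is a stratified sample in which each age bracket is polled in proportion to its population share and the per-bracket estimates are then recombined with the same weights (all shares and rates below are hypothetical illustrations, not Pew Research data):

```python
import random

random.seed(0)

# Hypothetical population shares and cell-phone-only rates per age bracket.
STRATA = {
    "18-29": {"share": 0.20, "cell_only_rate": 0.70},
    "30-49": {"share": 0.35, "cell_only_rate": 0.50},
    "50+": {"share": 0.45, "cell_only_rate": 0.25},
}


def stratified_estimate(total_sample: int) -> float:
    """Allocate the sample to each age stratum by its population share,
    then combine the per-stratum estimates with the same weights."""
    estimate = 0.0
    for stratum in STRATA.values():
        n = round(total_sample * stratum["share"])
        hits = sum(random.random() < stratum["cell_only_rate"] for _ in range(n))
        estimate += stratum["share"] * (hits / n)
    return estimate


print(round(stratified_estimate(5_000), 3))
```

Because younger, cell-phone-heavy respondents are weighted by their actual population share rather than oversampled, the combined estimate stays unbiased while each stratum can be contacted through the channel it actually uses.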
A factorial design is a methodology that, as a rule, is used for two main reasons. Stepanova, Bartholow, Saults, and Friedman (2018) review this research practice and note that, firstly, it allows testing limits, and secondly, it helps test specific theories and the parameter of generalizability. The authors also mention Bartholow and Heinz’s word association study on alcohol and thoughts of aggression as a reference, and this research can also be attributed to a factorial design (Stepanova et al., 2018). In the study, participants were randomly assigned to conditions in a 2×2 arrangement, which is the hallmark of a factorial method. The resulting analysis of variance is a common example of the research technique in question.
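The random assignment behind a 2×2 design, such as crossing a cue factor with a word-type factor, can be sketched as follows (the factor labels and participant IDs are illustrative, not taken from the original study):

```python
import itertools
import random

random.seed(1)

# Two factors with two levels each give 2 x 2 = 4 experimental cells.
FACTOR_A = ("alcohol cue", "neutral cue")
FACTOR_B = ("aggressive word", "neutral word")
CELLS = list(itertools.product(FACTOR_A, FACTOR_B))


def assign(participants: list) -> dict:
    """Randomly assign each participant to one of the four cells."""
    return {p: random.choice(CELLS) for p in participants}


groups = assign([f"P{i}" for i in range(1, 9)])
for participant, cell in groups.items():
    print(participant, cell)
```

Each participant lands in exactly one combination of the two factors, which is what makes the subsequent analysis of variance able to separate the two main effects from their interaction.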
Priority of Internal Validity over External Validity
Experimenters tend to favor internal validity over external validity when it is difficult to achieve both. Kenny (2019) explains this preference by the fact that internal validity allows testing causality even when the chosen sample cannot be random. External validity, as the author notes, makes it possible to generalize the results, which, in turn, permits the use of a randomly selected sample (Kenny, 2019). In other words, by applying internal validity, researchers can rely on objective data and clear causal relationships between selected variables, while external validity generalizes rather than specifies the outcomes. Therefore, to obtain the most accurate and unbiased results possible, internal validity is more appropriate when external validity cannot be achieved.
Inefficiency of Random Sampling in a Theory-Testing Mode
Researchers working in a theory-testing mode may not use random sampling in their work. According to Salloum, Huang, and He (2019), when a test method is applied, a large volume of data needs to be analyzed. This, in turn, makes the random sampling method meaningless and ineffective since, under an excess of information, this sampling principle does not allow obtaining objective data. External validity, in this case, is optimal because it helps generalize the results rather than specify them, the latter being difficult when working with a large sample. As a result, the generalizing aspects of the research process are more characteristic of a theory-testing mode without random sampling.
Cultural psychology is an area that studies important aspects of human interaction within individual cultures. As Stroebe, Gadenne, and Nijstad (2018) note, the goal of this discipline is to determine how the psychological processes of a society are reflected through unique cultural characteristics. This field suggests the use of specific validities and research methods in theory-testing and generalization modes. Stroebe et al. (2018) mention external validity as a criterion that is optimal for theory testing and note that the aspect of generalizability is usually not addressed in the context of standard psychological research. Cultural psychology makes it possible to apply appropriate research tools and obtain valuable data that are difficult to analyze and interpret within the framework of a standard study.
Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28(11), 1547-1562.
Archibald, K. (2018). Animal research is an ethical issue for humans as well as for animals. Journal of Animal Ethics, 8(1), 1-11.
Becker, T. E., Atinc, G., Breaugh, J. A., Carlson, K. D., Edwards, J. R., & Spector, P. E. (2016). Statistical control in correlational studies: 10 essential recommendations for organizational researchers. Journal of Organizational Behavior, 37(2), 157-167.
Duncan, R. E., Hall, A. C., & Knowles, A. (2015). Ethical dilemmas of confidentiality with adolescent clients: Case studies from psychologists. Ethics & Behavior, 25(3), 197-221.
Kenett, R. S., Pfeffermann, D., & Steinberg, D. M. (2018). Election polls – A survey, a critique, and proposals. Annual Review of Statistics and Its Application, 5, 1-24.
Kenny, D. A. (2019). Enhancing validity in psychological research. American Psychologist, 74(9), 1018-1028.
Morling, B. (2017). Research methods in psychology: Evaluating a world of information (3rd ed.). New York, NY: W. W. Norton & Company.
Rigdon, E. E. (2016). Choosing PLS path modeling as analytical method in European management research: A realist perspective. European Management Journal, 34(6), 598-605.
Salloum, S., Huang, J. Z., & He, Y. (2019). Random sample partition: A distributed data model for big data analysis. IEEE Transactions on Industrial Informatics, 15(11), 5846-5854.
Sjöberg, L. (2015). Correction for faking in self‐report personality tests. Scandinavian Journal of Psychology, 56(5), 582-591.
Stepanova, E. V., Bartholow, B. D., Saults, J. S., & Friedman, R. S. (2018). Effects of exposure to alcohol‐related cues on racial discrimination. European Journal of Social Psychology, 48(3), 380-387.
Stroebe, W., Gadenne, V., & Nijstad, B. A. (2018). Do our psychological laws apply only to college students? External validity revisited. Basic and Applied Social Psychology, 40(6), 384-395.