VIA Character Strengths Test
I took the “VIA Character Strengths Test” for the first part of my journal. The assessment made a good overall impression: the questions were easy to understand, and the test was not excessively long. It did, however, provide only basic results for free and required payment for an advanced interpretation.
Free Aptitude Test for Strengths & Weaknesses
For the second part, I chose the “Free Aptitude Test for Strengths & Weaknesses,” which is available at this link: https://richardstep.com/richardstep-strengths-weaknesses-aptitude-test/free-aptitude-test-find-your-strengths-weaknesses-online-version/. The test does not require any personal information to receive results. However, it prompted me to watch a seemingly unrelated YouTube video in the middle of the assessment, so my impressions were not entirely positive.
The content of both tests was similar: the popular one had 84 questions, and the professional assessment had about the same number. The items themselves were comparable; even though they were phrased differently, the resemblance was evident. The administration format for both assessments was online-only, and each one took approximately 30 minutes to complete. The item response format was multiple choice in both tests.
When it comes to face validity, the professional assessment and the popular test are, once again, very close. Both appear to measure what they are meant to measure; therefore, their face validity is high. With regard to homogeneity and heterogeneity, both tests are heterogeneous, as each measures multiple different traits within a broad general area.
Criterion-related validity is the area in which the popular test falls short. While it seemed similar to the professional assessment tool on the surface, its results are not even remotely close to those of the VIA test. In fact, after completing both tests using my genuine answers, I received completely different results: the strengths from the popular test did not match the VIA’s conclusions at all. Some of this discrepancy may be attributable to the different sets of strengths each test measures. One could also suppose that I received conflicting results because of the less clear phrasing of the questions in the popular test.
I retook both tests, giving slightly different answers, to see how the results would change. Because my answers were in the same general area, the test results remained similar, so the test-retest reliability seems adequate (Weir, 2005). It must be noted, however, that my testing was not extensive, and further investigation could yield different results.
Utility is the final and most crucial point of this evaluation. It is a measure of the test’s usefulness to the participant, and if it is low, the assessment loses its purpose as a professional tool (Wesnes et al., 2017). As expected, the utility of the popular test is not high: even though it provides a more detailed explanation of its results, the results themselves are mostly invalid, so the extra detail is of no use. What may be surprising is that the VIA test does not appear too useful either. The results it gives free of charge are simply not enough to be valuable to whoever takes the test, and I did not get the chance to evaluate the paid features of the system. It is possible that the reports offered for sale by VIA are sufficiently detailed and can be used to make a serious career decision, but the basic results are not.
References
Weir, J. P. (2005). Quantifying test-retest reliability using the intraclass correlation coefficient and the SEM. The Journal of Strength & Conditioning Research, 19(1), 231-240.
Wesnes, K. A., Brooker, H., Ballard, C., McCambridge, L., Stenton, R., & Corbett, A. (2017). Utility, reliability, sensitivity and validity of an online test system designed to monitor changes in cognitive function in clinical trials. International Journal of Geriatric Psychiatry, 32(12), e83-e92.