Abstract

Facilitating access to online courses in higher education mobility programs is essential for creating a more interconnected educational ecosystem within the European Education Area. Federated e-infrastructures have emerged as effective solutions to enhance the interoperability, accessibility, and scalability of academic services under a standardized trust model. However, assessing their usability for end-users is critical. This study aims to identify and adapt an instrument for measuring the usability of federated access to a Moodle ecosystem implemented by the Transform4Europe alliance for students participating in mobility programs. The paper outlines the process of adapting and validating a questionnaire based on Nielsen’s Usability Attributes model to meet the unique characteristics of this context. An iterative, multi-method approach was employed, incorporating feedback from students and usability experts for content validation. The resulting instrument was administered to 145 students at the University of Trieste during lectures. Exploratory factor analysis confirmed the tool’s reliability and validity while highlighting the need for refinements, including revising two items with low factor loadings, adjusting how the questionnaire is administered, and increasing the sample size to obtain more robust results. Although further validation of the final instrument is recommended, the results of this study provide a significant starting point for advancing usability assessment practices in federated learning environments aimed at enhancing the student mobility experience.
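
As a purely illustrative sketch (not the authors' analysis or data), the snippet below shows how the kind of exploratory factor analysis summarized in the abstract could be run in Python with the open-source factor_analyzer package. The 15 items, the five-factor target (one per Nielsen usability attribute), and the synthetic Likert-scale responses are all assumptions made for demonstration; only the sample size of 145 is taken from the abstract.

```python
# Illustrative sketch only (synthetic data, not the study's dataset):
# sampling-adequacy checks, Cronbach's alpha, and an EFA with varimax rotation.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

rng = np.random.default_rng(42)
# Hypothetical responses: 145 students x 15 five-point Likert items
# (item and factor counts are assumptions, not taken from the paper).
data = pd.DataFrame(
    rng.integers(1, 6, size=(145, 15)),
    columns=[f"item_{i + 1}" for i in range(15)],
)

# Pre-checks: Bartlett's test of sphericity and the Kaiser-Meyer-Olkin index.
chi2, p = calculate_bartlett_sphericity(data)
_, kmo_total = calculate_kmo(data)
print(f"Bartlett chi2 = {chi2:.1f} (p = {p:.3f}), KMO = {kmo_total:.2f}")

# Internal consistency: Cronbach's alpha from item and total-score variances.
k = data.shape[1]
alpha = k / (k - 1) * (1 - data.var(ddof=1).sum() / data.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha = {alpha:.2f}")

# EFA: five factors, one per Nielsen usability attribute (assumed mapping).
fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(data)
loadings = pd.DataFrame(fa.loadings_, index=data.columns).round(2)
# Items whose loadings all fall below ~0.40 would be flagged for revision,
# mirroring the two low-loading items mentioned in the abstract.
print(loadings)
```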

Keywords

European Universities, e-Learning, Federated Access, Moodle, Online Courses, Usability

Article Details

Author Biographies

Federica Mancini, Università degli Studi di Trieste

Federica Mancini is a Research Fellow at the University of Trieste. She holds an M.S. in Science of Education and a Ph.D. in Information and Knowledge Society. She has previously worked as a Research Assistant and Project Manager in Italy and Spain, collaborating with various universities. Her research focuses on digital education, innovative pedagogical methodologies, and EU-funded projects.

Riccardo Fattorini, Università degli Studi di Trieste

Riccardo Fattorini is an e-learning specialist in the ICT Area at the University of Trieste. He holds an M.S. in Psychology, a Ph.D. in Cognitive Psychology, and a specialization in Cognitive Psychotherapy. He has worked as a Research Fellow at multiple universities. His research interests include digital education and the application of innovative pedagogical methodologies and technologies in online learning environments. He is also a Registered Psychologist in Italy.

Michele Bava, Università degli Studi di Trieste

Michele Bava is the Director of the ICT Area at the University of Trieste. He holds an M.S. in Electronic Engineering, a Ph.D. in Information Engineering, and a specialization in Clinical Engineering. He previously worked at the IRCCS “Burlo Garofolo” as Data Protection Officer and CISO. His research focuses on digital education, cybersecurity, and artificial intelligence, particularly in improving university services.

How to Cite
Mancini, F., Fattorini, R., & Bava, M. (2025). Assessing the Usability of Federated Access to T4EU Online Courses in Higher Education Mobility Programs. Journal of E-Learning and Knowledge Society, 21(2), 31-43. https://doi.org/10.20368/1971-8829/1136104

References

  1. Bandalos, D. L. (2018). Measurement theory and applications for the social sciences. Guilford.
  2. Benmoussa, K., Laaziri, M., Khoulji, S., Kerkeb, M. L., & Yamami, A. E. (2019). AHP-based approach for evaluating ergonomic criteria. Procedia Manufacturing, 32, 856-863. https://doi.org/10.1016/j.promfg.2019.02.294
  3. Berger, F., Galati, N., & Witteler, S. (2023). Making interoperability work, challenges and solutions for an interoperable higher education system. https://hochschulforumdigitalisierung.de/sites/default/files/dateien/HFD_report_no.72_Making_interoperability_work.pdf
  4. Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107(2), 238–246. https://doi.org/10.1037/0033-2909.107.2.238
  5. Bentler, P. M., & Mooijaart, A. B. (1989). Choice of structural model via parsimony: A rationale based on precision. Psychological Bulletin, 106(2), 315–317.
  6. Boudreau, M., Gefen, D., & Straub, D. (2001). Validation in IS research: A state-of-the-art assessment. MIS Quarterly, 25, 1–24.
  7. Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In P. Jordan, B. Thomas, & B. Weerdmeester (Eds.), Usability evaluation in industry (pp. 189-194). Taylor & Francis.
  8. Browne, M. W., & Cudeck, R. (1989). Single sample cross-validation indices for covariance structures. Multivariate Behavioral Research, 24(4), 445–455.
  9. Byrne, B. M. (2010). Structural equation modeling with Amos: Basic concepts, applications, and programming (2nd ed.). Routledge/Taylor & Francis Group.
  10. Chin, J. P., Diehl, V. A., & Norman, K. L. (1988). Development of an instrument measuring user satisfaction of the human-computer interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 213-218).
  11. Chung, T. K., & Sahari, N. (2015). Utilitarian or experiential? An analysis of usability questionnaires. International Journal of Computer Theory and Engineering, 7(2), 167-171.
  12. Cole, D. A. (1987). Utility of confirmatory factor analysis in test validation research. Journal of Consulting and Clinical Psychology, 55(4), 584–594.
  13. Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed.). Erlbaum.
  14. Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Sage Publications.
  15. Diamantopoulos, A., & Siguaw, J. A. (2000). Introducing LISREL. Sage Publications.
  16. Gaebel, M., Zhang, T., Stoeber, H., & Morrisroe, A. (2021). Digitally enhanced learning and teaching in European higher education institutions. European University Association.
  17. Galende, B. A., Mayoral, S. U., García, F. M., & Lottmann, S. B. (2023). FLIP: A new approach for easing the use of federated learning. Applied Sciences, 13(6), 3446. https://doi.org/10.3390/app13063446
  18. Geisen, E., & Bergstrom, J. R. (2017). Usability testing for survey research. Morgan Kaufmann.
  19. Gilbert, G. E., & Prion, S. (2016). Making sense of methods and measurement: Lawshe’s content validity index. Clinical Simulation in Nursing, 12(12), 530–531.
  20. Gonzalez-Holland, E., Whitmer, D., Moralez, L., & Mouloua, M. (2017). Examination of the use of Nielsen’s 10 usability heuristics & outlooks for the future. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 61, No. 1, pp. 1472-1475). https://doi.org/10.1177/1541931213601853
  21. Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Lawrence Erlbaum Associates.
  22. Guadagnoli, E., & Velicer, W. F. (1988). Relation of sample size to the stability of component patterns. Psychological Bulletin, 103(2), 265–275.
  23. Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2010). Multivariate data analysis (7th ed.). Pearson.
  24. Halim, E., Soeprapto Putri, N. K., Anisa, N., Arif, A. A., & Hebrard, M. (2021). Usability testing of vocabulary game prototype using the Nielsen’s attributes of usability (NAU) method. In Proceedings of the 2021 International Conference on Information Management and Technology (pp. 590-594). https://doi.org/10.1109/ICIMTech53080.2021.9534970
  25. Harman, H. H. (1976). Modern factor analysis. University of Chicago Press.
  26. Hodrien, A., & Fernando, T. (2021). A review of post-study and post-task subjective questionnaires to guide assessment of system usability. Journal of Usability Studies, 16(3), 203-232.
  27. Hu, L. T., & Bentler, P. M. (1995). Evaluating model fit. In R. Hoyle (Ed.), Structural equation modeling: Issues, concepts, and applications (pp. 76–99). Sage.
  28. Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55.
  29. ISO. (2018). Ergonomics of human-system interaction—Part 11: Usability: Definitions and concepts (ISO 9241-11:2018).
  30. Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20(1), 141–151. https://doi.org/10.1177/001316446002000116
  31. Kaiser, H. F. (1970). A second generation little jiffy. Psychometrika, 35, 401–415.
  32. Kass, N. L., & Tinsley, R. L. (1979). Factor analysis of psychological and educational data. Journal of Educational Psychology, 71(6), 844–855.
  33. Kirakowski, J. (1995). Evaluating usability of the human-computer interface. In Advances in Human-Computer Interaction: Human Comfort and Security (pp. 21–32). Springer Berlin Heidelberg.
  34. Kirakowski, J., & Cierlik, B. (1998). Measuring the usability of web sites. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 42, No. 4, pp. 424–428).
  35. Latiar, H., Dwi, M., & Nining, S. (2024). Evaluation of repository usability test using Nielsen’s attributes of usability (NAU) model in libraries. Jurnal Ilmu Perpustakaan dan Informasi, 9(1).
  36. Laugwitz, B., Held, T., & Schrepp, M. (2008). Construction and evaluation of a user experience questionnaire. In Proceedings of the 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society (pp. 63–76). Springer Berlin Heidelberg.
  37. Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 563–575.
  38. Lazar, J., Feng, J. H., & Hochheiser, H. (2017). Research methods in human-computer interaction. Morgan Kaufmann.
  39. Lewis, B. R., Snyder, C. A., & Rainer, K. R. (1995). An empirical assessment of the Information Resources Management construct. Journal of Management Information Systems, 12, 199–223.
  40. Lewis, J. R. (1992). Psychometric evaluation of the post-study system usability questionnaire: The PSSUQ. In Proceedings of the Human Factors Society Annual Meeting (Vol. 36, No. 16, pp. 1259–1263).
  41. Lund, A. (2001). Measuring usability with the USE questionnaire. Usability Interface, 8, 3–6.
  42. MacCallum, R. C., Widaman, K. F., Zhang, S., & Hong, S. (1999). Sample size in factor analysis. Psychological Methods, 4(1), 84–99.
  43. Medsker, G. J., Williams, L. J., & Holahan, P. J. (1994). A review of current practices for evaluating causal models in organizational behavior and human resource management research. Journal of Management, 20, 439–464.
  44. Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Sage Publications.
  45. Munir, S., Rahmatullah, A., Saptono, H., & Wirani, Y. (2019). Usability evaluation using NAU method on web design technique for web portal development in STT Nurul Fikri. In Proceedings of the 2019 Fourth International Conference on Informatics and Computing (pp. 1–6). https://doi.org/10.1109/ICIC47613.2019.8985913
  46. Newman, D. A. (2009). Missing data techniques and low response rates: The role of systematic nonresponse parameters. In C. E. Lance & R. J. Vandenberg (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity, and fable in the organizational and social sciences (pp. 7–36). Routledge.
  47. Newman, D. A. (2014). Missing data: Five practical guidelines. Organizational Research Methods, 17(4), 372–411.
  48. Nielsen, J. (2012). How many test users in a usability study. Nielsen Norman Group. https://www.nngroup.com/articles/how-many-test-users/
  49. Nielsen, J. (1993). Usability engineering. Morgan Kaufmann.
  50. Nunnally, J. C. (1978). Psychometric theory (2nd ed.). McGraw-Hill.
  51. Osborne, J. W., & Costello, A. B. (2004). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, and Evaluation, 9(1), 1–10.
  52. Palmieri, P. A., Leyva-Moral, J. M., Camacho-Rodriguez, D. E., Granel-Gimenez, N., Ford, E. W., Mathieson, K. M., & Leafman, J. S. (2020). Hospital survey on patient safety culture (HSOPSC): A multi-method approach for target-language instrument translation, adaptation, and validation to improve the equivalence of meaning for cross-cultural research. BMC Nursing, 19, 1–13.
  53. Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Research in Nursing & Health, 29, 489–497.
  54. Polit, D. F., Beck, C. T., & Owen, S. V. (2007). Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Research in Nursing & Health, 30, 459–467.
  55. Ruoti, S., Roberts, B., & Seamons, K. (2015). Authentication melee: A usability analysis of seven web authentication systems. In Proceedings of the 24th International Conference on World Wide Web (pp. 916–926).
  56. Sagar, K., & Saha, A. (2017). A systematic review of software usability studies. International Journal of Information Technology, 1–24. https://doi.org/10.1007/s41870-017-0048-1
  57. Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research Online, 8(2), 23–74.
  58. Shneiderman, B., Plaisant, C., Cohen, M. S., Jacobs, S. M., & Elmqvist, N. (2017). Designing the user interface: Strategies for effective human-computer interaction. Pearson.
  59. Stevens, J. (2002). Applied multivariate statistics for the social sciences (4th ed.). Lawrence Erlbaum Associates.
  60. Strauss, M. E., & Smith, G. T. (2009). Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1–25. https://doi.org/10.1146/annurev.clinpsy.032408.153639
  61. Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Pearson.
  62. Taherdoost, H. (2016). Validity and reliability of the research instrument; how to test the validation of a questionnaire/survey in a research. International Journal of Academic Research in Management (IJARM), 5.
  63. Vlachogianni, P., & Tselios, N. (2023). Perceived usability evaluation of educational technology using the post-study system usability questionnaire (PSSUQ): A systematic review. Sustainability, 15(17), 12954. https://doi.org/10.3390/su151712954
  64. Wilson, F. R., Pan, W., & Schumsky, D. A. (2012). Recalculation of the critical values for Lawshe’s content validity ratio. Measurement and Evaluation in Counseling and Development, 45(3), 197–210.