Bluesky: @jcassidysport.bsky.social
I attended the Australasian Skill Acquisition Network (ASAN) conference at Southern Cross University on the Gold Coast this week. I really enjoyed the diversity of insights, the critical discussions, and the opportunity to connect with people I had previously only connected with online. I was particularly impressed with the supportive nature of the discussions, and great credit must go to Kyle Bennett and William McCalman, who organised the event, and to all presenters and attendees who took part.
I have several reflections stemming from the conference and from my thinking over the last few weeks. One is around the translation of skill acquisition knowledge and research into coaching, but this post is not about that. I am happy to discuss it with anyone; the thoughts behind this post relate to other key themes from the conference.
Replication v representativeness
Two common themes across the conference presentations were replication and representativeness. A presentation block on the first day focused on representativeness, while a block on the second day focused on replicability in skill acquisition science. Two keynote presentations were also dedicated to replicability.
Leanne Choo (2024) presented some of her early work on the replicability (or lack thereof) of skill acquisition research. Without replicable findings, much of skill acquisition support is based on assumptions. She spoke about her experiences transitioning from athlete to coach: because the evidence base in skill acquisition is so weak, she now has little option as a coach but to rely on experiential knowledge. She has been striving to critically assess her practices, but the lack of high-quality evidence on practice design makes that difficult.
William McCalman (2023) presented work that he and colleagues did on assessing skilfulness, and defined skilful players as “technically proficient, adaptable, effective decision-makers, and influential to their team’s success.” This paper, along with Bennett and Fransen (2023) (which I made reference to here), provides some great insight into what is really seen in the coaching world. If we are going to assess someone’s skilfulness (i.e. their ability to adapt), we need to provide opportunities for them to adapt. We can do this by building variability into the environment, preserving ecological validity. But to do this, we need to relinquish control of the environment – the very control necessary to ensure replicability.
I am particularly interested in the links between these two presentations. Does a middle ground even exist, or is it no man’s land? If we try to make a representative assessment replicable, two things happen: we sacrifice some ecological validity, making the environment less representative, yet we still do not control enough to make the environment replicable. My own thought – a classic case of chasing two rabbits and catching neither. I put this question to a keynote speaker, and possibly the leader of this area within skill acquisition, Job Fransen, and he directed me to pragmatic randomised controlled trials, which are used to assess the real-world impacts of treatments in medical science. Something for me to investigate further.
Philosophy for skill acquisition
Something that stimulated so much thought in the lead-up to the conference was an interesting Twitter thread started by Daniel Lakens. One comment in particular rocked me onto my heels (metaphorically) – Daniel said, “I reject constructivism as a science.” This really made me think about the entire open science framework in a different manner.
Personally, I do not reject it as a science. I think there are many ways to think about science, ranging from the traditional positivist research paradigm to interpretivism. Chapter 3 (Philosophy of knowledge, written by Cliff Mallett and colleagues) of Nelson et al. (2014) provides some great insights here:
“A similar scenario would evolve in considering all of the ‘variables’, aspects, or dimensions that might make up a quality coach. While some aspects might be measureable, others require an agreement across subjective judgments. Reconciling these different kinds of knowledge is a dilemma to be resolved for those who want to make a judgment of coach’s work.” (p. 16)
My question is now as follows. Given that we have defined skill as the ability to adapt (McCalman et al., 2023), to assess skill (adaptability) we need to give athletes a reason to adapt. To do this, we need to create an environment with variability. To create an environment with variability, we need to relinquish control of that environment, which does not align well with replicable science. Is skill (acquisition) one of the variables that might require agreement across subjective judgements? Is traditional, hard science best suited to the study of skill acquisition?
Paraphrasing the above quote, we might say: “considering all of the ‘variables’, aspects, or dimensions that might make up a quality skill acquisition specialist”. While positivist science may be useful for studying some aspects of skill acquisition, it may not be useful for all of them, and qualitative consensus may be more appropriate.
Why do we need to “assess” skill or “quantify” skill performance? Because we are looking to gather knowledge of what works to develop skill and what doesn’t. The next question is what constitutes knowledge (i.e. epistemology). Which could be preceded by “what is real?” (i.e. ontology) – is there one objective truth (universalism), are there multiple truths, does truth lie in the practical consequences of ideas, or all (or none) of the above (Muhaise et al., 2020)?
Many skill acquisition scientists accept non-positivist ontological perspectives but are still concerned with assessing practice and skill in positivist ways. This is exemplified in the way questions are asked (which is more effective?) and results are presented (statistically significant versus non-significant). Exploring one’s philosophical underpinnings – what one believes to be real and what it means to know – is critical for a researcher. I am doing a Doctor of Philosophy program for a reason: the philosophy needs to be understood so that worldview, ontology, epistemology, and methodology are all aligned. For me in my research (my current understanding; I am far from an expert, and it is quite a challenge):
Paradigm: pragmatic (ideally beyond ‘crude pragmatism’ – Jenkins, 2017)
Ontology: that which impacts practice (practical and pluralist)
Epistemology: knowledge is constructed by a learner based on how it influences their practice, arising from inquiry (constructivist and fallibilist)
Methodology: exploring practice in the swampy lowlands (action research).
Methods: to be finalised.
Stodter and Whitehead (2024) provide a great discussion on using tools that have positivist origins in non-positivist ways. An extract that stood out:
“To promote robust qualitative research, the ontological and epistemological underpinnings of methodologies must be examined, and then methods accurately applied to make the most of their strengths as part of a coherent approach. It is also important to articulate how and why each method is being used in line with the aims of research, enhancing the quality of conclusions, knowledge claims, and significantly for coaching, the practical implications made possible.”
Even though qualitative research findings may not be replicable – because of subjective interpretation, the same research process may lead to different results among different groups of researchers – qualitative research can and should be transparent. Take interviews, for example. Pre-registering the aims, the structured aspect (the outlined questions) of a semi-structured interview, the number of participants, the participant criteria, and the step-by-step data collection process is all possible. This is a crucial point: while replication may not be possible, transparency is.
Conflicts of interest can also be outlined, so that consumers understand how the findings may benefit the authors. As the interviewer is commonly the primary author, they cannot avoid being influenced by the data they collect. Nor should they try to: they should be influenced, and they should be responsive to what participants tell them. Data collection overlaps significantly with data analysis in qualitative research (Chang, 2008), and as a result, subjective analysis is not only unavoidable but desirable.
Reconciling (types of) knowledge in practice
In the same way that reconciling different types of knowledge is an important challenge for researchers, it is also important for coaches and for collaboration in high performance teams. There are ontological and epistemological contradictions that are simply unavoidable. A common philosophical view that drives everything a support team does involves asking the question: “will it make the boat go faster?” This adopts a pragmatic lens (what works?) but potentially also a positivist lens on performance. In stopwatch sports, this may make sense on some level. In non-stopwatch sports, this mindset becomes more challenging. From an S&C perspective, “better” performance (greater physical output) might look different from a skill acquisition perspective (greater adaptability), which is different again from a biomechanics perspective (greater technical efficiency).
Everyone could get on the same page by sharing a vision/question of “does this make them play better?” But even with a shared view like this, how do people answer that question? It is quite ambiguous. Everyone’s different backgrounds and experiences will impact their training, how they view knowledge (epistemology), and subsequently how they gather knowledge (methodology), whether they are aware of it or not. And of course, the coach sits in the middle of this, trying to make sense of it all.
With different perspectives, who is to say who is right? For me, this is where the agency of the coach plays a big role. If the coach puts pressure on staff to show their impact through objective measurements (i.e. from a positivist lens), then support staff will prioritise objective measures, and for me this may increase the risk of working in silos (there is much more nuance here, and it’s not as black and white as I have suggested). If a coach is less concerned about objectivity and more about tapping into the expert judgement and decision making of the surrounding staff, then this may be useful to deal with the swampy lowlands of practice. But there are other challenges that may arise, particularly around ambiguity of roles. An appropriate balance and smooth transition along the continuum is probably most suitable, with the coach understanding when and where to prioritise each different perspective. Not this or that, but when and where.
Final thoughts
I am in the middle of a moderated debate with Daniel Kadlec and Job Fransen that we have pre-registered, and I have learned so much about open science (I am hanging off their coattails here) and the topic we are discussing. While I have tried to make the point that not all science is replicable (I stand to be corrected on this), all science can be transparent. Open and transparent science is an enjoyable process, but it is not a quick one. When our primary intention is to learn, though, open science provides a great foundation for that to happen.
These are my thoughts and reflections; I don’t have any answers. But I am keen to discuss if and how these ideas impact skill acquisition or coaching. I am a complete novice in the open science world, and I would love to chat with people further along than me. Thanks to all at ASAN for a great couple of days.
References
Bennett, K. J. M., & Fransen, J. (2023). Distinguishing skill from technique in football. Science and Medicine in Football, 1-4. https://doi.org/10.1080/24733938.2023.2288138
Chang, H. (2008). Autoethnography as Method. Left Coast Press.
Choo, L., Novak, A., Impellizzeri, F. M., Porter, C., & Fransen, J. (2024). Skill acquisition interventions for the learning of sports-related skills: A scoping review of randomised controlled trials. Psychology of Sport and Exercise, 72, 102615. https://doi.org/10.1016/j.psychsport.2024.102615
Jenkins, S. P. (2017). Beyond ‘crude pragmatism’ in sports coaching: Insights from C.S. Peirce, William James and John Dewey. International Journal of Sports Science & Coaching, 12(1), 8-19. https://doi.org/10.1177/1747954116684028
McCalman, W., Goddard, S. G., Fransen, J., Crowley-McHattan, Z. J., & Bennett, K. J. M. (2023). Experienced academy soccer coaches’ perspectives on players’ skilfulness. Science and Medicine in Football, 1-11. https://doi.org/10.1080/24733938.2023.2280230
Muhaise, H., Habinka, A., Muwanga-Zake, J. W. F., & Kareyo, M. (2020). The Research Philosophy Dilemma for Postgraduate Student Researchers. https://doi.org/10.13140/RG.2.2.15085.61929
Nelson, L., Groom, R., & Potrac, P. (2014). Research Methods in Sports Coaching. Routledge.
Stodter, A., & Whitehead, A. (2024). Thinking again about the use of think aloud and stimulated recall methods in sport coaching. Qualitative Research in Sport, Exercise and Health, 1-15. https://doi.org/10.1080/2159676X.2024.2377658