
Can we trust AI in qualitative research? (opinion)


Walt Whitman wrote, “I am large, I contain multitudes.” In qualitative social science research, this line serves as both a celebration of humanity and a warning about the limits of using artificial intelligence to analyze data.

Although AI can mimic pattern discovery in qualitative social science research, it lacks a singular human perspective. This matters because high-quality qualitative work requires articulating the researcher’s positionality, that is, how the researcher relates to the research, in order to build trust in the findings.

Trained on the mass of human knowledge, a technology such as ChatGPT is not a person who contains multitudes but multitudes that contain no person. By design, these tools cannot have the single, definable perspective, and thus the positionality, necessary for building trust.

For graduate students and researchers alike, using ChatGPT as a research assistant is a tempting alternative to the daunting task of analyzing mountains of text by hand. Although there are many methods of qualitative research, a common approach involves multiple rounds of interpretation grounded in the data. Researchers mark portions of the data with “codes” that may capture explicit statements or implicit meanings, then assemble them into patterns through additional cycles. For example, in analyzing interview transcripts for a study about college attendance, you might initially apply codes such as “financial needs,” “first-generation status” and “parental support.” In a later round of coding, these might be combined into a larger theme around family factors.

Although this is an oversimplification, this type of pattern detection is exactly the strength of current openly available AI tools. But using AI in this way ignores the influence of the researcher’s identity and context on qualitative research.

There are four key reasons why jumping on the AI train too early may be problematic for the future of qualitative work.

  1. The researcher is as important as the research.

Good qualitative studies have something in common: They reject the pretense of detached objectivity and embrace the interpretive nature of the work through self-reflection. Their authors acknowledge that the studies are shaped by the researcher’s context and background. This careful analysis of positionality, although not yet the norm in every social science field, is gaining ground. With the rapid adoption of AI tools for research, it is becoming increasingly important to make explicit how investigators relate to the work they do.

  2. AI is not neutral.

We know that AI can hallucinate and generate false information. But even if it didn’t, there would be another issue: No technology is neutral. It is always imbued with the biases and knowledge of its creators. Add to this the fact that AI tools draw from a vast array of perspectives across the internet on any given topic. If we agree that articulating positionality is key to sustaining the credibility of qualitative research, we must pause before handing interpretive analysis over to AI entirely. Experts agree that we do not fully understand how AI reaches the decisions it does (the black box problem).

  3. The adoption of AI tools can have a negative impact on the training of new researchers.

Just as educators worry that relying on AI too early in the learning process can undermine foundational understanding, there are implications for the training of new qualitative researchers. This is a larger consideration than the trustworthiness of the results. Coding qualitative data by hand builds both a skill set and a deeper understanding of the interpretive nature of the research. Moreover, identifying and acting on how you as a researcher shape the analysis is no easy task, even for seasoned researchers; it requires a level of self-reflection and patience that many may deem not worth the effort. It is nearly impossible to ask a new researcher to articulate a positionality without first going through the process of coding data by hand.

  4. Unlike a human researcher, AI cannot protect our data.

Positionality is not the only thing missing when openly available AI tools are used for data analysis. Institutions require that information provided by participants in research studies be protected. While it is possible to include disclosures about data use within an AI platform in consent forms, the black box element means we cannot necessarily give participants truly informed consent about what happens to their data. Offline options may exist, but they can require computing resources and expertise beyond the reach of many would-be users.

So, can we trust the use of AI in qualitative research?

Although AI can serve as a pseudo research assistant, or even add credibility to the qualitative research process when used to cross-check findings, it should be employed with caution in its current form. Most important is the recognition that AI cannot, at present, provide the positionality and context that qualitative research requires. Instead, useful AI applications in qualitative research include things like generating general summaries or helping to organize thoughts. These supporting activities, and others like them, can help streamline the research process without undermining the essential connection between the researcher and the research.

Even if we can trust AI, should we use it for qualitative analysis?

Finally, there is a philosophical argument to be made. If we had an AI that could perform qualitative analysis in a way we found acceptable, should we use it? Like art, qualitative research can be a celebration of humanity. When a researcher’s self-awareness, critical questioning and rigorous methods come together, the result is a glimpse into a rich and detailed slice of our world. It is the context and personality the researcher brings that make these studies worth writing and reading. If we reduce the role of the qualitative researcher to that of an AI prompt generator, the passion for investigating the human experience may fade as well. Studying people, especially in an open-ended and descriptive way, requires a human touch.

Andrew Gillen is an assistant teaching professor in the College of Engineering at Northeastern University. His research focuses on engineering education.

