
There is much interest in the fast-developing field of Artificial Intelligence (AI), and more particularly ChatGPT, in all sectors of the economy, not least in education, editing, and publishing. A recent webinar presented by four informed speakers and hosted by ASSAf outlined some important issues, all of which are highly pertinent to the activities of our Institute, and in particular to our Journal and the papers published therein. A few key points are outlined below, which I hope will lead to a discussion on the way forward in developing the future policy of the SAIMM's Editorial Board (Publication Committee) in this regard.
Of greatest significance is the fact that, while AI in the form of ChatGPT is fun, it is not an author! In the words of the publishers of Nature, while there may be a place for it in due course, it still has problems and will not meet the requirements of today's publishing norms. Among the challenges: it cannot guarantee accuracy or provide interpretations and explanations, it cannot be held accountable, nor can it ensure data privacy. Furthermore, it takes information, or 'learns', from previously published data and may therefore be biased in its output, and it does not have the capacity to evaluate the information so extracted. In other words, input affects output.

What AI can do with the 'tools' now available is extensive and thorough background research, drawing on sources of every level and quality. In such cases it is necessary to recognize that the information so derived may be biased and, more significantly, that it cannot be defended. Who is accountable for such output? Thus, while AI is useful in communicating science, it may not be able to contextualize the information. It may also be adept at summarizing data for lay audiences, but it could draw on unreviewed material and thereby spread misinformation. On the positive side, AI can improve the detection of plagiarism and of manipulated illustrations.

For these reasons, AI is most useful in the early stages of research: it can draft an article, write purpose statements, retrieve associated references, and obtain information from the literature. Furthermore, it can analyse results.

But such abilities also raise questions.

Can reviewers use AI to peer review a paper and then present the outcome as their own work? They cannot: the depth of evaluation required means that such tasks demand human interpretation.

Can AI be considered a co-author when paired with genuine human authors? I have had sight of such a paper submitted to another journal, and wondered what our Editorial Board would do in such a case. A team of publishing editors agreed that AI could not be considered an author as it cannot meet the rules and requirements of journals.

The answers to date have been that all authors must be active, responsible, reliable, and able to defend their stance or statements, something humans can do but AI cannot. The recommendation in this instance is for AI to be cited in the acknowledgements, with a clear explanation of its role in the paper. However, this is not always followed: papers are being submitted with the bulk of the work produced by AI, and such a paper, without a clear definition of the role played by AI, is unacceptable.

Questions are now being asked as to whether there are guard rails to protect against such practices. Authors are asked to take responsibility and adopt 'best practice'; namely, to practise, and clearly demonstrate, transparency and accountability. All this leads to the ultimate question: does AI diminish scholarly publishing? The discussions continue.

R.M.S. Falcon