On a summer day at the University of Chicago several years ago, Larry McEnerney, then Director of the University’s Writing Program, gave a lecture on ‘The craft of writing effectively’ to a class of graduate students in the social and natural sciences. From the outset, McEnerney challenged widely held views on writing. Writing, he said, is mistakenly taught as a process governed by rules. ‘Stop thinking about rules; start thinking about readers’ and ‘use writing to help [one]self think’, he urged the class. McEnerney then identified the most important quality that marks all effective writing. But he did so only after he had ticked off qualities that we all readily attach to good writing:

‘Yeh, your writing needs to be clear. Sure, [it] needs to be organized . . . to be persuasive. But more than anything else [it] needs to be valuable’ [my emphasis].
As he ticked off each quality, McEnerney wrote it on the blackboard. Each quality sat above the preceding one; valuable, underlined, stood out at the top of the list. To emphasize the point he ended, ‘The other stuff doesn’t matter if it is not valuable’.
Writing in some form—as a report or an academic paper, for example—closes off most bodies of research that we undertake. It is no coincidence, then, that the defining quality McEnerney attaches to effective writing is echoed in effective, or good, research. That echo reverberates in the question, ‘What is research?’, or more specifically, ‘What characterizes good research?’ There are, however, different pitches to the echo, depending on who is listening. To hear them, one needs to consider research in the context of the SAIMM and its membership.

The Institute serves three engineering disciplines—mining, extractive metallurgy, and physical metallurgy or allied branches of materials science. These disciplines divide the Institute along lines of subject. Another division cuts across these lines, setting members apart within their disciplines: the division that separates engineers in industry from academics at universities. It is marked, but it is not impervious: some members—a minority—may switch ‘camps’, even temporarily, depending on whether the problem at hand is framed by an industrial or an academic need. Being a member of one or other camp is not in itself grounds for discrimination. What is important, however, is the different sense each camp attaches to what is valuable in research. We all know that, as a rule, engineers in industry place value on practical applications. They will view research favourably if it cuts costs or brings in profit; if it introduces measures that secure the safety of workers or that reduce harm to the environment; or if it raises productivity, improves efficiencies, or opens up new possibilities. Value here also has a dimension in time: the value of research might well change when judged in the short, medium, or long term. These values inform research conducted in industry. One thinks of what passed, and passes, for research at the former Anglo Research and at Mintek. They also inform, and this is far-reaching, the engineer as reader—‘value lies with the reader’ is a point that McEnerney makes. Here we face a dilemma that lingers at the core of the Journal of the SAIMM: many of the papers published in the Journal are written by academics, who invoke a different set of criteria, including a different sense of value, when judging the worth of a paper. Their papers will reflect these criteria, and that will not sit comfortably with the other group, which is also the larger of the two.

Compounding this problem between camps is a blight within the academic camp. To appreciate it we need to take stock of how academics view research. What, for the academic, constitutes ‘good’ research, where ‘good’ refers to a standard? (I am discounting the notion of excellence, which connotes ‘excelling’—pre-eminence or superiority—and therefore practices of comparison and performance.) All of science is marked primarily by asking questions. Good science or research asks questions of significance, questions that address a key, if not fundamental, concern, questions that look for insight. Nothing is trivial about them—profound might be no exaggeration: In the beginning is the question. Value for the academic consists primarily in the ‘reach’ of these questions. That is not to say that industry does not ask significant questions. Its questions differ not in degree but in class: they are less likely to be fundamental than applied, less about understanding underlying processes than about feasibility at the industrial scale. Business and industrial criteria frame the applied category; it is these criteria that impart significance to the questions asked. Not so in academia. Asking the right question is what some academics call ‘Research, with a capital R’. There is no algorithm, rubric, or procedure for finding that question. It is up to chance. One can stack the odds, however; for chance, in Louis Pasteur’s memorable phrase, ‘favours the prepared mind’.

Not only is finding the right question difficult, but preparing one’s mind is hard work, and that contributes to the poor quality of many papers produced at universities. This quality reflects inferior research. The greater part of preparing one’s mind is reading extensively, both within and without one’s field of study. The effort is huge. That we understand English brings little comfort. But not knuckling down to the effort is only part of the problem. The other part is the element of blissful ignorance. The English historian Eric Hobsbawm, in an interview with Simon Schama, a fellow historian, summed it up poignantly when he despaired of historians’ (read engineers’ and researchers’) ‘using the by-products, not the thinking’. What are the by-products of our profession? I suggest they consist in two activities that, properly engaged in, support good research/Research. They are method and procedure, the latter coupled with technique. Method, as implied in the philosophical label ‘scientific method’ and understood by eminent scholars, refers to the logic we use to validate a thesis or hypothesis, to argue a case, or to work to a solution. (Method is not to be confused with methodology, which is the study of method, an activity that exercises the minds of philosophers.) Many of the engineers with whom I have engaged have only a passing knowledge of the logical processes they use to arrive at technical answers. At best, they are unaware of how they think but get it right, or they hide behind statistical tests in the mistaken belief that rigour leads to objective truths; at worst, they run foul of the asymmetries and rules of logical structures. Flawed logic calls into question the validity of research. Nevertheless, papers demonstrating flawed logic continue to be submitted for publication. Some of them slip through the net of peer review and make it to press.

Whereas method is abstract and remote for engineers, empirical procedures are concrete and reassuring. They set out which tests will be conducted, how these tests will be run, and the techniques (the instruments) that will be used. How many engineers know that all these activities are governed by theory—theory appropriate to the principles underlying a technique and theory appropriate to the problem that is the object of a study? Yet the Publications Committee receives papers in which procedures and techniques are disconnected from the problem. It is as if understanding (from theory and principles) and judgment have been suspended. But like specious arguments, procedures, techniques and their inscriptions (graphs and tables) display the trappings of science. They dazzle researchers as much as these practitioners hope to dazzle readers. The satirical BAHFest (Festival of Bad Ad Hoc Hypotheses) plays on this sophistry (much as the Ig Nobel prize ‘celebrates’ ‘trivial questions pursued as research’). The misuse of method and procedure is, I suggest, the ‘by-product’ that, along with asking trivial questions, displaces thinking in poor research.

I have not mentioned communication, the writing of academic papers. It stands apart from questions, methods and procedures in that it does not correlate with good or bad research. Bad writing, however, might well relegate good research to the peripheries of science, if not to oblivion—unless a sympathetic editor, looking through the mist of text and discerning forms of value, gives the authors a chance to redeem themselves.

The papers in this issue of the Journal are not collected around a theme. Consequently, there may be something here that interests a broad section of readers. Ask yourself what value you attach to the point or points of interest you find in any of the papers.

P. den Hoed

†This editorial arose out of introductory remarks I made at a meeting of the Publications Committee in March this year. It owes much to the discussion that followed those remarks. I am indebted particularly to Dick Stacey and Rodney Jones for their thoughts in personal communications following that meeting. I trust that I have not misrepresented them. I have also had many hours of discussion with two senior colleagues—Hurman Eriç, Chamber of Mines Professor of Extractive Metallurgy, and David Lewis-Williams, Professor Emeritus of Cognitive Archaeology, both of them at the University of the Witwatersrand.