Skepticism on ‘Fringe’ and ‘Mainstream’

This article first appeared in The Skeptic, Vol.20, No.2, pp 24-29.

Reproduced with permission

My main question here is: How far and in what ways should skepticism be directed at ‘mainstream’ thought, as opposed to ‘fringe’ thought?

I exemplify particularly (but not exclusively) from my own discipline, linguistics.

Skepticism in any given intellectual discipline is typically directed at ideas towards the outer edges of that discipline. It generally focuses upon positions within the discipline (or dealing with its subject-matter) which are not merely controversial, but so controversial or ‘strange’ that they can reasonably be called ‘fringe’ or ‘non-standard’; they are strikingly different from the ‘orthodox’/‘mainstream’ positions currently upheld by the majority of suitably qualified thinkers, and are not regarded (by these qualified thinkers themselves) as serious rivals to the orthodox positions.

There is a contrast here with cases where the qualified thinkers are themselves seriously divided, so that there is no orthodoxy or consensus – although some positions may still be more controversial than others. Even here, however, comments of an overtly and specifically skeptical nature are often rare in the ‘mainstream’ literature itself. I return to such cases later.

In many cases, the differences between ‘mainstream’ and ‘fringe’ involve some of the salient ‘facts’ of the discipline, but they may also involve explanations of agreed facts, methodological preferences, etc, and in some of the best known instances they involve entire theories. Sometimes the ‘fringe’ position or theory is novel, sometimes it is based on traditional beliefs from one culture or another, sometimes it represents continuing adherence to a position now deemed by ‘mainstream’ scholars to have been superseded (this may or may not be known to its advocates). But positions of all these kinds, in so far as they have any influence in society, are the obvious targets of skeptical attention. Such attention is much less often directed at the ‘mainstream’ itself.

For instance: skeptical medical experts consider (and usually then debunk) non-standard medical theories (‘alternative medicine’, etc) or novel cancer cures developed by unqualified experimenters; skeptical astronomers deal with UFOs or Velikovskyan catastrophism; skeptical linguists consider xenoglossia, Reverse Speech or wild philology. It is much less common to find a specifically skeptical treatment of orthodox theories of the aetiology of diseases, or of the development of a supernova, or of the historical phonology of Tamil – especially in a skeptical publication.

Note that in identifying the ‘fringe’ positions as obvious targets of skeptical attention we are not saying that they are necessarily wrong, or that the ‘mainstream’ that rejects them is right. Typically, of course, the ‘fringe’ is wrong, or very probably wrong; but only a scientific neophyte would imagine that current orthodoxies represent final truth. Orthodoxies change, and indeed some orthodoxies were at earlier times deemed highly ‘fringe’. Wegener’s theory of continental drift was almost universally rejected as ridiculous when first proposed.

However, it must be borne in mind that theories which are basically strong do usually triumph in the medium to long term. This has been exemplified this century by (for example) the theory of continental drift, the view that the Norse explorers of the tenth and eleventh centuries reached at least as far as Newfoundland, and the ‘minor-planet’ version of cosmic catastrophism promoted by writers such as Clube & Napier and applied by Alvarez (still not without controversy) to the issue of the demise of the dinosaurs. And changes such as the eventual ‘mainstream’ acceptance of these theories generally involve the ultimately overwhelming accumulation of supporting evidence and theory, not a shift towards a more positive assessment of the specific arguments or claims of earlier advocates from the ‘fringe’. It is also true that the outlook for ‘non-standard’ theories proposed by those with a good training in their chosen fields is considerably brighter than for those developed by rank amateurs. And for every Wegener (or Galileo!) there are hundreds of forgotten ‘non-standard’ thinkers who were simply wrong. 1, 2

Now the explanation for the neglect of the ‘mainstream’ by skeptics may seem obvious enough. The skeptical enterprise involves subjecting the claims of ‘fringe’ thinkers/practitioners – who are typically not themselves academics or professional researchers – to tests of the kind which we imagine (usually correctly!) are routinely undergone by the claims of ‘mainstream’ scholars. The latter receive intensive and prolonged training and examination in the basics of their disciplines; their preliminary drafts and initial pilot studies are discussed and criticised by their colleagues and others; their ‘finished’ books and papers are exposed by house and journal editors to anonymous peer-review and often rejected or returned for re-writing, and – if and when published – are assailed in a barrage of further criticism; their experiments are replicated again and again in a systematic and determined effort to find sources of error or alternative explanations.

This is not the normal career of a ‘fringe’ publication, which is typically a book written at a fairly popular level (though there are some very scholarly people on the ‘fringe’), and is published by the author or by a press with no academic pretensions. Sometimes a larger or more prestigious press risks such publication. This is what the academically unpublished Velikovsky achieved in 1950, and leading scientists of the day objected furiously to the publisher’s decision to advertise his book as a major breakthrough in astronomy and ancient history. Nowadays there are many such books, but they are usually presented and advertised in such a way that the careful reader will realise that they are not ‘mainstream’.

‘Fringe’ pieces of article length usually appear in journals which circulate largely among those who share the author’s basic ‘non-standard’ position. One such journal is the very interesting Aeon, a neo-Velikovskyan organ which (along with the web-journal Thoth) promotes ‘Saturnism’. There is a review process, but inevitably the authors and the reviewers – who form a small, close-knit group, very much on the edges of the relevant scholarly worlds – agree in upholding the basic ideas which divide them from the ‘mainstream’; reviewers will generally attack only points of detail. (Some journals, such as Cryptozoology, are liable to be judged ‘fringe’ because of their subject-matter, but seem to handle their material in a suitably scholarly manner. They may still struggle, however, to find reviewers who are qualified but not too obviously committed to the cryptozoological enterprise.)

Those scholars and other well-informed people who are sufficiently committed to the skeptical enterprise provide the processes of testing and review which ‘fringe’ publications would otherwise lack. Naturally, their conclusions and assessments are frequently – indeed usually – negative; and the relevant ‘fringe’ writers generally ignore or reject the skeptical findings (producing counter-arguments which they at least perceive and represent as stronger). This pattern has led to a situation in which some ‘fringe’ writers do not invite comment from skeptics or indeed from ‘mainstream’ scholars, and do not attempt scholarly publication. Others initially invite academic comment in the (perhaps naïve) hope of rapid acceptance (and ensuing fame), but meet with searching questions and objections; these thinkers quickly become disillusioned at what they regard as the biased conservatism and hostility of orthodox thought, and reject either ‘mainstream’ methods per se or the manifestation of these methods which they themselves have encountered. They too proceed to publish in non-academic forums. (Such writers often come to exhibit a curious tension between the rejection of hostile orthodoxy and a continuing desire to be embraced by scholarship; the latter is manifested in appeals to published academic writings at any point where these can – even if with distortion – be adduced in support of the ‘fringe’ claims.) In particular, ‘fringe’ writers tend to turn against the peer-review process, which appears to them (from their necessarily limited perspective) to be principally a means of suppressing novel ideas and maintaining the ‘status quo’. (This last is, in fact, a concept which looms much larger for ‘fringe’ thinkers than it does for professional ‘mainstream’ academics whose work is embedded in the explosively expanding literatures of unstable and rapidly changing disciplines!)

However: it is, of course, true to a degree that peer-review and the rest of the process of testing and review do tend to limit the publication of non-standard or novel positions. In a sense, of course, this is as it should be: disciplines are large and complex bodies of accumulated knowledge and theory, and ground-breaking changes need to be justified in strong terms. But it is natural for the advocates of new ideas – especially if they are not themselves fully familiar with the conventions of academic publication – to feel that their views are not being treated fairly when their papers are rejected on the basis of what either are or at least appear to be obscure technicalities or matters of pure convention. It is easy to laugh at the ‘fringe’ author who, for instance, refuses to include references to a tradition of scholarship which he believes his work has superseded at a blow (one should acknowledge a tradition even in arguing that it is thoroughly wrong); but not all cases are so straightforward. Editors and their readers (who should not, of course, know what qualifications or prior publications an author does or does not have) must be forever vigilant to ensure that they are not rejecting papers principally because the ideas appear too ‘iconoclastic’ – or indeed, because they themselves do not agree with them, for whatever reason. There is constant danger of this.

Another relevant factor involves the view taken by some editors that one should not comment in an academic forum (such as a refereed journal) upon ideas which have not themselves received academic publication, as in such cases there is no ‘academic audience’ for these ideas. Some even argue that academic publishing should be grounded only in existing thought in the relevant mainstream and should not (ever) be a response to amateur or non-mainstream ideas on the subject in question (one publisher’s reader recommended that the publisher reject my proposed book on fringe linguistics for just this reason). These policies, if rigorously pursued, would obviously block academic comment (even negative comment) on fringe ideas.

One example of a deep-thinking person whose work was rejected and whose academic career in fact collapsed is John Trotter. In the early 1970s Trotter was a psychology lecturer in Australia. He developed radical, ‘non-standard’ views on the logical and structural nature of language, and incorporated these into papers offered for publication and into his teaching. If his main ideas are correct, much of the basis of linguistic theory and indeed some important aspects of contemporary thought on logic must be wrong. His papers were rejected by editors and reviewers whom he regarded as inadequately informed, and he was allegedly discouraged from presenting his views to students. Since that time he has operated only as a private scholar, publishing his extremely complex and challenging ideas (which he continues to develop) in self-produced pamphlets.

I am one of the very few academics who have examined Trotter’s ideas in detail, and I have corresponded with him. I stress that I regard him as mistaken in some of his most important claims. It also has to be said that Trotter’s tone is often dogmatic and arrogant; for instance, his study notes for his students listed some ‘mainstream’ linguistic concepts which, as he stated without argument, should not be used in their essays. In this respect he does appear ‘fringe’-like and one can imagine other scholars regarding him with suspicion and indeed distaste. Nevertheless, he is altogether at a higher intellectual level than most ‘fringe’ thinkers, and in my view his ideas (whether ultimately judged right or wrong) could well warrant exposure in a suitable forum and fair scrutiny.

The same may be true of a range of other ‘non-standard’ thinkers. I have already referred to cryptozoologists; some (not all) of these researchers are extremely cautious and scientific in their approach and accept nothing without good evidence, but still struggle to publish in general zoological journals (and have in some cases given up the struggle). And, among all the hyper-diffusionist nonsense about lost civilisations and early trans-oceanic voyages of discovery, there are a number of cases where an apparently anomalous artefact or an alleged inscription deserves less hasty debunking and more scholarly attention. All fields probably have a few potential Wegeners lurking, and at any one time some of these people are probably getting a raw deal.

As noted, ‘fringe’ thinkers are generally convinced that ‘mainstream’ scholars, journal editors, etc are rejecting their views not because these views really are faulty, but instead out of hidebound conservatism, bias, fear of losing their own status when their cherished ideas are overthrown, etc, etc (this applies whether they are actually getting a raw deal or not). One of their favoured responses to skeptical criticism is to demand that skeptics act in a more even-handed way, by directing a fair proportion of their skeptical attention at the ‘mainstream’, which in their own view is much weaker than its representatives would have us believe. And Trotter, for one, has urged the Australian Skeptics to stop worrying so much about ‘fringe’ thinkers (however influential) of no intellectual significance and to concern themselves instead with what he sees as the less obvious but disastrously damaging failures of the ‘mainstream’ – notably with the philosophical under-pinnings of contemporary logic and linguistics, which he regards as grossly inadequate.

Of course, such a proposal may be in part a tactic for distracting attention from ‘fringe’ writers’ own weaknesses. Nevertheless, it is a demand which deserves an answer. A fair answer might be: ‘We have done this – and we find that the ‘mainstream’ generally holds up much better than you do’, or ‘We have done this – and we have indeed found major weaknesses which the ‘mainstream’ should address (no comment here on your own weaknesses!)’; or at least ‘We will do this’.

In contrast, many scholars confronted by skeptics trained in their own field take the view that skepticism really is unnecessary in the context of ‘mainstream’ thought. They believe that the safeguards outlined above really do work well enough to obviate the need for specifically skeptical examination. In my own field I have been asked what difference there is between skeptical linguistics, as applied to the ‘mainstream’, and just plain linguistics, conducted within the usual academic constraints. This view is understandable, and, as noted, is not entirely wrong; but the amount of doubtful material which achieves serious publication suggests that additional vigilance is indeed needed. Linguistics is, in fact, among the fields where this may be especially useful; see below for some of the reasons for this and some examples. 3

And it is certainly arguable that the degree of conservative bias which inevitably affects publication and acceptance of novel ideas does mean that some of the more obviously ‘mainstream’ works which are published may indeed owe too much of their success to their ‘mainstream’ status. It is almost as easy for a professional journal to be more tolerant than perhaps it should be of a paper whose conclusions do not threaten the basis of the orthodoxy upon which its editors and others have rested significant parts of their reputations, as it is for Aeon to accept less than perfection in a Saturnist article.

A recent, particularly gross case of this kind involved the 1996 ‘Sokal Hoax’, which was admittedly located in the rather surreal intellectual world of postmodernism. Briefly, Sokal wrote a spoof paper on the interface between physical theory and postmodernism, using highly favoured postmodernist motifs and arguments to support a claim which was in fact (quite transparently) utter nonsense (it implied an extreme relativist view of macro-physics). The paper was accepted and published as a serious submission by Social Text, one of the key journals in the relevant field, and the repercussions continue to reverberate. But at a more modest, less obvious level, there is no doubt that views in accordance with the prevailing orthodoxy (or sections of it) have a greater chance of publication – to such an extent that a certain proportion of what is published is probably not as soundly based as one would like to expect.

There are also other factors which may make it more or less difficult to publish. The Sokal case illustrates the fact that some aspects of postmodernism are currently very ‘trendy’ and indeed ‘politically correct’; papers espousing the relevant views are liable to be favourably regarded. Another, not unconnected area which is currently much in favour is ‘multiculturalism’ (viewed positively), and another is ‘environmentalism’, at least in its more extreme forms (see below). Even anonymous reviewers (whose identity is known to the journal editor, at least) may prefer not to appear hostile to papers written within these frameworks of ideas (though there may be much to object to); and in public presentations (at conferences and such) where one can be identified it often requires considerable courage to speak in criticism of such an offering.

This trend also means that papers endorsing views contrary to those in political favour may struggle to achieve publication, even if they (and their authors) are otherwise sound; or, if they do achieve it, they may then be subjected to withering and arguably biased criticism. This is what many believe happened to the anthropologist Derek Freeman when in the early 1980s he began to publish his strongly-expressed criticisms of the theoretical assumptions, methodology and conclusions of Margaret Mead in her seminal work in Samoa. In the view of Freeman and his followers, the rejection of his ideas by many ‘mainstream’ anthropologists (perhaps especially the Americans) was grounded much more in a multi-layered, arguably unscholarly, definitely exaggerated loyalty to the memory of Mead (an American and something of a ‘mother-figure’ to many younger anthropologists) than in any real faults in his own case. Freeman naturally believes that the ‘mainstream’ of his discipline should accept his viewpoint, but after some 17 years this cannot be said to have happened; scholars are divided, with many still regarding Freeman’s ideas as overstated or indeed wrong. In recognition of his struggle, he has been perceived by many as a skeptical hero, and in 1996 he was voted Australian Skeptic Of The Year. As one whose argument is with a large component of the relevant ‘mainstream’, he is a very unusual recipient of such an honour when compared to most other recent awardees, who have been feted for their struggles against ideas of a clearly ‘fringe’ nature. The case continues to excite great interest; there is little evidence of serious compromise, on either side.

Another prominent skeptic who has encountered even more trouble on this front is Ian Plimer. Plimer has suggested that the forces of ‘environmentalism’ (which is the equivalent of a religion for many of its devotees) have reached such a position of political strength that it has become difficult (even for one of his eminence) to publish sound academic papers arguing that some of the claims made are unduly alarmist or may involve misinterpretations of the evidence. He has encountered a surprising amount of difficulty, for one so highly qualified and respected, in obtaining publication for views of this kind, which he has reached on the basis of his professional understanding of the earth sciences. And even when material of this type finds scholarly publication, it is often difficult to secure a fair hearing for it in the popular domain. The perceived need to support the environmentalist agenda is typically too strong. As a result, few non-experts are even aware that there is a well-informed, non-trivial body of dissenting opinion on these issues.

I stress again that none of this implies that skeptics – Freeman, Plimer, or anyone else – are necessarily correct in opposing ‘mainstream’ or majority viewpoints, any more than ‘non-standard’ ideas must inevitably be wrong. The issue is that of obtaining a fair hearing, especially when one is well qualified on the matters at hand. Where this becomes difficult, the need for skepticism about the ‘mainstream’ will obviously increase.

I said before that ‘mainstream’ linguistics is perhaps more in need of skeptical attention than some other disciplines. One reason for this is the relative lack of consensus or orthodoxy in linguistics, and the way in which this lack is handled. I noted at the outset that such cases exist, and linguistics is a discipline replete with them. There are many competing ‘schools’ or ‘paradigms’ within many of the branches of linguistics, differing from each other on such fundamental issues as, for instance, the correct grammatical analysis of English sentences as straightforward as Mark has drunk his beer. (This fact has been known to surprise American readers who know some linguistics, as in the USA there is perhaps rather more of a dominant cluster of ‘paradigms’ in syntax and general linguistic theory, as presented to lay or undergraduate audiences: the ‘generativist paradigm’ associated especially with Noam Chomsky. Elsewhere, Chomskyan linguistics, while important, is only one among a number of leading ‘paradigms’.)

Of course, all fields display some differences of this kind. Such differences – and indeed changes over time within the orthodoxy of a discipline – are often seized upon by ‘fringe’ writers as evidence that the discipline in question is shot through with disagreement, inconsistency and instability; it is ready to collapse and is held up only by the hidebound conservatism of the ‘gurus’ of academia. This kind of position is typically overstated and founded in limited understanding either of the discipline itself or of scientific method (or of both). But in the case of linguistics the degree of disagreement is so great that ‘fringe’ critics would seem to have a case. The British linguist Richard Hudson, well aware of this problem, compiled a long list of statements on which a large and diverse sample of linguists did agree. This was a valuable exercise, and examination of the list serves as something of a corrective to any truly extreme view of the case; but many of the statements are rather general, and one does not have to go far into linguistics to find disagreement on very basic points.

And linguists more generally have not been conspicuously effective in dealing with this problem. Many, especially those influenced by postmodernism, seem to adopt a quasi-relativist view on which the disagreement is acknowledged but not presented as problematic. It is common for researchers and writers to announce, or to ask each other, which ‘paradigm’ or ‘framework’ they are using (or what their ‘assumptions’ are). Now of course it is possible for different ‘frameworks’ to address different aspects of the same data or issue, without there being any contradiction. But in many cases the different ‘frameworks’ do appear to be dealing with overlapping matters, and to be incompatible; all but one of them, at least, must be wrong, or at best seriously inadequate. Or, at any rate, no attempt has been made to reconcile them; and this is often not perceived as a matter of urgency. One can make any set of ‘assumptions’ which is not self-confounding, and can ground vast amounts of detailed extrapolation in them, with little fear that anyone will actually try to disprove any of these ‘assumptions’. Non-linguists might be surprised at the limited interest that some linguists show in the question of how far their ‘assumptions’ and ‘paradigms’ – or in some cases even their specific theories – might actually prove demonstrably preferable to alternative ideas.

Of course, one reason for this lies in the nature of linguistic data. Linguistics is an essentially empirical subject; but, in the more abstract or speculative areas of such a domain, it is not always easy to adduce decisive reasons or evidence for preferring one account or analysis to another. However, it is surely better to seek to address this kind of issue with whatever decisive evidence may be found, rather than to forge ahead at great length with any one ‘paradigm’ in circumstances where there can be little confidence that it really is the best available. (But a further problem here lies in the fact that different ‘schools’ do not by any means always agree even on what is valid evidence: some prefer cross-linguistic typological data, some deal rather in more abstract analyses of individual languages and/or in the often dubious judgments of native speakers about what does or does not occur in their own usage.)

The training of academic linguists and the nature of many linguistics departments contribute to these problems. Some departments have a strong bias towards one ‘paradigm’ or another. Many of these ‘paradigms’ have now developed in such depth and detail that students must spend several years familiarising themselves with one ‘paradigm’ before their grasp of the material is at such a level that they can make fresh contributions at the ‘cutting edge’. Differences within the ‘paradigm’ are discussed, but its basics are typically unchallenged. Many Chomskyan linguists I have met have reported that they have never been required to justify their basic analysis of Mark has drunk his beer (which in fact involves a rather traditional conception of the structure of clauses). And many of the central concepts of each ‘paradigm’ make sense only within that ‘paradigm’. For an equally highly trained outsider, the issues being debated are often meaningless.

This would be more acceptable if there were one leading ‘paradigm’, very generally accepted for sound reasons. As things are, however, the situation discourages advancement towards a more unified discipline through the resolution of differences between ‘paradigms’. By the time some academic linguists qualify, they have invested too much time and effort in the learning of one ‘paradigm’ to contemplate rejecting it or querying its fundamentals, and they know too little of any other ‘paradigms’ even to assess them fairly. In these conditions, many linguists are not overtly skeptical either of their own or of others’ ‘paradigms’. They live to a large extent in separate intellectual worlds (as becomes clear at, eg, postgraduate student conferences). Even the more obviously doubtful aspects of the various ‘paradigms’ are only occasionally subjected, by those who know them well enough to do so, to the close, unbiased scrutiny which they deserve.

Indeed, there is a further tendency, most obvious in the case of some Chomskyan linguists, to take the view that where ‘paradigms’ differ one’s own ‘paradigm’ is obviously correct, and in fact to present one’s own ‘paradigm’ as the truth or at least the current undisputed orthodoxy. Students often obtain this impression from first-year textbooks written by American linguists of a Chomskyan bent. Chomsky’s department at MIT at one time offered only one course (out of very many courses in all) about the whole world of non-Chomskyan linguistics; this course was popularly known as ‘The Bad Guys’ and the ideas involved were treated with some disdain! Things are probably better now; but even linguists whose work is quite distant in its concerns from the main foci of generativist interest may say things like: ‘I suppose I ought to re-cast my ideas in the light of Chomsky’s new framework’ – without necessarily having been persuaded that this new framework really is the best available.

In addition, most currently fashionable linguistic theories fail at many points, making numerous predictions which are not borne out, or avoiding this only at the cost of insufferable non-specificity or a degree of abstraction that precludes empirical testing. This applies to more legitimately abstract ‘core’ areas such as grammatical theory, where many differences – especially between ‘schools’ as opposed to within them – do not seem readily resolvable from evidence. It also applies, very obviously, to some quite strictly empirical sub-fields such as second language acquisition, where there are competing, often simplistic-sounding theories, all of them apparently falsified by large amounts of existing data. And there are also major issues of this kind in ‘trendy’ areas such as feminist/postmodernist linguistics and some areas of sociolinguistics, where there is a great deal of tendentious ‘theorising’, some of it apparently conducted for its own sake and very little of it susceptible to empirical testing. Linguists of these kinds love ‘theory’ and often appear to ‘over-theorise’; they also try to develop precise (but often empirically unverifiable) theoretical definitions of notions (such as ‘speech community’) which in fact seem to be incorrigibly vague. In addition, some of the argumentation used is fearfully loose and replete with non sequiturs. This is perhaps especially common in overtly feminist work, where it is often apparent that politically favoured conclusions and analyses (not necessarily the same ones over time!) are sought from the outset and are not subjected to adequate criticism.

For instance, one recently published and very highly acclaimed book on linguistic variation commences with a long critical review of earlier work on this theme, identifying many unproven assumptions which may have distorted the findings of this work. Much of this criticism is itself reasonable; but the author then sets up rival (and politically preferable) hypotheses/assumptions and presents them as very likely to be correct without rehearsing any empirical evidence to support them or even referring to texts where such evidence is presented. And some of these new assumptions do not appear especially plausible, or at least appear oversimplified; one can certainly not simply regard them as almost self-evidently correct.

In this kind of work, another common reaction to the existence of rival analyses is to ‘deconstruct’ these analyses, or at least those which the linguist herself does not favour. This postmodernist and indeed often relativist strategy serves to draw attention away from the (hopefully empirical) question of which analysis is preferable, to the interesting but logically separate question of the motivations of those who proposed the analyses and the intellectual influences upon them.

Alarmingly, some linguists are so committed to their ‘paradigms’ (etc) that they are willing to adopt ridiculous or empirically empty analyses (or even to ‘fudge’ data) rather than reject the relevant ‘paradigm’. Some groups of responsible, empiricist linguists (the ‘English Empiricists’ associated with the University of Reading, many typologists, the highly critical Geoffrey Sampson) have been drawing attention to this problem for years, but as recently as 1999 I was in a discussion (on a conference paper on this theme) in which the point was apparently new to some and had to be hammered home again. Some linguists have adopted truly ludicrous positions in this kind of context. The creationist linguist Mark Baker (who admittedly has an ‘axe to grind’ here) has denied that Italian is descended from Latin, because a highly specific and contentious theoretical claim suggests that this is impossible.

In this situation, skepticism about much of what is published in the linguistic ‘mainstream’ is clearly warranted. Amorey Gethin, a not ill-informed if ‘one-eyed’ critic, has argued that the entire discipline of linguistics is essentially nonsense. While I would naturally not go this far, I can understand an outsider reaching such a conclusion. And a number of insiders, thoughtful linguists who have been more able than most to remain independent of the various ‘paradigms’, have written of these matters in an essentially skeptical way (while not necessarily identifying as skeptics). The most prominent of these is Sampson, who has antagonised some other prominent linguists by arguing very persuasively that their pet theories are empirically empty or obviously contradicted by inconvenient data. Sampson, in fact, goes at least some way along Gethin’s road, suggesting that many of the unexplained facts and theoretical issues debated by linguists may find their solutions in other domains such as psychology, and that – while there is a clear role for linguistic description – a truly valid general linguistic theory would thus be minimal in scope. There are also some papers arguing along these lines which should be better known than they are, including a number by Hammarstrom. The most important of these (in a 1971 journal) illustrates how the published views of very eminent linguists may appear ludicrous when looked at in a different (more realistic, more common-sense?) way – and without undue respect for their reputations. However, there is clearly a need for much larger amounts of overtly skeptical activity of this kind from within the ‘mainstream’ of linguistics (and other such fields).

It should be repeated that the level of critical thinking among professional linguists is obviously much higher than among ‘fringe’ linguists. If one were faced with a choice between a ‘mainstream’ linguistic position and any given fringe view of the same issue, one would be wise to accept the former. But in the ‘human sciences’ it is easy for scholars – despite assurances to the contrary – to be tempted to go beyond what the evidence and reasoning will bear, especially where they have theoretical ‘axes to grind’ or where they are ideologically motivated. As I have illustrated here with linguistics, a renewed dose of skepticism (whether or not so identified) can often be of great value in such a context. And cases such as that of Plimer suggest that even in the ‘mainstream’ of the so-called ‘harder’ sciences a degree of skepticism is by no means out of place.

But if this skepticism is to be grounded in adequate knowledge of the relevant disciplines, the observations of enthusiastic amateurs will not be enough. Outsiders who comment on technical disciplines are seldom sufficiently versed in them, and it is easy for them to appear ‘fringe’ themselves (as indeed Gethin does at times), even in cases where particularly intelligent amateurs are doing their best to deal with the difficult subject matter. We must interest scholars themselves in the pursuit of skepticism with respect to the ‘mainstreams’ of their own disciplines.

Notes

1) I take it, then, that the main focus of skepticism is, or should be, the approach of those on the ‘fringe’. Our legitimate targets are our targets because they are unscientific, irrational, wilfully ignorant of well-known counter-arguments, etc – not merely because they are (currently, at least) believed to be mistaken.

2) As noted, the advocates of ‘fringe’ positions, even if not formally trained, do not always lack the knowledge needed to discuss the more sophisticated and difficult ‘cutting-edge’ issues with which the relevant ‘mainstream’ is concerned. Many do not have such knowledge, of course; but there are also some extraordinarily well informed people out there with highly non-standard views.

3) There are many ‘mainstream’ scholars, of course, who also reject the skeptical enterprise in its more usual form as directed at the ‘fringe’, or at least decline to be involved in it themselves. This is partly because they regard most ‘fringe’ beliefs as unworthy of critical attention (rather like Trotter, though they would not share his views about the ‘mainstream’). It has to be said that some scholars who think like this reveal limited awareness of the difficulties lay people have in distinguishing between the ‘fringe’ and the ‘mainstream’, or between well- and badly-founded ideas. Naturally they themselves will not be seduced by superficially plausible but ultimately ridiculous ideas in their own field, but that is not the point. It could be held that they are ‘living in ivory towers’ and in some cases more or less deliberately abdicating the responsibility they surely have (collectively, at least) to inform an otherwise vulnerable public on such matters. Ian Plimer has demonstrated very well indeed how eminent academics can shoulder such responsibilities. (But another factor here is reluctance to give credibility to ridiculous beliefs even by challenging them. This latter is obviously an arguable and sincerely held position, and in some cases it may genuinely appear that ignoring a ‘fringe’ position is indeed the best course.)

Newbrook, Mark. 2000.

Dr Mark Newbrook is a skeptic, linguist and football hooligan. He managed to combine two of these interests to help pioneer the field of Skeptical Linguistics.