Fluxo Contínuo

Technoscience, regulation and language manipulation


Juan Bautista Bengoetxea [a]
University of the Balearic Islands, Spain


Revista de Filosofia Aurora, vol. 33, no. 58, pp. 228-244, 2021

Pontifícia Universidade Católica do Paraná

Received: 24 May 2020

Accepted: 17 September 2020

Abstract: The article focuses on some discursive defects that influence decision-making around issues of science and technology (technoscience). In particular, the nature and use of the linguistic phenomenon known as bullshit are analysed, and the results of this analysis are placed in the general context of the controversy about climate change. The conclusion emphasizes the relevance of avoiding confusion and humbug in the information available to the public and linked to decisions in the realm of science policy and regulation.



Keywords: Technoscience, Language, Bullshit, Regulation, Climate, Controversy

Reflection about science owes much to the ideas, concepts and theories that philosophers of science have proposed over more than a century of disciplinary work. At the outset, it was understood that philosophy should examine rational questions about scientific knowledge, not so much social, economic, political or sociological ones. Like a house of cards, however, this image collapsed in the 1960s when, among other things, it was openly accepted that science has much to do with technology and with social issues, to the extent that science is linked to military and civil achievements through various engineering enterprises (HARRIS; PRITCHARD; RABINS, 2009). This suggested that epistemology had to make its own way along more practical paths, closer to real social dynamics.

Regulation—some speak of 'regulatory science' (TODT; RODRÍGUEZ; FERNÁNDEZ DE LÚCIO, 2010, pp. 44-49)—and, in more socially extended terms, science policy, constitute an activity that advances almost imperceptibly along the corridors of public perception with the aim of stitching together the external remnants of technoscience (its social part) and the internal ones (supposedly, the rational ones).[2] As a consequence, regulation has been placed on both sides of technoscience: on the outside, insofar as it amounts to the public management of budgets and legality, and on the inside, since it aims to raise the standards of research quality and of technoscientific control processes. This allows us to distinguish two types of relationship between regulation and technoscience: regulation for science, on the one hand, and science for regulation, on the other. The second type is also known as 'science-based regulation'.

The text is organized into four sections. The first examines the relationship between regulation and science, as well as the possible influence of introducing bullshit strategies into this relationship. The second section deals with the role of expertise in cases of technoscientific controversy and with the danger of bullshitters' actions. The third section highlights a very relevant sort of bullshit (golden bullshit), deployed in order to create confusion in some important discussions on science policy; specifically, some details of the climate change controversy are briefly presented. The last section offers several conclusions aimed at emphasizing the importance of avoiding confusion and deception around the information people have about decision-making in science policy.

Science Policy: Regulation for Science and Science for Regulation

Since the early 1990s, a new trend that acts and reflects on technoscience has developed in philosophy and some social sciences. It is known as the 'scientific policy or regulatory movement' (BRUNNER, 1991). It is an approach that protects and fuels the development of public, systematic, fast and effective processes for decision-making. In 1968, Harvey Brooks divided the field of science policy into the two parts already mentioned, regulation for science and science for regulation, a division that clearly treats regulation and technoscience as distinct activities rather than as one and the same. There is little doubt about this, especially if we recognize that one of the purposes of regulation is to strengthen the relationship between technoscience and society. Emphasizing this relationship, according to regulators, would make the advancement of technoscience sounder (this would be regulation for science), and the field would help make decisions that are better informed epistemically, socially and politically (in the democratic sense).

However, a special guest star is still missing when it comes to saying something about regulation. The philosophy of technoscience did not show up until a few years ago, and unfortunately the theoretical leaders of regulation have not noticed it. Doesn't philosophy have anything interesting to say about science policy? Hadn't we settled that it was the philosophy of science, along with the history of science, that had linked these issues to cultural matters since the 1960s?[3]

Regulation and decision-making are disciplines, activities, and commitments that govern technoscientific issues. The former comprises many activities that take shape through public decision-making, and at least for that reason they deserve to be taken into account. But inquiry into science policy has often been placed in the hands of disciplines other than philosophy. After the Second World War, to mention a particular fact, it was Harold Lasswell who coined the expression 'science policy', and since then various intellectual traditions have dedicated themselves to examining the subject, although none of them has been of a philosophical nature. For example, some have carried out political studies (NAGEL, 1994) and others have pursued inquiries in science and technology studies (STS)—Lambright (1998), Sarewitz and Pielke (2007), and Britt Holbrook (2015)—but hardly any work has approached the subject from an inherently philosophical point of view. I think that philosophy actually has a lot to say about it, even in terms of conceptual analysis.

In epistemology, ethics, or whatever field in which philosophers show their best skills, there is at least one venerable sort of scrutiny, typical of the twentieth century, which aims to elucidate our thinking tools and to construct good maps of reality. If we consider, as I do, that the philosophical analysis of concepts and discourse (language) is a laudable task, it may be interesting to say something about a concept that apparently does not belong to philosophy.

Language manipulation: the bullshit phenomenon

'Bullshit' is a word that bears peculiar connotations. In philosophy, it has been left untranslated in several languages, likely because of those slang connotations.[4] I shall use it as a philosophical loanword. What is bullshit (the notion or the fact)? It could be said that it is a sort of discourse whose goal is to confuse the listener, reader or receiver (see PENNY, 2005). It looks like just the opposite of clarifying what is said. It is not an elegant or precise term, but it connotes something that speakers understand quite well. The term 'cheat' is almost a synonym, although some differentiating nuances emerge here. It is crucial to see that bullshit works well when a language user aims to confuse and puzzle a listener or receiver. A bullshitter is, at the very least, a generator of confusion.

In any case, I consider the notion of bullshit to be solid ground for a conceptual inquiry into the manipulation of language, first of all because it represents an intentional act. It always points both at something and at getting something (BENGOETXEA, 2017): integrated without much shame into, say, advertising or propagandist and political discourse, and amplified by the media and certain rhetorical strategies (BLOCK, 2019, pp. 58-61), it has become a sophisticated technique. How are we to conceive of a conceptual category as 'shameless' as this one? Until recently, the most common way to examine this concept was through Harry Frankfurt's agency approach (FRANKFURT, 2005). In response, Gerald A. Cohen (2001) proposed several alternative theoretical suggestions, among which the content approach stands out. A third way of treating the topic is the one Vanessa Neumann presented in 2006, which I shall call the functional approach.

Frankfurt's approach focuses on analyzing bullshit agents. His starting question is this: when do we know that someone is causing confusion? When do we know she is messing things up? According to Frankfurt (2005, p. 33), if we want to define 'bullshit', we have to conceive of the term in this way: when the sender of a message does not care at all about the truth value of the message, but tries to hide this lack of interest, then she is bullshitting. Frankfurt, therefore, aims to identify the agent, not so much the product she creates.

Cohen's approach is framed in ontological terms. His target is bullshit's content: what is being confused? What is hidden through confusion? His limitation is that he pays little attention to common, everyday situations and instead examines polished, sophisticated, and subtle academic contexts. We could say that, for him, 'bullshit' and 'lack of clarity' (or 'folly', 'nonsense') are synonyms. However, neither Frankfurt nor Cohen can satisfy all the intuitions that people usually have when faced with a bullshit phenomenon. It seems to me that there is something else we must take into account. As a complement, I would like to underline that Neumann's functional proposal adds something about the mechanisms and results of a bullshit phenomenon. According to her (NEUMANN, 2006, p. 203), bullshit is a genre of discourse that aims generally to distract or obfuscate in order to give rise to a certain desired effect. This appears to be a valid way of seeing how bullshit mechanisms operate in political discourse. In particular, I am interested in highlighting two aspects of bullshit that Neumann identifies: on the one hand, whoever produces bullshit usually presents or describes herself as someone different from (usually better than) who she is, wishing to offer a desirable impression to the listener or receiver of her message. On the other hand, whoever talks bullshit also tends to pursue the identification of groups of 'like-minded' people that help justify her courses of action (NEUMANN, 2006, pp. 205-206).

However, in the context of discourse manipulation, we should not confuse bullshit with lies.[5] A statement or speech can be true and bullshit at the same time, since it is unclear and, moreover, its immediate goal is to create semantic opacity. Therefore, 'bullshit' and 'lie' are not synonymous terms. There is something else, something similar to a fake bet, but not identical to it. Often, bullshit amounts to running a strategy or plan based on an underlying mixture of true and false statements in order eventually to generate confusion and a feigned contempt for truth values.

Between technoscience and politics: experts, controversies and ‘bullshitters’

When it comes to crafting scientific or regulatory policies, it is usual to take various scientific results into account, although not always adequately. The discussion in this regard is open (see FUNTOWICZ; RAVETZ, 1990) and increasingly complex. Unfortunately, more and more effort is being invested in developing and applying bullshit to generate confusion. It is true that the interface between technoscience and science policy is complex and that it requires many virtues, both epistemic (technical and scientific competence) and regulative (regulatory wisdom and 'decisional' calm), since it is in terms of these virtues that we shall likely be able to scrutinize and shed light on the notion of scientific counseling.

This question has a Platonic pedigree (Republic): how can we get reliable expert advice regarding technical issues that have to do with policy and regulation? Advice is increasingly precise and demanding in science policy, and this situation has opened up a wide field for new questions. One key question is this: whom should we trust with the expert advice addressed to the regulatory adviser?[6] This is a crucial turning point where we can find a basic element of the bullshit mechanism. That is to say: under the umbrella term 'expert' (expertise), a whole set of controversies has arisen and, unfortunately, these have taken the form of an intellectual discussion led by those who progressively design lies, hoaxes, frauds, inventions, rumors, whispers, falsehoods and a whole myriad of pitfalls. What the expert supposedly throws out the window, the ideologist brings back in through the door.

I shall point out two common sorts of bullshit that bullshitters tend to introduce into the interface between technoscience and science policy (DOUGLAS, 2006, p. 215):

[1] A lot of policy decision-making is based on empirical evidence, more or less probable, and on technical details—at least, that is how it should be from an evidence-based perspective. From such evidence arises the first sort of bullshit, namely the one that manipulates the complexities of empirical tests (evidence) and the procedures for obtaining them (ABBUHL; GASS; MACKEY, 2013), aiming to trade on them to the point of placing evidence standards on the level of any story. Regulators have to make decisions about very complex and consequential issues that are often very difficult to explain to the general public, and people may feel confused—if not uninterested—by the game of sides for and against various decisions and the alleged evidence on which those decisions rest, especially if that evidence is not definitive. But most decisions are made under uncertainty, and this may sow doubt among the public (see INTEMANN, 2015). This is precisely where bullshit takes advantage to generate more confusion: it constructs statements that are not entirely false—i.e., they are not plain lies—but that inject deep, deliberate flaws into discourse, along with an accompanying bias. The 'spokespersons' of the technoscience-regulation interface will then spread the bullshitters' statements along the ideological or partisan trajectory they favor, without considering any approximation to truth and trying to build a growing critical mass.

[2] The second sort of bullshit is, if possible, even more general. This is the bullshit according to which the basis (empirical certainties) of regulatory decisions employing scientific claims is excessively limited. Why is this bullshit? Basically, because those who use it are in fact applying a universal standard created by themselves and labeled 'scientific'. In this way, an abundance of putative scientific fields has emerged in areas like religion (religious sciences), catering (kitchen sciences), occultism (occult sciences), and many others that are at best merely technical and, for the rest, pure mumbo-jumbo. The crucial thing for the bullshitter is to place under the ubiquitous label 'science' or 'knowledge' everything that suits her interests and pseudo-arguments. As is well known, there is no evidence standard that is empirically certain and complete, not even in the natural sciences. Therefore, we can figure out how far bullshitters can go: they do not have a universal standard, but they pretend to possess and use one.

The aim of bullshitters, therefore, is to create confusion around discussions and points of no interest to them. They usually assert that the empirical evidence that experts make known is not enough, so that it becomes impossible to make definitive judgments about anything. However, their most basic concern is of a different nature: when regulatory decisions are being made, a bullshitter may be troubled by the possibility that technical or expert decisions will lead her to a bad outcome, insofar as technical regulatory decisions might collide with her profits. Instead of discussing the topic at hand, however, she prefers to discredit the most probable evidence and hypotheses, as if they did not meet the standards of the best-developed science. That is, as soon as the experts' decisions go against the bullshitter's side, the latter labels those decisions 'politicized science' or a mere story.[7]

I suggest that these two sorts of bullshit recurrently appear in disputes and controversies about science policy and regulation. Moreover, they are not easily avoidable. The overly technical nature of evidence (sometimes even esoteric) opens many doors to the first sort of bullshit. The second can be redirected more easily, especially if studies on the nature of evidence, uncertainty, and error continue to increase (ORESKES; CONWAY, 2010).[8] This further complicates matters around science policy and decision-making, but it is well known that complex problems usually do not yield simple solutions. There is no universal scientific standard, so this 'insubstantial' concept cannot serve as a wild card, whatever name it is given ('method', 'truth', 'certainty', etc.).

The Golden Bullshit: Decisions on Climate Change

There are many scientific phenomena and facts relevant to science policy, related both to complex matters and to simpler ones. For example, in the case of the regulation of toxic substances, it is opportune—often necessary—to know and manage a great deal of data and results in order to make decisions with a minimum of reliability and sense. Knowledge of animal toxicology, of the biochemistry of substances and of other essential factors is required. The variability and quantity of phenomena and data are such that handling, understanding, examining and using them becomes genuinely difficult and expensive, and adequate analysis requires a long time. Even so, it seems that, from an ethical point of view, scientists and regulators must take all those data and phenomena into account without leaving anything aside. Otherwise, the regulation of, say, toxic substances or health claims on food would not be possible.[9]

However, are our science policy agents and their advisers responsible (accountable) for anything? Or do they simply serve their partisan and economic interests? Do they accept outcomes by default? Do decision committees take 'non-policy' actors and scientific experts into account? In the case of risky technologies, is the voice of those most exposed to risk heard? It is not always that way. It is almost impossible to know exactly all the details of the data and of the relevant phenomena in each case of technoscientific application, and sometimes, in addition, they may clash with the interests of the politicians who hold decision-making power. For this reason, incorporating new actors into decision-making procedures is not always welcome. There is a tendency to leave 'secondary' actors—above all, the people who suffer the risks arising from applications of dangerous technologies—aside and to focus on selected facts of greater direct relevance, rather than considering all the available evidence (PATTEN, 2004, p. 177). I shall call these selected facts 'golden facts'.[10]

I claim that choosing golden facts may involve producing golden bullshit: something is chosen and valued without a high-quality justification. This can be clearly seen in the case of climate change. The complexity of the case requires scientists to examine not only the current climate, but also the climate of previous decades. This points to the study of climate variability—a necessary element for both climate measurements and forecast design. It requires taking into account the Earth's energy dynamics, which is captured through accurate physico-chemical descriptions of the atmosphere, including many particles already identified as responsible for the greenhouse effect.

The amount of data and phenomena is so great that it is tempting to select golden phenomena. I take an example from Douglas (2006, p. 218), according to which the selective use of climatic records serves to generate golden bullshit in discussions about climate change. Recall that the first reliable temperature measurements date from the late 19th century. Based on these, it is known that global warming occurred between the 1890s and the 1940s, followed by a cooling period between 1940 and 1975. Subsequently, around 1975, the Earth began to heat up again, a process that has continued until 2020. Furthermore, the same tendency is expected in the near future.

The aforementioned records alone hardly justify the claim that it is mainly human beings who cause climate change by generating, for example, greenhouse gases. In fact, the first known heating phase began with only a slight increase in the amount of such gases; their production actually increased after 1940, just when measurements indicate that the climate began to cool. The question is therefore the following: if humans affected the climate between 1940 and 1975—through a progressive production of gases—why did temperatures decrease in that period?

I do not intend this question to be naïve, still less if I pose it with an eye on the 1980s. But we live in 2020, and perhaps the issue may look sarcastic today. It seems to me that through it we enter the increasingly widespread game of bullshit. In the 1980s, scientists did not know the cause of the Earth's temperature decrease between 1940 and 1975. Since 1990, however, research has delved into the mechanisms and causes of climate and its changes, and has focused especially on aerosols. It is aerosol particles—including dust and sulfates—that cool the atmosphere; they have a very short half-life, but their influence on the general climate can be dramatic.

Research on aerosols has provided successful predictions in many studies, among which a paradigmatic case is the eruption of Mount Pinatubo (Philippines) in 1991. This eruption released an enormous amount of aerosols, and thanks to their study it is now well known that fossil fuels produce the same effect (DOUGLAS, 2006, p. 218) and also that aerosols cause acid rain. Subsequently, the social awareness that arose in the 1970s around acid rain problems helped reduce the release of sulfates into the atmosphere, which made it possible to reduce the production and use of aerosols that could potentially cool the climate. This measure reduced the excessive amount of aerosols in the atmosphere and, together with the increasing production of greenhouse gases, caused the temperature to begin rising again.

Incorporating aerosol data into studies of the climatic record can be burdensome, since it requires great epistemic effort and much time. However, leaving it aside would generate something like a moral and cognitive 'imposture'. That is, golden bullshit would be knocking at the door. We would be leaving out not merely part of the irrelevant noise in the data analysis, but a basic element of it, so that regulatory decisions based on those results might be spurious. Let us not forget that by 1993 it was already possible to use scientific results on aerosols; the journal Science published the best results in this regard and soon after, in 1994, Scientific American popularized the issue (see CHARLSON; WIGLEY, 1994). There do not seem to be many valid excuses in this case.

Honesty and moral excellence are unfortunately not very widespread, and to this day many skeptics continue to resort to the 1940-1975 period as if the data about aerosols did not exist. They do not mention aerosols; they systematically isolate certain facts, 'infer' bullshit theses from them, and subsequently confuse the public. They wish to turn the critic's role into a bullshit task.

Fred Singer is an example. He has dedicated himself to creating doubts about the climate model, although only in relation to the period he himself selects (1940-1975). He never mentions aerosols. Obviously, this does not mean that there are no problems affecting theories and models of climate change; everybody knows these are not closed topics in any sense. For example, in the 1990s the quality, reliability, and acceptability of temperature measurements from satellites and on Earth raised very serious concerns. Satellites sent up twenty years earlier had been tasked with eliciting data for the previous decade, but their results did not match or even approximate temperature measurements made on Earth. Satellite data did not indicate any relevant warming between 1980 and 1995; terrestrial data did. Taking advantage of this discrepancy, many bullshitters began to parody climate scientists. In any case, a complete analysis of all the data eventually revealed that the satellite and terrestrial data coincide. The procedure for doing so, however, also showed that the complete reliability of satellites is an idealization: they are not technologically perfect (they are fallible), obviously, and they lead to measurement and estimation errors that can be highly relevant to climate predictions. Once the errors were detected and the data analyzed, it was shown that the increase in terrestrial temperature was real. The reevaluation of the satellite data was made public in academic journals and in leading American institutions (the United States National Academy of Sciences, for example). Why, then, are there people who continue producing bullshit?

The appeal of bullshit based on golden facts is crucial to any answer. It is enriched by the use of fake news and rumors intermingled with the use of media—especially online social networks—and partisan political strategies. In this scenario, technoscience becomes a body of knowledge in continual transformation, making it really difficult to know even a small part of its latest achievements. Thus, it is not possible to survey all the elements of evidence, even those belonging to a very small part of a scientific sub-discipline. Human beings are intellectually limited—even if we think in terms of socially distributed knowledge—and this is an invaluable asset for those interested in manipulation and fraud.

One arduous way of trying to avoid bullshit is to resort constantly to critical analysis of a question of technoscientific and social interest. In this way, the path toward general public deception is hindered. A proactive attitude to such an issue—the current high interest in climate change, for example—requires that we commit ourselves analytically to reflecting on it with improved data and arguments, to the point of moving forward by eliminating at least the craziest hypotheses about it.

Conclusion

Scientists and experts inform science policy and regulation advisers and managers. It may look as if a state of 'technosciencecracy' has been reached whereby, whenever the administration is in charge of making decisions of scientific and social interest, specialized and sophisticated procedures of technical reflection and analysis are proposed. However, as Michael Gough (2003) points out, given that science and technology are part of our human society and culture, we must remain very attentive to the ideological contamination that experts—and analysts of expertise (meta-experts: sociologists, philosophers and so on)—may suffer, since the more power political and ideological considerations have in decision-making, the easier it is for an ideological and partisan power to grow that could weaken the scientific, sound nature of evidence-based knowledge.

Gough's desideratum may be welcome, to be sure, but it must be observed carefully. It is laudable to attempt to remove the mask of politicized technoscience, but is Gough himself free of all ideology or politicization? (McINTYRE, 2018, p. 142).[11] What does 'evidence-based scientific knowledge' mean? Basically, we think of the natural sciences and their sub-disciplines. But how do we know whether a set of evidence from the social or human sciences is solid or valid, once the theory-ladenness of observation has been acknowledged? Answering this would amount to something akin to having an evidence standard, which has not been successfully developed to date.[12]

The politicization and 'ideologization' of technology and science can be interpreted in at least two different ways: the negative way, which equates ideology with bullshit, and the more positive way, according to which politicization would simply be the management of science policy and regulation. Today, it is necessary to continuously develop and apply ever better science policies and regulations attentive to the interests of both technoscience and society. One aspect of a good science policy is precisely the refusal to erase the separation between these two opposed interpretations, since political management based on political interests and ideologies cannot be sold as if it were a policy based on science and technologies developed within epistemic and evidence-based frameworks. Again, that would be a fraud. The issue should be addressed both by regulators and by those who attempt to think about it, in order both to identify bullshit procedures within science policies and to develop (and use) good intellectual arguments in their analyses.[13]

References

ABBUHL, R.; GASS, S.; MACKEY, A. Experimental research design. In: PODESVA, R. J.; SHARMA, D. (eds.). Research Methods in Linguistics. Cambridge: Cambridge University Press, 2013. p. 116-134.

ACHINSTEIN, P. Evidence. In: PSILLOS, S.; CURD, M. (eds.). The Routledge Companion to Philosophy of Science. London: Routledge, 2008. p. 337-348.

ALLÈGRE, C. La sociedad vulnerable: doce retos de política científica. Barcelona: Paidós, 2007.

BALL, J. Post-Truth: How Bullshit Conquered the World. London: Biteback, 2017.

BENGOETXEA, J. B. A Gricean analysis of discoursive strategies in decision-oriented science: Bullshit, uncertainty, and meaning. Filosofia Unisinos, v. 18, n. 1, p. 24-35, 2017.

BERGER, M.; LISBOA, M. Ciencia regulatoria / Políticas de regulación basada en la ciencia. Desarrollos empíricos y conceptuales en perspectiva crítica. Administración Pública y Sociedad 7, p. 74-76, 2019.

BERNAL, S. Bullshit and Personality. In: HARDCASTLE, G. L.; REISCH, G. A. (eds.). Bullshit and Philosophy. Chicago: Open Court, 2006. p. 63-82.

BLOCK, D. Post-Truth and Political Discourse. Cham: Palgrave-Macmillan, 2019.

BOK, S. Lying: Moral Choice in Public and Private Life. New York: Vintage Books, 1989.

BRITT HOLBROOK, J. Ethics, Science, Technology, and Engineering: A Global Resource. Farmington Hills, MI: Macmillan Reference USA, 2015.

BRUNNER, R. D. The Policy Movement as a Policy Problem. Policy Sciences 24, p. 65-98, 1991.

CARTWRIGHT, N. Evidence: For Policy, and Wheresoever Rigor is a Must. London: LSE, 2015.

CASTELFRANCHI, Y.; FERNANDES, V. Teoria crítica da tecnologia e cidadania tecnocientífica: resistência, 'insistência' e hacking. Revista de Filosofia Aurora, v. 27, n. 40, p. 167-196, 2015.

CHARLSON, R. J.; WIGLEY, T. Sulfate Aerosol and Climatic Change. Scientific American, v. 270, n. 2, p. 48-57, 1994.

COHEN, E. Science, Democracy, and Stem Cells. Philosophy Today, Supplement, p. 21-27, 2004.

COHEN, G.A. Si eres igualitarista, ¿cómo es que eres tan rico? Barcelona: Paidós, 2001.

DOUGLAS, H. Bullshit at the Interface of Science and Policy: Global Warming, Toxic Substances, and Other Pesky Problems. In: HARDCASTLE, G. L.; REISCH, G. A. (eds). Bullshit and Philosophy. Chicago: Open Court, 2006. p. 215-228.

FRANKFURT, H.G. On Bullshit. Princeton & Oxford: Princeton University Press, 2005.

FULLER, S. La ciencia de la ciudadanía: más allá de la necesidad de expertos. Diánoia, v. XLVIII, n. 50, p. 33-53, 2003.

FULLER, S. What Can Philosophy Teach Us About the Post-truth Condition. In: PETERS, M. A. et al. (eds.). Post-Truth, Fake News: Viral Modernity & Higher Education. Singapore: Springer Nature, 2018. p. 13-25.

FUNTOWICZ, S. O.; RAVETZ, J.R. Uncertainty and Quality in Science for Policy. Dordrecht: Kluwer, 1990.

GALISON, P. Image and Logic: A Material Culture of Microphysics. Chicago: The University of Chicago Press, 1997.

GOUGH, M. Politicizing Science: The Alchemy of Policymaking. Stanford: Hoover Institution Press, 2003.

HARRIS, C. E.; PRITCHARD, M. S.; RABINS, M. J. Engineering Ethics: Concepts and Cases. Belmont: Wadsworth, 2009.

INTEMANN, K. Distinguishing between legitimate and illegitimate values in climate modeling. European Journal of Philosophy of Science, v. 5, p. 217-232, 2015.

KLINKE, A. et al. Precautionary Risk Regulation in European Governance. Journal of Risk Research, v. 9, n. 4, p. 373-392, 2006.

LAMBRIGHT, H. W. Science, Technology, and Public Policy. In: SHAFRITZ, J. M. (ed.). International Encyclopedia of Public Policy and Administration. Boulder, Colo.: Westview Press, 1998. p. 2032-2036.

LATOUR, B. Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern. Critical Inquiry, v. 30, p. 225-248, 2004.

LITOSSELITI, L. Research Methods in Linguistics. London: Continuum International, 2010.

McINTYRE, L. Post-Truth. Cambridge, Mass.: The MIT Press, 2018.

MICHAELS, D. Doubt is the Product: How Industry’s Assault on Science Threatens Your Health. Oxford: Oxford University Press, 2008.

MITCHAM, C.; FRODEMAN, R. New Dimensions in the Philosophy of Science: Toward a Philosophy of Science Policy. Philosophy Today, n. Suppl., p. 3-14, 2004.

NAGEL, S. S. Encyclopedia of Policy Studies. New York: Marcel Dekker, 1994.

NEUMANN, V. Political Bullshit and the Stoic Story of Self. In: HARDCASTLE, G. L.; REISCH, G. A. (eds.). Bullshit and Philosophy. Chicago: Open Court, 2006. p. 203-213.

ORESKES, N.; CONWAY, E. M. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury, 2010.

PATTEN, B. Truth, Knowledge, or Just Plain Bull: How to Tell the Difference. Amherst: Prometheus Books, 2004.

PENNY, L. Your Call is Important to Us: The Truth about Bullshit. New York: Crown, 2005.

SAREWITZ, D.; PIELKE JR., R. A. The neglected heart of science policy: reconciling supply of and demand for science. Environmental Science & Policy, v. 10, n. 1, p. 5-16, 2007.

TODT, O.; RODRÍGUEZ, J.; FERNÁNDEZ DE LÚCIO, I. Valores no epistémicos en la ciencia reguladora y en las políticas públicas de ciencia e innovación. Argumentos de Razón Técnica, v. 13, p. 41-56, 2010.

Notes

[2] Mitcham and Frodeman (2004, p. 3) present a novel approach to this unification under the name 'philosophy of science policy'. Currently, the European Union (European Medicines Agency) is outlining the details of the 'Regulatory Science to 2025' strategy, whose purpose is to promote scientific excellence in regulation (in this case, drug regulation), although the project is extensible to other areas. For more details on regulatory science, see Berger and Lisboa (2019).
[3] On this topic, there is an increasing number of publications devoted to analyzing the relationships between technoscience and the public. An interesting text from the perspective of Feenberg's critical theory is Castelfranchi and Fernandes (2015), especially pp. 171-175.
[4] The Spanish version is entitled On Bullshit, the Portuguese title is On Bullshit: Sobre a conversa, o embuste e a mentira (Mem Martins: Bookout, 2019), and the French edition is called De l’art de dire des conneries: On Bullshit (Paris: Fayard/Mazarine, 2017).
[5] For a distinction, see Bok’s first chapter (1989).
[6] See especially Steve Fuller’s chapter 3 (2003, pp. 41-44).
[7] Along these lines, Fuller's last two sections (FULLER, 2018) are a clear example of a 'storyology' aimed at seeing epistemological controversies as gross cases of partisan disputes, based upon socio-economic interests and completely removed from a 'truthful' use of empirical evidence.
[8] For controversies around tobacco companies discourses, see Michaels (2008).
[9] Obviously, this statement is an idealized ethical claim. Reality is more stubborn than that and forces us to use various auxiliary principles that help in making decisions. Among them, it is worth highlighting the precautionary principle. Klinke et al. (2006, p. 375) point out that talking about precaution means that regulatory actions can be adopted in situations where potentially harmful agents may induce harm to humans and the environment, but where conclusive empirical evidence on the harmful effects is not fully available.
[10] I propose this by analogy with Galison's words (1997, p. 22). According to him, there is a scientific tradition that constructs golden facts so impressive that they are accepted from the outset. A case in point is the positron image Anderson obtained in 1932.
[11] In an article published in Critical Inquiry, Bruno Latour shows his concern and regret at having contributed to some confusing guidance about the truth on global warming. I follow McIntyre (2018) here, who recalls that Latour read in the New York Times that the Republican strategist Mr. Luntz [...] recommended emphasizing that the evidence is not complete, as well as continuing to make the lack of scientific certainty a matter of maximum relevance. In the face of this, Latour (2004, p. 226-227) admits having defended that "there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, and that we always speak from a particular standpoint" [...], "while dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives." Finally, he wonders whether he was perhaps mistaken "by participating in the invention of this field known as science studies" and why it pains him to admit that "global warming is a fact, whether you like it or not."
[12] The literature around evidence-based scientific procedures has grown dramatically over the past decade and its results are beginning to spread beyond natural sciences and engineering. In the case of language studies, for example, see Podesva and Sharma (2013) and Litosseliti (2010). In philosophy, the analyses of the notion of evidence by Peter Achinstein (2008) and Nancy Cartwright (2015) are classic to this day, just to mention two of them.
[13] An example of a good argument is the one Eric Cohen presents in 'Science, Democracy, and Stem Cells' (2004) around the stem cell controversies. On the other hand, thirty years ago Silvio Funtowicz and Jerome Ravetz made a valuable contribution to studies of the relationship between technoscience and uncertainty (FUNTOWICZ; RAVETZ, 1990). Unfortunately, there will always be those (also in theoretical and academic settings) willing to apply their dose of bullshit, as in the case of Claude Allègre (2007) and his poorly argued attack on Funtowicz and Ravetz's precautionary principle. The case of Steve Fuller (2018) and his defense of post-truth—nothing is true, there are only language games in which the strongest (whoever has the greatest critical mass) usually wins—seems, unfortunately, to follow the same path.

Author notes

[a] PhD in Logic and Philosophy of Science (University of the Basque Country). Senior Lecturer at the University of the Balearic Islands.

I would like to thank the following for their financial support: the European Commission's European Regional Development Fund (FEDER) / Spanish Ministry for Science, Innovation and Universities – State Research Agency (AEI) / Research Project 'Estándares de prueba y elecciones metodológicas en la fundamentación científica de las declaraciones de salud', FFI2017-83543-P.
