Monday, 3 December 2012

IMGAME fieldwork

This semester has seen an impressive amount of IMGAME fieldwork and, as usual, our Local Organisers have proved to be thoroughly excellent. Since the end of October we have run Imitation Games on religion in Cardiff, Trondheim and Helsinki, with a trip to Rotterdam scheduled for the week beginning 10 December.

This is an important period for us in two ways. First, the results will provide a crucial test of the method and the underlying theory. The kinds of comparisons we are going to make (and the hypotheses we are trying to test) can be summarised as follows:

  • Cardiff fieldwork: this is a repeat of the fieldwork we did in March 2012, so the results should be very similar (i.e. it is a kind of test-retest reliability measure). By using the same questions at Step Two, but collecting new Pretender answers and then having them judged by new judges, we should get a measure of how stable the IR (Identification Ratio) is over time. We have also tried filtering the questions (i.e. selecting the best 50% of questions based on how they were interpreted by Step One judges in March) to see if this makes a difference to the IR estimate.
  • Scandinavian fieldwork: the intention is to compare the ability of non-Christians in Norway and Finland to pretend to be active Christians with the success of non-Christian Pretenders in Poland and Sicily. We have already recorded IRs of close to zero in the predominantly Catholic countries, so we are now hoping to measure a significant, positive IR in Norway and Finland.
  • Rotterdam fieldwork: this is broadly similar to the Scandinavian fieldwork and also tests the ability of non-Christians in a broadly secular country to pretend to be active Christians. Overall, we expect the IRs in Norway (Trondheim), Finland (Helsinki), the Netherlands (Rotterdam) and Wales (Cardiff) to be roughly similar to each other, and for there to be a statistically significant difference between these four IRs and those measured in Poland (Wroclaw) and Sicily (Palermo).
If the data support these hypotheses, then we will have made a big step forward in developing the method; a rough sketch of how the comparison might be made is given below.
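To make the planned comparisons concrete, here is a minimal sketch in Python of how site-level IRs might be computed and compared. It assumes the convention that the IR is (correct identifications minus incorrect identifications) divided by the total number of judgements, with each Judge’s verdict encoded as +1 (correct), -1 (incorrect) or 0 (don’t know); this definition, the function names and the outcome data below are illustrative assumptions, not part of the fieldwork protocol.

```python
import random

def identification_ratio(outcomes):
    # Each entry is one Judge's verdict on one game:
    # +1 = correct identification, -1 = incorrect, 0 = don't know.
    # IR = (correct - incorrect) / total judgements (assumed convention).
    return sum(outcomes) / len(outcomes)

def permutation_test(group_a, group_b, n_iter=10000, seed=0):
    # Two-sided permutation test for a difference in IRs: pool the
    # outcomes, reshuffle repeatedly and count how often a random split
    # produces a difference at least as large as the observed one.
    rng = random.Random(seed)
    observed = abs(identification_ratio(group_a) - identification_ratio(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(identification_ratio(pooled[:n_a])
                   - identification_ratio(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter  # approximate p-value

# Purely illustrative outcomes, not real fieldwork data.
secular = [1, 1, 0, 1, -1, 1, 0, 1, 1, -1]    # e.g. a Trondheim/Helsinki pool
catholic = [1, -1, 0, -1, 1, 0, 1, -1, 0, 0]  # e.g. a Wroclaw/Palermo pool

print(identification_ratio(secular))        # hypothesis: positive
print(identification_ratio(catholic))       # hypothesis: close to zero
print(permutation_test(secular, catholic))  # small p-value => significant difference
```

A permutation test is used in the sketch because it makes no distributional assumptions, which suits the small number of games collected at each site; any standard test for a difference in proportions would do the same job.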

The second reason this semester's fieldwork is important is that we are using it to test and develop the new software being developed by redweb. This is essential to the long-term viability of the method, as it will make it much easier for new researchers to run Imitation Games: many of the tasks we now perform manually will be automated. This should make the whole fieldwork experience much quicker and also eliminate some of the inevitable data-handling errors that creep in as files are cut, copied and pasted by hand.

We will post provisional results in early 2013, assuming the Judges send their files back to us in time. In the meantime, I must end by, once again, thanking the Local Organisers for their hard work, resourcefulness and generally excellent choice of restaurants!

New Publication

The second Hawk-Eye paper is now available in hard copy. The full reference is:


Collins, H.M. and Evans, R.J. (2012) ‘Sport-Decision Aids and the “CSI Effect”: Why cricket uses Hawk-Eye well and tennis uses it badly’, Public Understanding of Science, Vol. 21, No. 8 (November 2012), pp. 904–921.

It was first published via Online First on 29 July 2011, doi: 10.1177/0963662511407991.

Wednesday, 15 August 2012

Harry Collins elected Fellow of the British Academy

Professor Harry Collins has been made a Fellow of the British Academy for his role in establishing the sociological study of science. More details at: http://t.co/SmCOGyAy

Thursday, 21 June 2012

Imitation Game article in “The Atlantic”


The Atlantic – a premier US magazine for high-end journalism in areas like technology and society – has just published an article on the Imitation Game: http://bit.ly/LkAHCO

The article describes the Imitation Game’s links to the Turing Test, the importance of interactional expertise and the development of the IMGAME project.

Please read it and tell your friends and colleagues!

Wednesday, 13 June 2012

SEESHOP 6 and LO Workshop (Cardiff, 7-10 June 2012)


Last weekend saw the 6th annual SEESHOP conference and the 2nd ever Local Organisers (LO) workshop for the IMGAME project.

The LO workshop was held on 7 June and brought together the postgrads and postdocs who help us run the IMGAME project by booking rooms, recruiting participants and generally making sure that we are able to collect the data we need. The aim of the workshop was to discuss developments in the method, share hints and tips on managing the fieldwork and review some of the preliminary results from the first year.

After the LO workshop, the sixth SEESHOP got underway on Friday 8 June. The conference was a mixture of new participants and old-timers from previous years. The quality of the papers presented was uniformly high and we had lots of really good discussions about expertise and its use in settings ranging from management consultancy, industrial training and the courtroom to the natural and neurological sciences. We also had some lively debates about the relationship between expertise and democracy and on whether or not ‘nudge’ theory could (or should) be used to reduce the chance of maverick or marginal science influencing lay decision-making.
 
The programme is not available on-line at the time of writing but a list of speakers and titles is given below:

Friday 8 June

Paper Session 1: Developing Expertise

Julie Williamson – Experts among us: interactional expertise among management consultants
Simon Williams – Visualization of experiments
Paulo Marques and Rodrigo Ribeiro – Training for development of tacit knowledge

Saturday 9 June

Workshop: DemocraSEE

Alain Bovet – What can SEE bring to the pragmatist approach to the political public
Jean Goodwin – Adopting Walter Lippmann as an ancestor of 3rd Wave
Rob Evans – Habermas, Rawls & Democracy

Paper Session 2: Interactional Expertise

Kathryn Plaisance – Interactional expertise and philosophy of science
Eric Kennedy – A pluralistic approach to interactional expertise

Sunday 10 June

Paper session 3: Expertise Ignored

Robert Jomisko – Reforming science policy reform
Luis Reyes Galindo – Bogus molecular detectors

Paper Session 4: Experts about experts

David Caudill – Reliability standards in law and SEE
Theresa Schilhab – The anatomy of interactional expertise

Paper Session 5: Managing Expertise

Sally Jackson – Design Requirements for Safely Deferring to the Experts
Evan Selinger and Kyle Whyte – Nudging expertise

Friday, 1 June 2012

Experts and Consensus (Bayreuth, 25-26 May 2012)

Another week, another conference! It was the last foreign trip for a while, but a good one to finish with.

The conference was a small workshop organised by Carlo Martini on the topic of Experts and Consensus in the Social Sciences. The meeting was held at Bayreuth University and consisted of 12 papers from a mixture of philosophers, sociologists and economists, in which both the costs and benefits of consensus and different ways of reaching it were explored. The common themes that emerged across the presentations were the importance of institutions in hosting expert interactions, the value of pluralism (and hence of not reaching consensus), and the difficulty of devising procedural rules for combining different judgements from what appear to be ‘epistemic peers’.

The full programme, with abstracts, is available on-line. For what it is worth, the presentations that related most directly to my own interests and thus prompted me to re-consider my own views were as follows:

  • Maria Jimenez Buedo spoke about the difficulty of recognising legitimate expertise in times of controversy and began with the example of Alessio Rastani, who was interviewed by the BBC and prompted a barrage of complaints and speculation about whether he was a real expert or not. The interview is here: watch it and make your own mind up! The BBC news article about the controversy is here. The more general issue is, of course, that even if you accept a realist theory of expertise, the problem of how to recognise these experts is only raised, not resolved.
  • Merel Lefevere presented a paper that responded to Heather Douglas’s claim that scientists (and presumably other experts) have a moral duty to contextualise their advice by making reference to limits and problems that can be reasonably foreseen. The difficulty is figuring out what can be reasonably foreseen, and Lefevere and her co-author (Eric Schliesser) argued that a more pluralistic approach to expertise, particularly by those responsible for aggregating expert opinion or advice, was needed. I must confess that I think the argument just shifts the problem from defining what can be ‘reasonably’ foreseen to what counts as ‘reasonable’ pluralism (i.e. it doesn’t solve the Problem of Extension), though I am sympathetic to the general idea that advice should reflect the diversity of views available within the relevant expert community.
  • Frank den Butter, whom I first met 15 years ago at the 10th Anniversary Congress of the Tinbergen Institute, presented a paper that drew on his experiences of working with policy-makers in the Netherlands. The paper stressed the need for ‘compromise’, rather than ‘consensus’, and argued that this was a critical factor in reaching agreement about policy options. He illustrated the argument with examples of four consultation exercises based on stakeholder ‘matching zones’ and argued that seeing these events as a ‘game of trust’ – in which repeated face-to-face contact plays an important role – was crucial to their eventual success. In many ways, these stakeholder exercises could be seen as ways of institutionalising the ‘political phase’ of technological decision-making in the public domain.
  • Carlo Martini focused on the technical phase, rather than the political phase, in a talk that compared the way expert groups were selected based on the nature of the task they had been set. In doing so he emphasised a theme that emerged several times over the two days, namely the importance of institutional design to the problem of expertise. In this case, the two contrasting examples were the Bank of England’s Monetary Policy Committee and the US Boskin Commission on the CPI.
  • Laszlo Kosolosky also raised issues that relate to the distinction between the technical and political phases. In arguing that consensus might be over-rated, in the sense that disagreements and their rationales might also be important, he distinguished between an academic consensus and an interface consensus. This struck me as particularly interesting as it provided a way of clarifying something about the SEE approach to expertise, namely that there is an expert community (scientific or otherwise) in a particular domain that might have reached a consensus about some topic. In Kosolosky’s terms this would be an academic consensus, though in old-fashioned SSK we might call this a core-group. In contrast, the interface consensus relates to the collective views of a diverse group of experts, each perhaps representing different core-groups, who act as advisers (i.e. at the interface between expert groups and policy-making). Because such pluralistic groups are less likely to reach consensus, especially if the concept is taken to imply a greater degree of buy-in than simple agreement, Kosolosky argues that what is important is not so much the outcome (consensus or not) but an agreement on the procedures to be followed when managing interactions at the interface. In other words, it is the institutional arrangements that are crucial rather than the achievement of consensus itself.

Of course, this is not to say that the other presentations were not interesting. They were, but the ones listed above are the ones that raised questions relating directly to the Studies in Expertise and Experience perspective.