Sunday, March 3, 2013

Judgment without "facts"


A jury was unable to reach a verdict at a recent high-profile trial in which the wife of a disgraced ex-politician was accused of obstruction of justice. The jury came in for much ridicule over the questions they asked of the judge while deliberating. The lawyer for the prosecution,

“questioned whether the case could continue. ‘I don’t ever recollect getting to this stage in any trial – even for more complicated trials than this – and after two days of retirement a list of questions of this very basic kind illustrating at least some jurors don’t appear to have grasped it,’ he said.”

I myself do not share the view that these were all silly or irrelevant questions, although one was somewhat funny and got an equally funny answer in return:

[Jury]: Can you define what is reasonable doubt?
[Judge]: A reasonable doubt is a doubt which is reasonable. These are ordinary English words that the law doesn’t allow me to help you with beyond reasonable written directions.

But my main interest is the jurors’ question that captured the headline in at least one daily newspaper:

[Jury]: Can a juror come to a verdict based on a reason that was not presented in court and has no facts or evidence to support it either from the prosecution or the defence?
[Judge]: The answer to that question is firmly no. That is because it would be completely contrary to the directions I have given.

But in assessing a situation we often rely on evidence that is not “factual” in the literal sense but may be factual in that it speaks to our faculties of judgment. It is absurd to believe that we do not frequently come to doubt whether someone is telling the truth simply by studying their body language, by the hesitation in their voice, or because, when we gaze into their eyes, something jars with the ‘factual’ story being told.

I myself have been a juror on two occasions and can testify that such signals, though they are not facts presented in court and do not constitute evidence presented by the prosecution or the defence, nevertheless play an important role in reaching a decision. I believe that, in addition to the evidence presented in court, my co-jurors used the same or similar signals in reaching our common verdict.

I have also asked judges whether, when presiding over a case, they use visual cues that are not facts presented in court to reach a judgment as to whether the defendant is innocent or guilty. They have always answered that such cues play an important role. Of course, in most such cases the judge can leave the final verdict to the jury, but there have been cases where the judge has disagreed with the jury. Somerset Maugham even wrote a very interesting short story about the consequences of such a divergence of views, in which the judge in a case meets the (acquitted) accused years later at a dinner party (I read it years ago and cannot now recall its name).

The final verdict must depend significantly upon whether a witness or the accused is telling the truth or lying and, in judging that, many factors besides the evidence presented in court come into play – in the form of signals that the brain receives and interprets but which do not constitute part of the formal body of evidence.

My point is that the brain is very good at picking up signals that do not constitute ‘evidence’ in the legal sense but are nevertheless vital in reaching judgment.

Nor am I saying something new. Everyone knows that we make judgments based on objectively non-factual but subjectively critical evidence, nearly every day.

Hence, to ridicule the jurors’ question given above is silly. It is indeed as silly as the statement attributed to Picasso, which I alluded to in a previous post, that when we love a woman we don’t start by measuring her limbs. The truth is of course otherwise; we actually start by making very detailed, often nearly instantaneous but perhaps partially unconscious, measurements of a great deal before we fall in love.




Thursday, January 10, 2013

The "New Phrenology"

 
Labeling something often suggests a haste to catalogue it and be done with it. It also implies some level of understanding of that which is labeled. But labels, especially pejorative ones, also commonly help to insulate one from the need to enquire further. Why would anyone who has labeled something as “trash”, for example, be bothered to read or learn anything further about it?

Every now and then, someone who is seemingly exasperated by the profusion of neurobiological facts describing a localization of some function or other in the brain labels the whole enterprise as nothing more than a manifestation of the “new phrenology”. Nor does such labeling come only from those outside the field; sometimes the same dismissive label is used by neurobiologists themselves.

Essentially, (the old) phrenology supposed that mental faculties are localized in the brain and that an especially well-developed mental faculty would result in a corresponding bump in the skull. By measuring the skull and its bumps, one would therefore be able to infer something about character, moral qualities and personality. Its originator was Franz Josef Gall, who took refuge in France after his ideas had been disapproved of in Austria and who was shocked when the Institut de France, at the instigation of Napoleon, did not elect him to membership.

There were some good reasons for dismissing phrenology, and especially the use that was later made of it to promote racist ideas. But there is nothing wrong with its implicit assumption that the brain is the seat of the mind.

Those who label the tendency of modern neurobiological research to find that special cortical areas are associated with distinct functions as nothing more than a manifestation of the “new phrenology” do both the subject and themselves a disservice. That distinct cortical areas are associated with distinct functions does not mean that they can act in isolation; indeed, all cortical areas have multiple inputs and outputs, both to other cortical zones and to sub-cortical stations, and the healthy activity of the entire system is critical for a specialized area to execute its functions. It is naive to suppose, as some (non-scientists) have, that an area specialized for a particular function, for example colour vision, can be isolated from the rest of the cortex or the brain and still mediate the experience of colour. No biologist has ever made such a claim, and those outside biology who make it know nothing of biology or the brain.

It is equally untrue that the whole of the brain is involved in all its functions, as was believed in the 19th century. No one could possibly deny that there is an area of the brain that is specialized for vision or for some attributes of vision, such as visual motion; nor can anyone deny that there are areas of the brain that are specialized for audition. Nor would any reasonable person want to deny that lesions in these different zones of the cerebral cortex have different consequences.

More recently, with the advent of brain imaging studies, neurobiologists have shown that even the experience of subjective mental states does not mobilize the entire brain with equal intensity. Rather, the results of such studies commonly show that a set of areas is especially involved in one subjective state or another. But activity in the areas comprising that set does not necessarily correlate with only one subjective state. An area within the set may do “double” or “multiple” duty and be active during the experience of several subjective states, even contradictory ones. Nevertheless, one commonly finds that the set of areas especially active during one subjective experience is different from the set active during another, or during others, even if the two share common areas.

This, of course, is a far cry from the claims of those who, usually anxious to stigmatize the findings of neurobiology, write of neurobiologists as having discovered the “love spot” or the “beauty spot” in the brain, or who dismiss them as nothing more than “modern phrenologists”.

The “unity” of mind

A fertile terrain for questioning the localizationist claim – that cortical areas with characteristic histologies and specific sets of inputs and outputs can be associated with special functions – lies in the so-called “unity of mind”, which makes us act holistically.

But let those who ridicule the efforts of neurobiologists consider what has been the greatest success of cortical studies on the one hand and what has been their greatest failure on the other.

The greatest success – one that links the history of cortical neurobiology in an almost unbroken thread – is the association of special functions with distinct cortical areas. This theme has run through cortical studies since the day in April 1861 when Broca announced that the third left frontal convolution is critical for the production of articulate language.

The greatest failure has been its inability to account for how these specialized areas “interact to provide the integration evident in thought and behavior”, as the American neuropsychologist Karl Lashley put it in the 1930s. He added, however, that just because the mind is a unit, it does not follow that the brain is a unit.

Those who dismiss all these “localizationist” studies as nothing more than a “modern phrenology” may want to ask why neurobiology has failed so miserably just when it might have been expected to succeed spectacularly in light of its findings.

Perhaps a good first step in this enquiry would be to stand back – even if momentarily – and ask whether the mind is an integrated unit after all. The answer may come as a surprise.

Thursday, January 3, 2013

Old age and the biology of hate

In his last speech to the House of Lords as Archbishop of Canterbury, Rowan Williams lamented society’s attitude towards older people. He said: "It is assumptions about the basically passive character of the older population that foster attitudes of contempt and exasperation, and ultimately create a climate in which abuse occurs" and referred to estimates that a quarter of the older population is abused in one way or another.

This comes against a background of ghastly stories of the mistreatment of older people by their nurses in old people’s homes, often verging on outright cruelty, stories that are repeated annually throughout the country and probably mirror similar stories in many other countries as well.

I believe that the Archbishop showed wisdom and compassion in choosing the theme for his last speech and in speaking up for older people, but he did not go far enough in his analysis.

I have long wondered whether we are not biologically programmed to dislike and even hate older people simply for being older, just as we seem to be biologically programmed to love vulnerable and defenceless young children simply because they are young. The latter elicit our attention and care, while the former elicit our avoidance and, where occasion permits, our cruelty and mistreatment.

I have no scientific evidence for this belief, though there might be such evidence somewhere. But if my analysis is correct, or turns out to be correct, then it is not our “assumptions about the basically passive character” of older people that lead to their mistreatment, as the Archbishop believes, but something biological and therefore much more difficult to control.

Of course, the hatred is probably more easily directed against those older people who are not members of the family, or at least the immediate family. But even in that context, older people are not immune. In the Prologue to his autobiography, Bertrand Russell wrote that one of the things that had made him suffer was the sight of “helpless old people a hated burden to their sons”. 

If we are biologically programmed to dislike older people at best and hate them at worst, especially when they are not members of our family, then it is right, as the Archbishop suggested, that they should be given some kind of state protection, for example by appointing a national Older People’s Commissioner.

Society does, after all, police other biological urges that are difficult to control. It is perhaps time to introduce severe punishment for those who heap so much misery on the helpless in our society.

But that of course leaves another aspect which society simply cannot control. The dislike of old people, and their avoidance, are no doubt the source of much misery and alienation for them, and I just don’t know how society can combat that. We cannot, after all, legislate against dislike, though we should be able to do so against its consequences.

Monday, October 22, 2012

The Pursuit of Sensational Science

Much has been written recently about sensationalizing science, by hyping things up, exaggerating the importance or novelty of new findings and giving over-simplistic accounts of them.

Up to a point scientists themselves are responsible for this. Referring to a molecule implicated in certain behaviours as the “moral molecule” obviously invites criticism, and has done so. But referring to the Higgs boson as the “God particle” has not attracted the same criticism, for reasons that are difficult to understand.

The term “God particle” was not coined by scientists. As I understand it, a distinguished scientist wrote a book entitled “The Goddam Particle”, but his publishers thought the title too offensive. Of course, what the title implied was “that goddam particle is so elusive, and therefore so difficult to find”. But apparently the publishers changed the title to “The God Particle”, and the name has stuck ever since. The God particle of course has a different sense altogether – even a religious sense. Yet I have not heard scientists disavowing the name, with all its sensationalist associations. Instead they seem to have reveled in it.

And just recall how the discovery of the Higgs boson, or the God particle, was announced recently. A press conference was called some weeks in advance, keeping everyone guessing as to what would be revealed. Rumours became rife and were quashed, adding to the tension and the sensationalism. Scientists seemed to revel in it.

Compare that with the announcement of the current status of junk DNA, which was not accompanied by any such fanfare. Instead, the results were published in two or three journals. The discovery nevertheless attracted widespread attention in the press and was on the whole well summarized for the lay public – a far better example of scientific conduct.

To an equal extent, the current scientific culture is fertile ground for sensationalism and indeed encourages it. All scientists, especially young ones aspiring to a good appointment, yearn to publish in “high impact” journals. I recently saw an advertisement for a research position at a very distinguished university. It said that “the successful candidate will have a proven ability to publish in high impact journals”.

Notice that it said nothing about a proven ability to do good or rigorous science, but only to publish in high impact journals. And it is common knowledge that what gets published in high impact journals is highly variable, and not always the best science.

And how does one publish in high impact journals?

Well, one way is to do good science. But another way is to do sensational science.

Some of these high impact journals now screen submissions before sending them out for peer review and informed opinion. I know of one journal whose rejection letters read as follows:

“This is not meant as a criticism of the quality of the data or the rigor of the science, but merely reflects our desire to publish only the most influential research.”

Read it well, for it says it all: the science may be rigorous and good but, in our opinion, it is not influential research.

In other words, we only want to publish the most sensational research.

And this is not the only journal that pre-screens articles before deciding to send them out for peer review.

I do not believe that this does science any service.

Who, after all, decides what is influential research but future generations?

And so we come full circle:

To get a good job, you have to publish in high impact journals.
To publish in high impact journals, you have to do sensational science. 
Good science helps but is not the determining factor. It must, in the opinion of those who may not be especially versed in the subject, be influential.

And the boundary between sensational science and exaggeration is….rather thin.

Friday, September 21, 2012

Good money for bad art

This is getting better and better!

A really shabby and botched restoration of a minor work in a small church in Zaragoza, Spain, by an unknown artist (?)/restorer (?), Cecilia Gimenez, was hailed by many as a real contribution to contemporary art, although it is only fair to add that many others laughed at it. I believe that a description of it as "an intelligent reflection of the political and social conditions of our times" is not far off the mark (lots of laughs here).

After attracting so much attention, it has of course become a celebrity – and celebrity status ultimately leads in only one direction – money, lots of it.

And according to today's Guardian, this is exactly what is happening.

Now, after the church started to rake in the cash by charging the multitudes who came to view this bizarre restoration, which makes Jesus look like a hairy monkey, the restorer herself wants a slice of the cake. After all, at 4 euros per admission, this is not an insignificant sum. Hilarious.

See, I told you, if a curator of contemporary art had been wise and bought the work outright (when it would have presumably been sold for a song), all this money would now be flowing in a different direction.

Thursday, September 20, 2012

Philip Roth, Wikipedia and Oscar Wilde


Philip Roth was understandably annoyed when he tried to correct a mistake in the Wikipedia entry on his book The Human Stain. Apparently, the site’s editors did not want to publish his correction about who had inspired the book. While acknowledging that the author of a book is an authority on his or her own book, they nevertheless wanted a “secondary source”. Roth addressed them in an open letter published in the New Yorker, and they have since apparently accepted that Roth is an authority on his own book and corrected the mistake.

Of course, the delusion is to suppose that there are necessarily any “secondary sources” in Wikipedia or that there ever can be, given the nature of the enterprise. Many who write entries for it are, naturally enough, interested in the topic about which they write. But many are also interested in themselves and in projecting their own contributions. This results in self-serving and inaccurate articles. In that sense, they are not “secondary sources”, weighing the facts dispassionately or presenting different sides of an argument or different interpretations.

I must say that I frequently consult Wikipedia for this or that, and think of it as a very worthwhile enterprise, one which at the very least guides those who want to learn more. But I never accept its authority on any important matter. It is sheer folly to rely on Wikipedia in any work of scholarship. Of course, one can modify Wikipedia entries. But is it worth the time and effort, when you know that it is not necessarily reliable and that, in a work of scholarship, you can never quite depend on it?

I have alluded to this before. What the present spat between Wikipedia and Philip Roth highlights is the illusion of “secondary sources”.

Perhaps Wikipedia should adopt as its motto a saying attributed to Oscar Wilde (I read it somewhere but cannot remember where and cannot be sure that the words below are exactly what he wrote, but they are pretty close):

“If you tell the truth, sooner or later you are bound to be found out”

Wednesday, September 12, 2012

Titian and Clint Eastwood


The small but great National Gallery exhibition of three Titian masterpieces displayed side by side for the first time since the 18th century was a real delight. One of the three, The Death of Actaeon, has been at the National Gallery for years; the other two (Diana and Callisto and Diana and Actaeon) were only recently purchased for the nation for about £95 million and will be exhibited alternately in Edinburgh and London.

Actaeon is of course doomed from the moment he sees Diana (the goddess of hunting) bathing in all her naked splendour. And the curators have used the occasion to have a real naked woman bathing, whom one can only see through a keyhole. It is quite an imaginative innovation, though it must be tiring for the women (I gather there is a change of women every two hours).

Peeping through a keyhole implies spying on something that is forbidden or at any rate not on public view. It is a fitting complement to the voluptuous and erotic masterpieces of Titian (they were in fact exhibited for men only in the king’s private apartments in the royal palace in Madrid).

The penalty for spying visually on Diana was death. And the penalty for spying on a naked woman through a keyhole is…..?

Isn’t contemporary art designed to make us think about such things, about our relation to the woman seen through the keyhole in this instance? Or about being a peeping Tom in a public place? Or about exhibitionism? Or about secret fantasies? 

This was certainly more interesting than gazing vacuously at beach pebbles and filing cabinets.

While this exhibition was on, another potential exhibit for a museum of contemporary art came to my notice, though no one has commented on it in that context, as far as I can tell.

It was Clint Eastwood talking to an empty chair (it starts at about 03:33). He was addressing the chair as if President Obama had been sitting in it. But there was of course no President Obama.

What would one call it – a Surrealist creation, a Dadaist creation? Conceptual art?

This dialogue between a living actor and an absent President – who could, in the imagination, be almost anyone – is also more interesting than beach pebbles and filing cabinets. In fact, I have actually seen empty chairs in museums of contemporary art that do not arouse nearly as much interest as Clint Eastwood’s empty chair, which is a good deal more imaginative.

I suggest that it would be a good exhibit at a museum of contemporary art. It stimulates the imagination more than the current empty chairs in some art museums. Some museum should rush to buy the copyright. It has, after all, attracted more than half a million viewers in about two weeks – and hence must be the envy of many a gallery.

And those who revile Clint Eastwood’s creation must at least acknowledge that it disturbed them enough to want to revile it.

In other words, it made them think.

Which is a good deal more than can be said for many exhibits in museums of contemporary art.