Total Drek

Or, the thoughts of several frustrated intellectuals on Sociology, Gaming, Science, Politics, Science Fiction, Religion, and whatever the hell else strikes their fancy. There is absolutely no reason why you should read this blog. None. Seriously. Go hit your back button. It's up in the upper left-hand corner of your browser... it says "Back." Don't say we didn't warn you.

Thursday, May 14, 2009

"Tell a Physicist to Suck It" Day Returns!

Some significant time ago I suggested that physicists were not as different from us lowly social scientists as they might like to believe. Sadly, however, I had little evidence to back this contention and had to rely on my usual scintillating intellect to make the point. Obviously, that didn't go well.

Today, however, I have run across a paper that, I think, accomplishes my objective much more effectively. It's a short piece from American Psychologist that compares research progress in physics to progress in the social sciences. And the results are not exactly what you might expect.

The paper is titled "How Hard is Hard Science, How Soft is Soft Science? The Empirical Cumulativeness of Research" (Hedges, Larry V. 1987. American Psychologist 42(2): 443-455) and it uses statistical meta-analysis techniques common to both physics and psychology to compare the empirical cumulativeness of particle physics to the empirical cumulativeness of several sub-areas of psychology. For those who are curious, the author (Hedges) argues for two types of cumulativeness: theoretical cumulativeness, being the degree to which theory gradually builds on, elaborates, and ultimately supplants earlier ideas, and empirical cumulativeness, or the extent to which research results are replicated through time. Given that the former type of cumulation is fairly subjective, Hedges focuses on the latter. Or, more precisely:

Experimental results frequently can be expressed as a numerical estimate of a parameter in a theoretical model, such as a mass, an energy, a correlation between variables, or a treatment effect. The consistency of these numerical estimates across replicated experiments can be assessed. A comparison of the empirical consistency of the results of replicated experiments in physics (as an example of a physical science) and in psychology (as an example of a social science) is the subject of this article.


His reasons for carrying out this exploration are more or less what you might expect- that we social scientists have a tendency to kick ourselves (when we're not being kicked by those meanies from the chemistry quad) because we're not a real science like physics or chemistry:

Psychologists and other social scientists have often compared their fields to the natural (the "hard") sciences with a tinge of dismay. Those of us in the social and behavioral sciences know intuitively that there is something "softer" and less cumulative about our research results than about those of the physical sciences. It is easy to chronicle the differences between soft and hard sciences that might lead to less cumulative research results in the soft sciences. One such chronicle is provided by Meehl (1978), who listed 20 such differences and went on to argue that reliance on tests of statistical significance also contributes to the poorer cumulativeness of research results in the social sciences. Other distinguished researchers have cited the pervasive presence of interactions (Cronbach, 1975) or historical influences (Gergen, 1973, 1982) as reasons not to expect a cumulative social science. Still others (Kruskal, 1978, 1981) have cited the low quality of data in the social sciences as a barrier to truly cumulative social inquiry. These pessimistic views have been accompanied by a tendency to reconceptualize the philosophy of inquiry into a format that implies less ambitious aspirations for social knowledge (e.g., Cronbach, 1975; Gergen, 1982). [emphasis added]


Remind anyone of anything? So, the question obviously goes, what did he find? Well, without boring you with excessive detail,* he finds that physics and psychology enjoy the same degree of empirical cumulativeness. Yes, that's right: physics doesn't produce any more precisely replicated results than we inferior social sciences do:

What is surprising is that the research results in the physical sciences are not markedly more consistent than those in the social sciences. The notion that experiments in physics produce strikingly consistent (empirically cumulative) results is simply not supported by the data. Similarly, the notion that experiments in the social sciences produce relatively inconsistent (empirically noncumulative) results is not supported by these data either.


So does this mean that we should march on over to the physics building and taunt them? Nah. For one thing they have lasers. For another thing, the physical sciences do have an impressive record of success and I don't think we should take these findings to suggest that they don't.

But maybe, just for today, we should use this as a reason to remind ourselves that just because we don't work with capacitors doesn't mean we're not pretty cool scientists ourselves.


* Too late.


5 Comments:

Blogger Mister Troll said...

Drek,

As a real, live physicist, I have to say... your new holiday kicks ass :-) I'll have to check the online greeting card sites for something appropriate to send to several of my friends.

Anyway, I thought I'd have time to dig up the article, but alas, I have enough time to read blogs, but not enough to do that (oh, the irony). However, the argument---as given by your summary---seems... a bit flawed.

First, in the corner for the "Hard Sciences", is particle physics. A bit... specific? And then, for the "Soft Sciences", some sub-disciplines of psychology. Again, what? (Also, psych isn't so low on the scientific Mohs scale...)

I'm also baffled by the definitions used to define hard vs soft. I have always taken the terms to mean [trending to] quantitative vs qualitative, or more simply, physical vs social. I may be unaware of the extensive theoretical perspective which the authors bring to the study, but... I don't think that's what people mean when they say the Hard/Soft Sciences...

Furthermore, how in the ding-dong blazes can anyone tell how often experimental results are replicated? The reality of publication is that repeated confirmations won't be published. The reality of scientific investigation is that experiments to specifically test certain concepts (or re-measure certain numbers) won't be carried out very often.

For example, I don't measure the electron's spin in my research, but the fact that it has spin-half is fundamental to most of my published results. I'd call that empirically cumulative, but I don't see how to quantify the frequency with which I've confirmed the electronic spin. (Zero: I haven't ever measured it in my research. Several: number of papers I've published for which the spin of e- must be 1/2. Thousands: number of measurements I've made in the lab for which the spin of e- must be 1/2.)

Or, if we're talking about particle physics, let's count the number of papers measuring the lifetime of some obscure nuclear resonance (n=not damn many), and compare to the number of times someone has performed fMRI on some particular brain lesion (m=not damn many, therefore both fields are equally cumulative!).

It sounds, frankly, like the authors are trying to quantify (ooh, that bad word) things that aren't fundamentally quantifiable. And that is, or ought to be, a no-no.

Anyway, if you think I've missed the point, or if you think I'm wrong, please post a brief reply ("physicists still suck" or something), and I will find (eventually) the time to dig up the article.

As for me, I don't think the hard sciences are inherently superior. More quantitative, yes, usually, but that's not a value judgment (though many hard scientists do make it a value judgment). In fact, after I got the PhD, I had a few days of feeling life was very purposeless. I resolved to go back to school to get an MS in one of the social sciences. Sadly, Mrs Troll put the kibosh on that, but I still kind of want to.

Friday, May 15, 2009 11:38:00 AM  
Blogger Drek said...

Hey Mister Troll,

I'll reply with your comment interspersed for clarity:

Anyway, I thought I'd have time to dig up the article, but alas, I have enough time to read blogs, but not enough to do that (oh, the irony). However, the argument---as given by your summary---seems... a bit flawed.

Actually, I agree. In perfect fairness, however, the author doesn't pin his paper on the idea that it definitively settles the issue so much as on the claim that it is possible to make a comparison of this sort. So, it's a methods piece moreso than a substantive piece. I go off in a related direction mostly to make a humorous point.

First, in the corner for the "Hard Sciences", is particle physics. A bit... specific? And then, for the "Soft Sciences", some sub-disciplines of psychology. Again, what? (Also, psych isn't so low on the scientific Mohs scale...)

The author justifies the use of particle physics as an area whose results are sufficiently fundamental to other research as to invite significant precision and replication. I think he also makes a claim for a status rank in physics that places particle physics relatively high, but I can't speak to that as I know nothing about the discipline's pecking order. In terms of psych, he used a variety of sub-areas that have a range of "hardnesses," since psych is a fairly heterogeneous discipline. The "softest" of these sub-areas- Educational Psych- actually enjoys cross-over with Sociology. That said, however, EdPsych is hardly Sociology's most quantitative sub-area.

I'm also baffled by the definitions used to define hard vs soft. I have always taken the terms to mean [trending to] quantitative vs qualitative, or more simply, physical vs social. I may be unaware of the extensive theoretical perspective which the authors bring to the study, but... I don't think that's what people mean when they say the Hard/Soft Sciences...

You're placing your finger on a central difficulty in many kinds of science- how to define the critters under study. Presumably we generally associate physical with quantitative and social with qualitative- and it is true that social science by and large includes more qualitative research. At the same time, I'm a social scientist and spend a lot of time talking to mathematicians and computer science folks. Does that make me "hard" or "soft"? In any case, I think the author is mostly talking about "hard" and "soft" as a way of motivating the discussion and he takes the idea that physics is more rigorous than psychology as a sort of primitive concept to be studied.

Furthermore, how in the ding-dong blazes can anyone tell how often experimental results are replicated? The reality of publication is that repeated confirmations won't be published. The reality of scientific investigation is that experiments to specifically test certain concepts (or re-measure certain numbers) won't be carried out very often.

You'll want to read the article for this one. The short answer is he's only using results that appear in the journals in some fashion- which either means they agree with past research or, alternatively, that they disagree in an interesting way. For physics he's basing a lot of his data on the meta-analyses produced by the Particle Data Group.

For example, I don't measure the electron's spin in my research, but the fact that it has spin-half is fundamental to most of my published results. I'd call that empirically cumulative, but I don't see how to quantify the frequency with which I've confirmed the electronic spin. (Zero: I haven't ever measured it in my research. Several: number of papers I've published for which the spin of e- must be 1/2. Thousands: number of measurements I've made in the lab for which the spin of e- must be 1/2.)

Looks to me like he's basing it on specific attempts to measure a quantity. That said, the issue you describe cuts both ways as there are some incredibly robust processes that appear in social scientific research even when we don't measure them deliberately.

Or, if we're talking about particle physics, let's count the number of papers measuring the lifetime of some obscure nuclear resonance (n=not damn many), and compare to the number of times someone has performed fMRI on some particular brain lesion (m=not damn many, therefore both fields are equally cumulative!).

Ah. This may be a source of confusion- he's not so much measuring sheer number of replications as the degree of agreement between them. Essentially, he's interested in the standard errors surrounding our estimates rather than in the n's that produce those estimates. He's arguing that if you can't agree on an answer then in some sense you aren't cumulating.

It sounds, frankly, like the authors are trying to quantify (ooh, that bad word) things that aren't fundamentally quantifiable. And that is, or ought to be, a no-no.

Since when is "quantify" a bad word? Dude, I use a lot of math in my work. I think I would agree with you that the characteristics that distinguish physics from psychology are not all amenable to quantification. On the other hand, there's no reason you can't compare estimates of some theoretically important quantity in each discipline and see which one ends up with greater dispersion.
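(For the curious, here's a toy sketch of what "comparing dispersion" can look like in practice. The numbers and the helper function are made up for illustration- this isn't Hedges' exact procedure- but the statistic it computes, a weighted chi-square per degree of freedom, is a standard way to ask whether replicated estimates agree about as well as their error bars say they should. Values near 1 mean "as consistent as advertised"; much larger values mean the replications disagree more than their standard errors can explain.)

```python
def consistency(estimates, std_errors):
    """Return (weighted mean, chi-square per degree of freedom) for a set of
    independent estimates of one quantity, each with its own standard error."""
    # Weight each estimate by the inverse of its variance.
    weights = [1 / se**2 for se in std_errors]
    wmean = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    # Chi-square: squared deviations from the weighted mean, in units of
    # each estimate's own uncertainty.
    chi2 = sum(w * (x - wmean)**2 for w, x in zip(weights, estimates))
    dof = len(estimates) - 1
    return wmean, chi2 / dof

# Hypothetical replicated measurements of a single quantity:
mean, agreement = consistency([10.1, 9.8, 10.3, 10.0], [0.2, 0.25, 0.2, 0.3])
```

Run the same calculation on replicated estimates from two disciplines and you can compare how consistent each field's replications are- which is, in spirit, the comparison the paper makes.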

Anyway, if you think I've missed the point, or if you think I'm wrong, please post a brief reply ("physicists still suck" or something), and I will find (eventually) the time to dig up the article.

I don't think you're wrong, but I do think that you might benefit from reading the article. Drop me an e-mail if you want and I'll send you a .pdf. Importantly, however, I have great respect for physics and was really just having fun with this article. Neither the author of the original article nor I was claiming to definitively show that physics and psych are equally awesome. Rather, I think this article should be taken as suggesting that psychology may be more robust than is usually thought. Put differently, the common perception of psych as a softer, less reliable science may not be as close a match to reality as usually assumed.

I find it useful to point this out less because I want to convince physicists to take us seriously (First off, you're starting to and, secondly, that isn't really important) and more because some folks in my own discipline have this silly notion that just because social science is difficult we should give up on doing it well. My response to this is "bullshit". Nothing worth doing is easy and figuring out how we operate socially is definitely worth doing.

Friday, May 15, 2009 12:23:00 PM  
Blogger Mister Troll said...

Drek,

I did find a few minutes to skim the article in question (but it's from issue 5, not 2... /picky)

So, it's a methods piece moreso than a substantive piece.

About this paper not being substantive... now that I have to agree with! :-) I kind of want to eviscerate this paper, but the author did address criticisms at the end. I felt those criticisms outweighed the rest of the paper; the author/editors/reviewers did not.

I think he also makes a claim for a status rank in physics that places particle physics relatively high.

In terms of pecking order, the author is probably right. Particle physics is probably at the top. But... who cares? The author made a rather insulting comment about the "best and brightest" physicists "presumably" being in particle physics. Even if true, it's utterly irrelevant to the thesis of the paper. The rest of us poor, dumb physicists apparently are more likely to screw up our measurements and thus have less disciplinary replicability?

Since when is "quantify" a bad word?

My sarcasm didn't come through correctly in what I wrote. I don't think it's a bad word. But many Natural Scientists do sneer at the Social Sciences, and I would argue the issue is one of quantitation. The Social Sciences include many less-quantitative results. Thus, the sneering and also the tendency [of social scientists] to kick [themselves] because [they're] not a real science.

(Let me again emphasize that I personally like the social sciences. I do, perhaps wrongly, perhaps not, think they are less quantitative in general, but that is not a bad thing.)

I think I would agree with you that the characteristics that distinguish physics from psychology are not all amenable to quantification.

Here we agree, but...

On the other hand, there's no reason you can't compare estimates of some theoretically important quantity in each discipline and see which one ends up with greater dispersion.

...and here we don't. The problem with this comparison is both definitional and methodological (what are you comparing, and how can you actually do it). Having made the comparison, what does it mean?

Finally, you neglected to link to the best PhD comics strip, which totally explains the social sciences: http://www.phdcomics.com/comics/archive.php?comicid=908

Tuesday, May 19, 2009 6:38:00 AM  
Blogger Mister Troll said...

Drek,

I was surprised to run across a mention of this exact article in the Skeptical Inquirer (May-June 2009, p. 28). The author of that piece seemed to be much of your opinion. My university library has an online subscription; the same might be true at yours, if you're interested.

-MT

Saturday, June 06, 2009 7:30:00 PM  
Blogger Drek said...

Hey,

I subscribe, actually, and noticed that as well. Interesting article and, obviously, better written than my own!

As a side issue, I didn't respond to your last comment only because I think it boils down to some basic differences in opinion that aren't amenable to continued discussion. I think the author was trying something new and reasonable people can disagree about the utility of the result. I enjoy your comments, but know that the internet can have a polarizing effect, and so decided to just stand pat on our discussion and let others make up their own minds.

Thanks for the shout-out about S.I., however!

Sunday, June 07, 2009 7:41:00 AM  
