“AMERICA IS USED TO BLAMING INDIVIDUALS FOR SYSTEMIC PROBLEMS. LET’S TRY TO AVOID THAT THIS TIME.”
Posted by Andrew on 13 April 2020, 2:04 pm
I like this news article by Aviva Shen:
> In normal times, policing has been America’s primary response to a > host of societal ills that cannot be solved by punishment. > Homelessness, mental illness, violence, racism, poverty, and toxic > masculinity are all fed through the criminal justice system, rather > than getting addressed in any meaningful way, never mind resolved. > Now we expect police officers to contain this virus for us, even as > they too become infected. Even though trying to arrest our way out > of this just puts everyone in more danger. . . .>
> The problem with the way the government has failed to contain this > crisis is that now, stopping the coronavirus has shifted to be a > matter of personal responsibility. . . .>
> When the virus first entered the U.S., the people who could have > done something to contain it did nothing. They didn’t even bother > to track it. Meanwhile, South Korea and Taiwan immediately > instituted widespread testing, created “quarantine hotels,” and > deployed their robust social safety nets to ensure everyone could > have basic needs met while staying indoors. Those countries have > seen far lower infection rates than the U.S. . . .>
> Letting fear guide our responses won’t lead to measures that make anyone safer. And when a policy response is designed around the worst of the worst—the exceptions—we get nonsensical and brutal systems. . . .
This reminds me of some things we’ve been talking about lately about the tension between individual and group decision making. To return to Shen’s article, another thing we’ve been thinking about a lot is how wrong many of us have been in our expectations of the coronavirus epidemic. I read this comprehensive news article from a few days ago by Yasmeen Abutaleb, Josh Dawsey, Ellen Nakashima and Greg Miller about how unprepared the country has been: “From the Oval Office to the CDC, political and institutional failures cascaded through the system and opportunities to mitigate the pandemic were lost”:
> It did not have to happen this way. Though not perfectly prepared, > the United States had more expertise, resources, plans and > epidemiological experience than dozens of countries that ultimately > fared far better in fending off the virus. . . . Warnings were > sounded, including at the highest levels of government, but the > president was deaf to them until the enemy had already struck.>
> The Trump administration received its first formal notification of > the outbreak of the coronavirus in China on Jan. 3. Within days, > U.S. spy agencies were signaling the seriousness of the threat to > Trump by including a warning about the coronavirus — the first of > many — in the President’s Daily Brief. And yet, it took 70 days > from that initial notification for Trump to treat the coronavirus > not as a distant threat or harmless flu strain well under control, > but as a lethal force that had outflanked America’s defenses and > was poised to kill tens of thousands of citizens. That > more-than-two-month stretch now stands as critical time that was> squandered.
But it was not just the president. As of January, even through late February, I had no sense of the severity of the problem. Sure, I wasn’t as bad as this guy,
but that’s a pretty low standard.
Why were I and others so wrong? I think because of a natural and unavoidable division of labor, which is that most of the time we think about our own individual decisions, with the assumption that larger organizations will work out group and societal decisions. In January and February, there wasn’t much for any of us to do as individuals, and even the individual decisions we could make—for example, wearing masks or buying extra food for our pantry—were not going to have much direct effect on us and weren’t particularly urgent in any case. But, as we’ve learned, these decisions, in addition to societal decisions such as not to institute mass testing for the virus, were urgent and consequential at the national and global levels.
Nassim Taleb and Joseph Norman put it well:
We’ve been trained and coached and nudged: (a) to think of larger social outcomes in terms of the aggregation of individual decisions, and (b) when thinking about social decisions, to have a very narrow palette of options, pretty much limited to policing, wars, and throwing money at a problem. Recall the newspaper headlines during those couple of weeks in March when it became clear that coronavirus would be a major problem in this country: lots of talk about the stock market, followed by lots of discussion of how many hundreds of billions of dollars would be thrown at the problem. I’m not saying that massive government spending is a bad idea here, just that it was a bit of a simplification to focus on the How much? question, rather than on the What’s being done? question.
Again, my point here is not to criticize others. I’ve pretty much been going through the same sequence of thoughts as a lot of other people on this one. I’m a statistician, but it’s not like in January I was screaming that we need a billion test kits. There were some Cassandras
back then, but I wasn’t one of them. The social science in this post of mine is largely introspective: What were the mental roadblocks getting in the way of clear thinking for me? I think part of it was my implicitly individualistic perspective. Hence my appreciation for the above-linked article by Shen.
P.S. I made the mistake of looking at the comment thread on that Slate article. Oof! Seeing that sort of thing makes me realize how lucky we are to have the commenters we have here.
P.P.S. Speaking of Taleb, I just saw this great line:
> Do not conflate doing science and being nice and polite to scientists.
Filed under Political Science, Public Health, Sociology, Zombies. 17 Comments
CONSIDERATE SWEDES ONLY DIE DURING THE WEEK.
Posted by Phil on 13 April 2020, 1:55 pm
[Figure: Reported coronavirus deaths in Sweden, by date.]
This post is by Phil Price, not Andrew. A lot of people are paying attention to Sweden, to see how their non-restrictive coronavirus policies play out. Unlike most other countries in Europe, they have instituted few mandatory measures to try to slow the spread of the virus. Instead, they’ve taken a ‘softer’ approach, telling people the risks and asking people to make good choices. And people are certainly changing behavior: according to Google’s mobility reports,
as of April 5 the use of transit stations was down 37%, workplaces were down 10%, and retail establishments were seeing 25% less traffic. So there’s definitely some ‘social distancing’ going on, although not nearly as much as, say, Norway (retail down 60%, workplaces down 32%). So…what’s the result? How do deaths in Sweden compare to other countries? Well, on paper they’re doing OK, with deaths doubling every 5 days. That’s not as good as their neighbors (Denmark, Finland, and Norway are all around 6-7 days, and that’s a difference that adds up, or rather multiplies up, over the course of a month or two) but it’s about the same as Belgium and, well, hard decisions have to be made and conceivably the Swedes could feel that this is the right balance of economy versus deaths. But: I don’t trust the numbers coming out of Sweden. See the attached plot of coronavirus deaths by date. I think we can all agree that people are dying on weekends, they’re just not being reported. If this were just some clerical thing, like deaths not being counted until the clerks show up at the office on Monday, then we would expect that either the numbers would be corrected over time, or that there would be a big spike on Mondays when the weekend deaths are counted, but we don’t see either. (The plot is from Worldometers and the
numbers match those that are reported daily in the New York Times). As the world tries to figure out how to manage an end to the shutdown that’s in place in many countries, data from countries like Sweden that are doing things differently should be very valuable. But only if we can trust the data. Anyone have any idea what is going on with the death count in Sweden? I don’t. This post is by Phil.
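As a rough illustration of the two points above (how much a 5-day versus 7-day doubling time compounds over a month or two, and what a day-of-week reporting artifact would look like), here is a minimal Python sketch. It is not Phil's analysis, and the daily counts below are invented for illustration, not actual Swedish data.

```python
# Sketch only: invented numbers, not the real Swedish death counts.
from datetime import date, timedelta

def growth_factor(doubling_days: float, horizon_days: int) -> float:
    """Total multiplicative growth over a horizon, given a constant doubling time."""
    return 2 ** (horizon_days / doubling_days)

print(growth_factor(5, 60))  # about 4096: deaths doubling every 5 days for two months
print(growth_factor(7, 60))  # about 380: the 6-7 day pace of Sweden's neighbors

# Day-of-week check: total the reported deaths by weekday. A persistent weekend dip
# with no compensating Monday spike suggests under-reporting rather than a clerical lag.
start = date(2020, 3, 23)                    # a Monday; hypothetical start date
daily_deaths = [28, 36, 41, 45, 50, 12, 10,  # invented numbers with a weekend dip
                60, 66, 70, 75, 80, 15, 14]
by_weekday = {}
for i, n in enumerate(daily_deaths):
    weekday = (start + timedelta(days=i)).strftime("%a")
    by_weekday[weekday] = by_weekday.get(weekday, 0) + n
print(by_weekday)  # e.g. {'Mon': 88, ..., 'Fri': 130, 'Sat': 27, 'Sun': 24}
```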
Filed under Public Health. 26 Comments
JOHN CONWAY
Posted by Andrew on 13 April 2020, 9:03 am
Around 25 years ago I was at a conference at Princeton, in whatever building housed the math department at the time. One of the sessions looked like it would be kinda boring, so I took a stroll down the hallway and came to a lounge, a cozy little place of the sort that you’ll see in an out-of-the-way corner of a university, with a couple of couches and tables and a shelf with a mix of old books and recent journals. The room was empty except for a couple of students working quietly in a far corner and a heavyset bearded middle-aged man playing with some blocks. I took a seat nearby and he explained what he was doing.
He had 27 identical wooden blocks—they weren’t cubes, I guess you’d call them rectangular parallelepipeds? I didn’t need to know the word for it because I could see the blocks in his hand—along with a wooden box that could hold all of them, if they were fit in just right. If the blocks had dimensions a x b x c, then the box had dimensions (a + b + c) x (a + b + c) x (a + b + c). (Again, this didn’t need to be explained, because the pieces were right there in front of us.) The blocks can just fit into the box in some grid (e.g., (((a, b, c), (b, c, a), (c, a, b)), etc.)), but it’s not just a combinatorial challenge (I’d say a Sudoku-like challenge but this was before I’d heard of that particular puzzle) but also a geometrical challenge, because if you just try to cram the pieces in the box any which way, they’ll interfere with each other. It’s also a cool puzzle because (a) if you can fit the pieces in, there will be room to spare, some empty space in the interstices, and (b) all the 27 pieces are identical, and how cool is that?
The bearded heavyset man did not actually say, “how cool is that?” Instead, he pointed out to me in his English accent that the problem is challenging, and by contrast the two-dimensional version (with 4 identical rectangles) is trivial (as indeed it is, as you can see from a moment’s reflection). He also claimed that the four-dimensional version isn’t hard to solve—somehow you just put together two 2-dimensional solutions—and he said that he didn’t know if the five-dimensional version had a solution. I thanked him and left the room. We never had any introductions; he just jumped in and showed me the puzzle, that was it.
A few minutes later I was running this episode through my mind . . . Princeton mathematician . . . English accent . . . puzzles . . . it must have been John Conway! But I didn’t try to track him down or ask for his autograph or whatever. Why ruin the perfect moment?
Later I was visiting my parents—my dad had a workshop in the basement and I decided to make a version of the puzzle for myself. In the version Conway showed me, each block was about the size of my hand, but to reduce the effort I decided to make something smaller. The only constraint in making these a x b x c blocks is that, if a < b < c, it’s necessary that 4a > a + b + c. Otherwise it’s possible to cheat and squeeze the blocks in four at a time. I chose a, b, c to be in the proportions 4, 5, 6. So I got a big board, sawed it into pieces, sanded them to be just right, and stained them. For the box I got some pieces of plastic and taped them together at their edges. Then I sat down to solve the puzzle. It took me a couple hours! Indeed it’s not so easy. I guess I could’ve done it quicker by taking some notes while I was doing it and working through by process of elimination. In any case, when I finally did solve it, it was satisfying.
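For anyone curious about the arithmetic, here is a small Python sketch (my addition, not part of the original story) that checks a candidate set of proportions: the 4a > a + b + c condition mentioned above, and how much empty space is left when the 27 blocks sit inside the (a + b + c)-sided box. It only checks the arithmetic; it does not search for an actual packing.

```python
# Check the "no cheating" condition and the leftover volume for candidate block proportions.
def check_proportions(a: int, b: int, c: int) -> None:
    assert a < b < c, "expects a < b < c"
    side = a + b + c
    blocks_volume = 27 * a * b * c
    box_volume = side ** 3
    print(f"a,b,c = {a},{b},{c}: box side = {side}, "
          f"4a > a+b+c? {4 * a > side}, "
          f"empty space = {box_volume - blocks_volume} of {box_volume}")

check_proportions(4, 5, 6)  # the proportions used for the homemade version: 135 units of slack
check_proportions(1, 2, 3)  # fails 4a > a+b+c, so blocks could be squeezed in four at a time
```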
I still have that wooden puzzle. It’s in my office. I doubt Conway invented it. But I thank him for showing it to me. Also I thank him for coming up with the game of life. Talk about cool. P.S. Terence Tao reports that
“Conway was fond of hanging out in the Princeton graduate lounge at the time of my studies there, often tinkering with some game or device, and often enlisting any nearby graduate students to assist him with some experiment or other.” So maybe Tao was one of those students quietly working in that far corner. Some good stories in the comments to that post too. P.P.S. Reading Tao’s post more carefully, I notice this > He challenged me to a board game he recently invented . . . > I still remember being repeatedly obliterated in that game, which > was a healthy and needed lesson in humility for me (and several of > my fellow graduate students) at the time. This is a funny story for a couple of reasons. First, Conway invented the damn game; it should be no surprise he could beat you at it, right? Second, when my friends and I were in grad school, we were in awe of the faculty. No lessons in humility were needed. So it’s funny for me to think that these grad students needed to lose a board game in order to get that feeling that we had all along. I guess this is related to the point made by Dick DeVeaux that math is like music, statistics is like literature.
Math students come in with the ability to do everything, but statistics students are aware that there’s a world of things to learn.
Filed under Miscellaneous Science. 6 Comments
USUAL CHANNELS OF CLINICAL RESEARCH DISSEMINATION GETTING SOMEWHAT CLOGGED: WHAT CAN GO WRONG – DOES.
Posted by Keith O’Rourke on 12 April 2020, 3:10 pm
A few weeks ago I was an observer on the OHDSI Covid19 study-a-thon (March 26 – 29). Four days of intensive collaboration among numerous clinical researchers working with previously established technology to enable high quality research with data access to up to 500 million patients.
Current status here. This is a good summary of what happened: “I am extremely proud to see what our community accomplished, but we are well aware that this is merely the beginning stage of a long research agenda,” said George Hripcsak, MD,
MS, the Vivian Beaumont Allen Professor and Chair of the Columbia Department of Biomedical Informatics. “Our international network is committed to continuing work in this area until this pandemic has ended.” In clinical research the devil is in the details, and about a week after the study-a-thon, the first pre-print was ready to share (April 5). Yeah!
Submitted to medRxiv, its appearance was delayed by almost a full week :-( Not ideal!
Now, dozens of other studies by the group need to be finalized and disseminated, but it is worrisome that the usual channels of clinical research dissemination seem to be getting somewhat clogged. What is likely happening is that anyone and their brother who can get their hands on data from 50 or so patients is quickly doing some sort of analysis and submitting a not-so-high-quality paper. Now, I am biased but this international group has access to unbelievably large data sets and a tested out methodology to do better than average studies – COOPERATIVELY. As always, the initial dissemination of a study’s results needs some time for other experts to digest it, raise concerns about what might be wrong and suggest ways to mitigate that. Delays in this process are regrettable.
This post is by Keith O’Rourke and as with all posts and comments on this blog, is just a deliberation on dealing with uncertainties in scientific inquiry and should not be attributed to any entity other than the author. As with any critically-thinking inquirer, the views behind these deliberations are always subject to rethinking and revision at any time.
Filed under Public Health. 4 Comments
THE CHECKLIST MANIFESTO AND BEYOND
Posted by Andrew on 12 April 2020, 9:06 am
A few years ago, I was motivated to write about the intervention and the checklist: two paradigms for improvement,
after reading Atul Gawande’s classic checklist manifesto. My angle was that in statistics we are trained to think about interventions (and studying their causal effects), but intervention is not the only paradigm.
Gaurav Sood sends along the following contextualization of the checklist issue:
> We fail because we don’t know or because we don’t execute on what we know (Gorovitz and MacIntyre). Of the things that we don’t know are things that no one else knows either—they are beyond humanity’s reach for now. Ignore those for now. This leaves us with things that “we” know but the practitioner doesn’t.
> Practitioners do not know because the education system has failed > them, because they don’t care to learn, or because the production > of new knowledge outpaces their capacity to learn. Given that, you > can reduce ignorance by (a) increasing the length of training, (b) > improving the quality of training, (c) setting up continued > education, (d) incentivizing knowledge acquisition, and (e) reducing > the burden of how much to know by creating specializations, etc. On > creating specialties, Gawande has a great example: “there are > pediatric anesthesiologists, cardiac anesthesiologists, obstetric > anesthesiologists, neurosurgical anesthesiologists, …”>
> Ignorance, however, ought not to damn the practitioner to error. If > you know that you don’t know, you can learn. Ignorance, thus, is > not a sufficient condition for failure. But ignorance of ignorance > is. To fix overconfidence, leading people through provocative, > personalized examples may prove useful.>
> Ignorance and ignorance about ignorance, however, are not the only > reason we fail. We also fail because we don’t execute on what we > know. Practitioners fail to apply what they know because they are > distracted, lazy, have limited attention and memory, etc. To solve > these issues, we can (a) reduce distractions, (b) provide memory > aids, (c) automate tasks, (d) train people on the importance of > thoroughness, (e) incentivize thoroughness, etc.>
> Checklists are one way to work toward two inter-related aims: > educating people about the necessary steps needed to make a decision > and aiding memory. But awareness of steps is not enough. To > incentivize people to follow the steps, you need to develop > processes to hold people accountable. Audits are one way to do that. > Meetings set up at appropriate times during which people go through > the list is another way. Sood also shares some interesting notes he prepared on Gawande’s classic checklist manifesto. Here are some excerpts from Sood’s notes: > Portions of the book suggest that this is less about checklists and > about engineering processes that reduce errors. Any process can be > called a checklist—you do X followed by Y followed by Z—but that > is stretching it.>
> . . .
>
> Problem Statement: How do you make sure that people know and are > following the process correctly? > To improve outcomes—study routine failures and how you would amend > the process to improve outcomes.>
> . . .
>
> Americans today undergo an average of seven operations in their > lifetime, with surgeons performing more than fifty million > operations annually—the amount of harm remains substantial.>
> . . .
>
> “On average, the study reported, it took doctors seventeen years > to adopt the new treatments for at least half of American> patients.”
>
> . . .
>
> Concern w/ Some Checklist Implementations And Some Solutions
> – Incentives for following checklists may be weak. How do you get people to follow?
> – social pressure — checklist publicly marked as in a surgery; communicate clearly the issues and evidence on the efficacy of checklists
> – get people to own the checklists—put their name, get their ideas on it — induce accountability
> – checks of whether the stuff was followed and incentives and rewards based on that
> – People may stop using their brain and just follow the checklist — how to put in checklists that clarify that brain cells are imp. and incentivize that
> – Train people to use checklists
Filed under Decision Theory, Public Health. 16 Comments
THE FALL GUY, BY JAMES LASDUN
Posted by Andrew on 11 April 2020, 12:57 pm
I just finished this book. I’d write something about it, but I did a search and found this review by John Lennon from a couple of years ago. I have nothing to add.
Filed under Literature. 7 Comments
GIVEN THAT 30% OF AMERICANS BELIEVE IN ASTROLOGY, IT’S NO SURPRISE THAT SOME NONTRIVIAL PERCENTAGE OF INFLUENTIAL AMERICAN PSYCHOLOGY PROFESSORS ARE GOING TO HAVE THE SORT OF ATTITUDE TOWARD SCIENTIFIC THEORY AND EVIDENCE THAT WOULD LEAD THEM TO HAVE STRONG BELIEF IN WEAK THEORIES SUPPORTED BY NO GOOD EVIDENCE.
Posted by Andrew on 11 April 2020, 9:35 am
Fascinating article
by Christine Smallwood, “Astrology in the age of uncertainty”: > Astrology is currently enjoying a broad cultural acceptance that > hasn’t been seen since the nineteen-seventies. The shift began > with the advent of the personal computer, accelerated with the > Internet, and has reached new speeds through social media. According > to a 2017 Pew Research Center poll, almost thirty per cent of > Americans believe in astrology. . . .>
> In its penetration into our shared lexicon, astrology is a little > like psychoanalysis once was. At mid-century, you might have heard > talk of id, ego, or superego at a party; now it’s common to hear > someone explain herself by way of sun, moon, and rising signs. > It’s not just that you hear it. It’s who’s saying it: people > who aren’t kooks or climate-change deniers, who see no > contradiction between using astrology and believing in science. . .> .
I did a quick search and indeed found this Pew report from October 2018:
The only real surprise about this table to me was the religious breakdown. I had the vague sense of mainline Protestants as being the sensible people, but they have the same rate of belief in astrology as the general population. But, hey, I guess they’re normal Americans (on average) so they have normal American beliefs. Also surprising that only 3% of atheists believe in astrology. I guess this makes sense, but it somehow seemed plausible to me for someone to not believe in God but believe in other supernatural things: indeed, I could imagine astrology as a sort of substitute for a traditional religious system. But I guess not. Also interesting when we think about the promoters of junk science in academia.
I’ve analogized
Brian Wansink to an astrologer who can make savvy insights about the world based on some combination of persuasiveness and qualitative understanding of the world, and then attribute his success to tarot cards or tea leaves rather than to a more prosaic ability to synthesize ideas and come up with good stories. But does Brian Wansink actually believe in astrology? What about Marc Hauser, Ed Wegman, Susan Fiske, and the whole crowd of people who like to label their critics as “second-string, replication police, methodological terrorists, Stasi,” etc? I doubt they believe in astrology, as that represents a competing belief system: it’s an industry that, in some sense, is an alternative to rah-rah Ted-talk science. I wouldn’t be surprised if prominent ESP researchers believe in astrology, but I also get the sense that mainstream junk-science promoters in academia and the news media don’t like to talk about ESP, as those research methods are uncomfortably close to theirs. They don’t want to endorse ESP researchers, as that would discredit their own work by association, but they don’t want to throw them under the bus, either, as they are fellow Ivy League academics, so their safest strategy is just to keep quiet about that stuff.
The larger point, though, is not belief in astrology per se, but the state of mind that allows people to believe in something so contradictory to our scientific understanding of the world. (OK, I apologize to the 29% of you who are not with me on this one. You can return to the fold when I go back to posting on statistical graphics, model checking, Bayesian computation, Jamaican beef patties, etc.) It’s not that, a priori, astrology _couldn’t_ be true: As with embodied cognition, beauty and sex ratio, ovulation and voting, air rage, ages ending in 9, and all the other Psychological Science / PNAS classics, we can come up with reasonable theories under which astrology is real and spectacular—it’s just that after years of careful study, nothing much has come up. And the possible theories out there aren’t really so persuasive: they’re bank-shot models of the world that could be fine if the goal was to gain understanding of a real and persistent phenomenon, but not so convincing without the empirical evidence.
Anyway, the point is that if 30% of Americans are willing to believe this sort of thing, it’s no surprise that some nontrivial percentage of influential American psychology professors are going to have the sort of attitude toward scientific theory and evidence that would lead them to have strong belief in weak theories supported by no good evidence. Indeed, not just support for _particular_ weak theories, but support for the general _principle_ that we should be nice to pseudoscientific theories (although, oddly enough, maybe not for astrology itself).
P.S. In defense of the survey respondents (though not of the psychology professors who support ideas such as the “critical positivity ratio” which make astrology look positively sane in comparison), belief in astrology (or, for that matter, belief in heaven, the law of gravity, or the square-cube law) is essentially costless. Why not believe, or disbelieve, these things? In contrast, belief or disbelief in evolution or climate change or implicit bias has potential social or political effects. Some beliefs are essentially private, while others have more direct policy implications.
I have less tolerance for prominent academic and media figures who aggressively support junk science when they don’t just express their belief in speculative theories supported by no real data, but then go on the attack against people who point out these emperors’ nudity. In addition, even a hypothetical tolerant, open-minded supporter of junk science—the sort of person who might believe in critical positivity ratio but also actively support the publication of criticisms of that work—can still do a certain amount of damage by diluting scientific journals and the news media with bad science, and by promoting sloppy work which reduces space for more careful research.
You know how they say that science is self-correcting, but only because people are willing to self-correct? Similarly, Gresham’s law is real, but only because people are willing to circulate counterfeit bills, or to circulate money they suspect may be counterfeit, while keeping their mouths shut until they can get rid of their wads of worthless stock.
P.P.S. Just to be clear: No, I don’t _know_ that astrology is a waste of time, and it could be that Marc Hauser was onto something real, even while he was fabricating data (according to the U.S. government, as quoted on Wikipedia), and the critical positivity ratio and ovulation and voting and all the rest . . . all these could be real—who knows! Just cos there’s no good evidence for a theory, that doesn’t make it false. I don’t want to suppress any of these claims. Publish it all somewhere, along with all the criticism of it. My problem with the promoters of junk science is not just that they promote science that I and others consider to be junk—we can be wrong!—but that they continually dodge, suppress, and fight against legitimate open criticism.
P.P.P.S. Again, #notallpsychologists. And of course the problem of junk science is not restricted to psychology, not at all. To the extent that professors of political science, economics, sociology, history, are strong believers in astrology or spoon bending or whatever (that is, belief in “scientific” paranormalism as describing some true thing about the natural world, not just an “anthropological” recognition that paranormal beliefs are something that can affect the world because people believe in it), this could screw up their research too. If a physicist or chemist believes in these things, I guess it’s not such a big deal. And, again, I’m not trying to suppress research into astrology, embodied cognition, ESP, beauty-and-sex-ratio, bottomless soup bowls, spoon bending, Bible Code, air rage, ovulation and voting, subliminal smiley faces, etc etc. Let a thousand flowers bloom! The point of this post is that, given that there’s a large chunk of the population that’s willing to believe in scientific-sounding theories that are not backed by any strong scientific theory or evidence, it should be no surprise that many professional scientists have this attitude. The consequences happen to show up particularly strongly in psychology, as this is an important field of study where theories can be vague and where there’s a long tradition of belief and action backed up by shaky data. That doesn’t mean that psychologists are bad people; they’re just working on hard problems, in an academic tradition that has a lot of failures in its history. Again, this is not a criticism, it’s just the way it is. And of course there’s a lot of great work being done in psychology. You have to work with the history you have.
Filed under Miscellaneous Science, Sociology, Zombies. 122 Comments
A BETTER WAY TO VISUALIZE THE SPREAD OF CORONAVIRUS IN DIFFERENT COUNTRIES?
Posted by Andrew on 10 April 2020, 9:01 am
Joel Elvery writes:
> Long-time listener, first-time caller. I’m an economist at the > Federal Reserve Bank of Cleveland.>
> I think I have stumbled on to a very effective way to visualize and > compare the trajectories of COVID-19 epidemics. This short post>
> describes the approach and what we learn from it, but the graph > above is enough to give you the gist.>
> I’m trying to get the word out to other people who are graphing COVID-19 data in case they also find this approach useful.
Continue reading ‘A better way to visualize the spread of coronavirus in different countries?’ »
Filed under Public Health, Statistical graphics. 72 Comments
UPHOLDING THE PATRIARCHY, ONE BLOG POST AT A TIME
Posted by Andrew on 9 April 2020, 9:29 am
A white male writes: > Your recent post>
> reminded me: partly because of your previous posts, I spent a fair amount of the last two years reading Updike, whom I’d never read before. It was time well spent. Thank you for mentioning him in your blog from time to time.
I find early Updike to be uniformly good, but later Updike novels are more uneven. I liked Rabbit is Rich and many of the later stories and criticism, but I tried to read Roger’s Version once and found it unreadable, even incompetently written.
Filed under Literature. 9 Comments
BIG TROUBLE COMING WITH THE 2020 CENSUS
Posted by Andrew on 8 April 2020, 9:10 am
OK, first things first. For readers of this blog who live in the United States: Don’t forget to fill out your census. They’re doing it online, and you should’ve received a letter in the mail last month telling you how to do it. And now the news. Dr. Z points us to this post by Diana Elliott and Robert Santos, “Unpredictable Residency during the COVID-19 Pandemic Spells Trouble for the 2020 Census Count.” Elliott and Santos write:
> Just before lockdowns were implemented across the country, there was tremendous movement and migration of people relocating to different residences to shelter in place. This makes sense for the people involved but could be disastrous for the communities they fled and the final 2020 Census counts.
> The 2020 Census, like most data collected by the US Census Bureau, > is residence based. . . . Most residences across America have > already received their 2020 Census invitation. Whether completed > online, by paper, by phone, or in person, the first official > question on the 2020 Census questionnaire>
> is “How many people were living or staying in this house, > apartment, or mobile home on April 1, 2020?” Households are > expected to answer this based on the concept of “usual residence> ,”
> or the place where a person lives and sleeps most of the time.>
> Despite written guidance provided on the 2020 Census on how to > answer this question, doing so may be wrought with complexities and > nuance from the pandemic.>
> First, research reveals that respondents do not often read > questionnaire instructions> ; they
> dive in and start answering. With many people scrambling to other > counties, cities, and states to hunker down for the long haul with > loved ones, this will lead to incorrect counts when people are > counted at temporary addresses.>
> Second, for many, the concept of “usual residence” has little > relevance in the uncertainty unfolding during the COVID-19 pandemic. > What if your temporary address becomes your permanent address? What > does “usual residence” mean during a global epidemic that could > stretch for 18 months or more> ?
> And perhaps more importantly, what should it mean? . . .>
> The US Census Bureau must act. It will need more processing time to > identify and remove duplicates in the returns—a phenomenon that > occurs regardless>
> of a pandemic—and will need to flag potential population spikes in > certain communities. . . .>
> Unfortunately, communities face a zero-sum game for decennial > population counts. Communities that gain population in 2020 because > of the pandemic will reap the benefits of better funding and > representation for the next decade. Communities with population loss > will receive less than they deserve.>
> Every census count brings new challenges, some of which lead to > miscounts that get brought to and battled over in court. Without a > proactive approach to the 2020 Census that addresses these residency > questions, the COVID-19 pandemic may be unintentionally inviting > communities to wage contentious court battles over the accuracy of > the count for years to come. A few weeks earlier, Santos and Elliott had written a post, Is It
Time to Postpone the 2020 Census?: > We know that COVID-19 testing in the US has proven inadequate, and > community spread has now taken hold. The virus has spread to 45 of > 50 states as of March 12, 2020, and it’s reported to be 10 times > more lethal than influenza and much more contagious. . . . Although > the decennial census is mandated by the Constitution, the extreme > challenges raised by the pandemic may warrant an unprecedented delay > to protect the census’s accuracy. These challenges include:>
> Difficulty finding and retaining enumerators . . .>
> Making hard-to-count populations even harder to count . . .>
> Lacking planning or protocols for conducting the census during a > pandemic . . .>
> Should the census be postponed? Extended? Canceled? . . . Regardless of the option, it is hard to imagine that the 2020 Census could simply go on as scheduled. Some hard decisions face the US Census Bureau. The health of our democracy may be at stake.
We should’ve listened to them back on March 13th. P.S. A commenter suggests they change the census form to clarify responses for people who are temporarily housed elsewhere because of coronavirus.
The problem is that the census form was already written, and I guess they don’t want to change the form in the middle of the Census. At this point they’d have to redo the whole thing, which I guess is what they should do, but that would be expensive. Also there are winners and losers from every change, and the winners from the current system might want to keep things as is. Finally, there are people outside and inside the government (but presumably not in the Census Bureau itself) who want to “drown the government in the bathtub” etc., and for them I guess it’s a plus if the census is a failure, as it will reduce the legitimacy of future governmental actions. This is related to the War on Data that Palko and I wrote about a few years ago and which Palko reblogged recently.
Filed under Political Science. 91 Comments
WEBINAR ON APPROXIMATE BAYESIAN COMPUTATION
Posted by Andrew on 7 April 2020, 11:26 pm
X points us to this online seminar series which is starting this Thursday! Some speakers and titles of talks are listed. I just wish I could click on the titles and see the abstracts and papers!
The seminar is at the University of Warwick in England, which is not so convenient—I seem to recall that to get there you have to take a train and then a bus, or something like that!—but it seems that they will be conducting the seminar remotely, which is both convenient and (nearly) carbon neutral.
Filed under Bayesian Statistics, Statistical computing. 13 Comments
“THE GENERALIZABILITY CRISIS” IN THE HUMAN SCIENCES
Posted by Andrew on 7 April 2020, 9:09 am
In an article called The Generalizability Crisis, Tal Yarkoni writes:
> Most theories and hypotheses in psychology are verbal in nature, yet > their evaluation overwhelmingly relies on inferential statistical > procedures. The validity of the move from qualitative to > quantitative analysis depends on the verbal and statistical > expressions of a hypothesis being closely aligned—that is, that > the two must refer to roughly the same set of hypothetical > observations. Here I argue that most inferential statistical tests > in psychology fail to meet this basic condition. I demonstrate how > foundational assumptions of the “random effects” model used > pervasively in psychology impose far stronger constraints on the > generalizability of results than most researchers appreciate. > Ignoring these constraints dramatically inflates false positive > rates and routinely leads researchers to draw sweeping verbal > generalizations that lack any meaningful connection to the > statistical quantities they are putatively based on. I argue that > failure to consider generalizability from a statistical perspective > lies at the root of many of psychology’s ongoing problems (e.g., > the replication crisis), and conclude with a discussion of several > potential avenues for improvement. I pretty much agree 100% with everything he writes in this article. These are issues we’ve been talking about for awhile, and
Yarkoni offers a clear and coherent perspective. I only have two comments, and these are more a matter of emphasis than anything else. 1. Near the beginning of the article, Yarkoni writes of two ways of drawing scientific conclusions from statistical evidence: > The “fast” approach is liberal and incautious; it makes the > default assumption that every observation can be safely generalized > to other similar-seeming situations until such time as those > generalizations are contradicted by new evidence. . . .>
> The “slow” approach is conservative, and adheres to the opposite > default: an observed relationship is assumed to hold only in > situations identical, or very similar to, the one in which it has > already been observed. . . . Yarkoni goes on to say that in modern psychology, it is standard to use the fast approach, that the fast approach gets attention and rewards, but that in general the fast approach is wrong, that instead we should be using the fast approach to generate conjectures but use the slow approach when trying to understand what we know. I agree, and I also agree with Yarkoni’s technical argument that the slow approach corresponds to a multilevel model in which there are varying intercepts and slopes corresponding to experimental conditions, populations, etc. That is, if we are fitting the model y = a + b*x + error to data (x_i, y_i), i=1,…,n, we should think of this entire experiment as study j, with the model y = a_j + b_j*x + error, and different a_j, b_j for each potential study. To put it another way, a_j and b_j can be considered as functions of the experimental conditions and the mix of people in the experiment. Or, to put it another way, we have an implicit multilevel model with predictors x at the individual level and other predictors at the group level that are implicit in the model for a, b. And we should be thinking about this multilevel model even when we only have data from a single experiment. This is all related to the argument I’ve been making for awhile about “transportability” in inference, which in turn is related to an argument that Rubin and others have been making for decades about thinking of meta-analysis in terms of response surfaces. To put it another way, all replications are conceptual replications.
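As a concrete (if simplified) sketch of that implicit multilevel model, the following simulates a handful of studies with their own intercepts and slopes and then fits varying intercepts and slopes across studies. This is my illustration, not Yarkoni's code; the variable names (study, x, y) and the use of statsmodels' MixedLM are just one convenient choice.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_studies, n_per = 8, 50
rows = []
for j in range(n_studies):
    a_j = 1.0 + rng.normal(0, 0.5)   # study-specific intercept
    b_j = 0.3 + rng.normal(0, 0.2)   # study-specific slope
    x = rng.normal(size=n_per)
    y = a_j + b_j * x + rng.normal(0, 1, size=n_per)
    rows.append(pd.DataFrame({"study": j, "x": x, "y": y}))
data = pd.concat(rows, ignore_index=True)

# Varying intercepts and slopes by study: y ~ x, with random effects on the
# intercept and on x within each study. A single-study analysis implicitly
# conditions on one draw of (a_j, b_j) and says nothing about the others.
model = smf.mixedlm("y ~ x", data, groups=data["study"], re_formula="~x")
fit = model.fit()
print(fit.summary())
```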
So, yeah, these ideas have been around for awhile. On the other hand, as Yarkoni notes, standard practice is to not think about these issues at all and to just make absurdly general claims from absurdly specific experiments. Sometimes it seems that the only thing that makes researchers aware of the “slow” approach is when someone fails to replicate one of their studies, at which point the authors suddenly remember all the conditions on generality that they somehow forgot to mention in their originally published work. (See here for an extreme case that really irritated me.) So Yarkoni’s paper could be serving a useful role even if all it did was remind us of the challenges of generalization. But the paper does more than that, in that it links this statistical idea with many different aspects of practice in psychology research.
That all said, there’s one way in which I disagree with Yarkoni’s characterization of scientific inferences as “fast” or “slow.” I agree with him that the “fast” approach is mistaken. But I think that even his “slow” approach can be too strong! Here’s my concern. Yarkoni writes, “The ‘slow’ approach is conservative, and adheres to the opposite default: an observed relationship is assumed to hold only in situations identical, or very similar to, the one in which it has already been observed.” But my problem is that, in many cases, I don’t even think the observed relationship holds in the situations in which it has been observed.
To put it more statistically: Claims in the sample do not necessarily generalize to the population. Or, to put it another way, correlation does not even imply correlation.
Here’s a simple example: I go to the store, buy a die, I roll it 10 times and get 3 sixes, and I conclude that the probability of getting a six from this die is 0.3. That’s a bad inference! The result from 10 die rolls gives me just about no useful information about the probability of rolling a six. Here’s another example, just as bad but not so obviously bad: I find a survey of 3000 parents, and among those people, the rate of girl births was 8% higher among the most attractive parents than among the other parents. That’s a bad inference! The result from 3000 births gives me just about no useful information about the probability of a girl birth.
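To put rough numbers on why those two datasets carry so little information, here is a quick back-of-the-envelope sketch (my arithmetic, not part of the original argument; the 50/50 split of the 3000 parents into two groups is an assumption made purely for illustration):

```python
import math

# 3 sixes in 10 rolls: the estimate 0.3 comes with a standard error of roughly 0.14,
# so the data are consistent with anything from a fair die to a heavily biased one.
p_hat, n = 0.3, 10
se_die = math.sqrt(p_hat * (1 - p_hat) / n)
print(f"die: 0.30 +/- {se_die:.2f}")

# 3000 births split into two groups of ~1500 (assumed): the standard error of the
# difference in girl-birth rates is on the order of 2 percentage points, while
# plausible true differences in the human sex ratio are a small fraction of a
# percentage point. An observed 8-point gap is therefore almost entirely noise.
p_girl, n_group = 0.49, 1500
se_diff = math.sqrt(2 * p_girl * (1 - p_girl) / n_group)
print(f"births: se of difference ~ {100 * se_diff:.1f} percentage points")
```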
So, in those examples, even a “slow” inference (e.g., “This particular die is biased,” or “More attractive parents from the United States in this particular year are more likely to have girls”) is incorrect. This point doesn’t invalidate any of Yarkoni’s article; I’m just bringing it up because I’ve sometimes seen a tendency in open-science discourse for people to give too much of the benefit of the doubt to bad science. I remember this with that ESP paper from 2011: people would say that this paper wasn’t so bad, it just demonstrated general problems in science. Or they’d accept that the experiments in the paper offered strong evidence for ESP, it was just that the evidence was overwhelmed by their prior. But no, the ESP paper was bad science, and it didn’t offer strong evidence. (Yes, that’s just my opinion. You can have your own opinion, and I think it’s fine if people want to argue (mistakenly, in my view) that the ESP studies are high-quality science. My point is that if you want to argue that, argue it, but don’t take that position by default.)
That was my point when I argued against over-politeness in scientific discourse. The point is not to be rude to _people_. We can be as polite as we want to individual people. The point is that there are costs, serious costs, to being overly polite to _scientific claims_. Every time you “bend over backward” to give the benefit of the doubt to scientific claim A, you’re rigging things _against_ the claim not-A. And, in doing so, you could be doing your part to lead science astray (if the claims A and not-A are of scientific importance) or to hurt people (if the claims A and not-A have applied impact). And by “hurt people,” I’m not talking about authors of published papers, or even of hardworking researchers who _didn’t_ get papers published because they couldn’t compete with the fluff that gets published by PNAS etc., I’m talking about the potential consumers of this research. Here I’m echoing the points made by Alexey Guzey in his recent post on sleep research. I do _not_ believe in giving a claim the benefit of the doubt, just cos it’s published in a big-name journal or by a big-name professor.
In retrospect, instead of saying “Against politeness,” I should’ve said “Against deference.” Anyway, I don’t think Yarkoni’s article is too deferential to dodgy published claims. I just wanted to emphasize that even his proposed “slow” approach to inference can let a bunch of iffy claims sneak in.
Later on, Yarkoni writes: > Researchers must be willing to look critically at previous studies > and flatly reject—on logical and statistical, rather than > empirical, grounds—assertions that were never supported by the > data in the first place, even under the most charitable > methodological assumptions. I agree. Or, to put it slightly more carefully, we don’t have to reject the scientific claim; rather, we have to reject the claim that the experimental data at hand provide strong evidence for the attached scientific claim (rather than merely evidence consistent with the claim). Recall the distinction between truth and evidence. Yarkoni also writes: > The mere fact that a previous study has had a large influence on the > literature is not a sufficient reason to expend additional resources > on replication. On the contrary, the recent movement to replicate > influential studies using more robust methods risks making the > situation worse, because in cases where such efforts superficially > “succeed” (in the sense that they obtain a statistical result > congruent with the original), researchers then often draw the > incorrect conclusion that the new data corroborate the original > claim . . . when in fact the original claim was never supported by > the data in the first place. I agree. This is the sort of impoliteness, or lack of deference, that I think is valuable going forward. Or, conversely, if we want to be polite and deferential to embodied cognition and himmicanes and air rage and ESP and ages ending in 9 and the critical positivity ratio and all the rest . . . then let’s be just as polite and deferential to all the zillions of unpublished preprints, all the papers that _didn’t_ get into JPSP and Psychological Science and PNAS, etc. Vaccine denial, N rays, spoon bending, whatever. The whole deal. But that way lies madness. Let me again yield the floor to Yarkoni: > There is an unfortunate cultural norm within psychology (and, to be > fair, many other fields) to demand that every research contribution > end on a wholly positive or “constructive” note. This is an > indefensible expectation that I won’t bother to indulge. Thank you. I thank Yarkoni for his directness, as earlier I’ve thanked Alexey Guzey,
Carol Nickerson, and others
for expressing negative attitudes that are sometimes socially shunned. 2. I recommend that Yarkoni avoid the use of the terms fixed and random effects as this could confuse people. He uses “fixed” to imply non-varying, which makes a lot of sense, but in economics they use “fixed” to imply unmodeled. In the notation of this 2005 post,
he’s using definition 1, and economists are using definition 5. The funny thing is that everyone who uses these terms thinks they’re being clear. But the terms have different meanings for different people. Later on page 7 Yarkoni alludes to definitions 2 and 3. The whole fixed and random thing is a mess.
CONCLUSION
Let me conclude with the list of recommendations with which Yarkoni concludes:
> Draw more conservative inferences
> Take descriptive research more seriously
> Fit more expansive statistical models
> Design with variation in mind
> Emphasize variance estimates
> Make riskier predictions
> Focus on practical predictive utility
I agree. These issues come up not just in psychology but also in political science, pharmacology, and I’m sure lots of other fields as well.
Filed under Decision Theory, Miscellaneous Statistics. 43 Comments
BDA FREE (BAYESIAN DATA ANALYSIS NOW AVAILABLE ONLINE AS PDF)
Posted by Andrew on 6 April 2020, 10:34 am
Our book, Bayesian Data Analysis, is now available for download for non-commercial purposes! You can find the link here, along with lots more stuff, including:
• Aki Vehtari’s course material, including video lectures, slides, and his notes for most of the chapters
• 77 best lines from my course
• Data and code
• Solutions to some of the exercises
We started writing this book in 1991, the first edition came out in 1995, now we’re on the third edition . . . it’s been a long time. If you want the hard copy (which I still prefer, as I can flip through it without disturbing whatever is on my screen), you can still buy it at a reasonable price.
Filed under Bayesian Statistics, Teaching. 11 Comments
PANDEMIC CATS FOLLOWING SOCIAL DISTANCING
Posted by Andrew on 6 April 2020, 9:16 am
Whoever said that _every_ post had to do with statistical modeling, causal inference, or social science? (Above photo sent in by Zad.)
Filed under Public Health. 39 Comments
CAREER ADVICE FOR A FUTURE STATISTICIAN
Posted by Andrew on 5 April 2020, 9:40 am
Gary Ruiz writes:
> I am a first-year math major at the Los Angeles City College in > California, and my long-term educational plans involve acquiring at > least one graduate degree in applied math or statistics.>
> I’m writing to ask whether you would offer any career advice to > someone interested in future professional work in statistics.>
> I would mainly like to know:>
> – What sort of skills does this subject demand and reward, more > specifically than the requisite/general mathematical abilities?>
> – What are some challenges someone is likely to face that are > unique to studying statistics? Any quirks to the profession at a > higher (mainly at the research) level?>
> – How does statistics contrast with related majors like Applied > Mathematics in terms of the requisite training or later subjects of> study?
>
> – Are there any big (or at least common) misconceptions regarding > what statistical research work involves?>
> – What are some of the other non-academic considerations I might > want to keep in mind? For example, what are other statisticians > usually like (if there’s a “general type”)? How does being a > statistician affect your day-to-day life (in terms of the time > investment, etc.), if at all?>
> – If you could give your younger self any career-related advice, > what would it be? (I hope this question isn’t too cliche, but I > figured it was worth asking).>
> – Finally, what are the most important factors that any potential statistician should consider before committing to the field?
My replies:
– Programming is at least as important as math. Beyond that, you could get a sense of what skills could be useful by looking at our forthcoming book, Regression and Other Stories, or by working through the Stan case studies.
– I don’t know that there are any challenges that are unique to studying statistics. Compared to other academic professions, I think statistics is less competitive, maybe because there are so many alternatives to academia involving work in government and industry.
– I don’t know enough about undergraduate programs to compare statistics to applied math. My general impression is that the two fields are similar.
– I don’t know of any major misconceptions regarding statistical research work. The only thing I can think of offhand is that in our PhD students we sometimes get pure math students who want to go into finance, I think in part because they think this will be a way for them to keep doing math. But then when they get jobs in finance, they find themselves running logistic regressions all day. So it might’ve been more useful for them to have studied applied statistics rather than learning proofs of the Strong Law of Large Numbers. But this won’t arise at the undergraduate level. I’m pretty sure that any math you learn as an undergrad will come in handy later.
– Regarding non-academic considerations: how your day-to-day life goes depends on the job. I’ve found lawyers and journalists to be on irregular schedules: either they’re in an immense hurry and are bugging me at all hours, or they’re on another assignment and they don’t bother responding to inquiries. Statistics is a form of engineering, and I think the job is more time-averaged. Even when there’s urgency (for example, when responding to a lawyer or journalist), everything takes a few hours. It’s typically impossible to do a rush job—and, even if you could, you’re better off checking your answer a few times to make sure you know what you’re doing. You’ll be making lots of mistakes in your career anyway, so it’s best to avoid putting yourself in a situation where you’re almost sure to mess up.
– Career advice to my younger self? I don’t know that this is so relevant, given how times have changed so much in the past 40 years. My advice is when choosing what to do, look at older people who are similar to you in some way and have made different choices. One reason I decided to go into research, many years ago, was that the older people I observed who were doing research seemed happy in their jobs—even the ones who were doing boring research seemed to like it—while the ones doing other sorts of jobs, even those that might sound fun or glamorous, seemed more likely to have burned out. Looking back over the years, I’ve had some pretty good ideas that might’ve made me a ton of money, but I’ve been fortunate enough to be paid enough to have no qualms about giving these ideas away for free.
– What factors should be considered by a potential statistician? I dunno, maybe think hard about what applications you’d like to work on. Typically you’ll have one or maybe two applications you’re an expert on. So choose something that seems interesting or important to you.
INTERESTING Y-AXIS
Posted by Andrew on 4 April 2020, 10:38 pm
Merlin sent along this one:

P.S. To be fair, when it comes to innumeracy, whoever designed the above graph has nothing on these people.
As Clarissa Jan-Lim put it:

> Math is hard and everyone needs to relax! (Also, Mr. Bloomberg, sir, I think we will all still take $1.53 if you’re offering).
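For anyone who missed the original flap, the $1.53 is presumably just the per-person division that the much-mocked claim got wrong. The figures below are the commonly cited ones, not taken from the post, so treat this as a back-of-the-envelope check rather than anything official:

```r
# Back-of-the-envelope check (figures assumed, not from the post):
# roughly $500 million in ad spending divided among roughly 327 million Americans.
500e6 / 327e6   # about $1.53 per person, not $1 million each
```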
MODEL BUILDING IS LEGO, NOT PLAYMOBIL. (TOWARD UNDERSTANDING STATISTICAL WORKFLOW)
Posted by Andrew on 4 April 2020, 9:46 am
John Seabrook writes:
> Socrates . . . called writing “visible speech” . . . A more contemporary definition, developed by the linguist Linda Flower and the psychologist John Hayes, is “cognitive rhetoric”—thinking in words.
>
> In 1981, Flower and Hayes devised a theoretical model for the brain as it is engaged in writing, which they called the cognitive-process theory. It has endured as the paradigm of literary composition for almost forty years. The previous, “stage model” theory had posited that there were three distinct stages involved in writing—planning, composing, and revising—and that a writer moved through each in order. To test that theory, the researchers asked people to speak aloud any stray thoughts that popped into their heads while they were in the composing phase, and recorded the hilariously chaotic results. They concluded that, far from being a stately progression through distinct stages, writing is a much messier situation, in which all three stages interact with one another simultaneously, loosely overseen by a mental entity that Flower and Hayes called “the monitor.” Insights derived from the work of composing continually undermine assumptions made in the planning part, requiring more research; the monitor is a kind of triage doctor in an emergency room.

This all makes sense to me. It reminds me of something I tell my students, which is that “writing is non-algorithmic,” which isn’t literally true—everything is algorithmic, if you define “algorithm” broadly enough—but which is intended to capture the idea that when writing, we go back and forth between structure and detail.
Writing is _not_ simply three sequential steps of planning, composing, and revising, but I still think that it’s useful when writing to consider these steps, and to think of Planning/Composing/Revising as a template. You don’t have to literally start with a plan—your starting point could be composing (writing a few words, or a few sentences, or a few paragraphs) or revising (working off something written by someone else, or something written earlier by you)—but at some point near the beginning of the project, an outline can be helpful. Plan with composition in mind, and then, when it’s time to compose, compose being mindful of your plan and also of your future revision process. (To understand the past, we must first know the future.)
But what I really wanted to talk about today is statistical analysis, not writing. My colleagues and I have been thinking a lot about workflow. On the first page of BDA, we discuss these three steps:

1. Model building.
2. Model fitting.
3. Model checking.
And then you go back to step 1. That’s all fine, it’s a starting point for workflow, but it’s not the whole story. As we’ve discussed here and elsewhere, we don’t just fit a single model: workflow is about fitting multiple models. So there’s a lot more to workflow; it includes model building, model fitting, and model checking as dynamic processes where each model is aware of others. Here are some ways this happens:

– We don’t just build one model, we build a sequence of models. This fits into the way that statistical modeling is a language with a generative grammar. To use toy terminology, model building is Lego, not Playmobil.
– When fitting a model, it can be helpful to use fits from other models as scaffolding. The simplest idea here is “warm start”: take the solution from a simple model as a starting point for new computation (a short sketch appears below). More generally, we can use ideas such as importance sampling, probabilistic approximation, variational inference, expectation propagation, etc., to leverage solutions from simple models to help compute for more complicated models.

– Model checking is, again, relative to other models that interest us. Sometimes we talk about comparing model fit to raw data, but in many settings any “raw data” we see have already been mediated by some computation or model. So, more generally, we check models by comparing them to inferences from other, typically simpler, models.

Another key part of statistical workflow is _model understanding_, also called interpretable AI. Again, we can often best understand a fitted model by seeing its similarities and differences as compared to other models.
Putting this together, we can think of a sequence of models going from simple to complex—or maybe a network of models—and then the steps of model building, inference, and evaluation can be performed on this network.
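To make the warm-start idea concrete, here is a minimal sketch in R. Everything in it is hypothetical: the data are made up, and `hier_model.stan` is an assumed Stan program for a varying-intercept model with parameters alpha, beta, sigma, and tau. The point is only the pattern of using a simple fit to initialize a richer one, not any particular model.

```r
library(rstan)

# Made-up data for illustration: J groups, a predictor x, an outcome y.
set.seed(1)
J <- 8
dat <- data.frame(group = rep(1:J, each = 10), x = rnorm(80))
dat$y <- 1 + 0.5 * dat$x + rnorm(J, sd = 0.3)[dat$group] + rnorm(80, sd = 0.5)

# Step 1: fit a simple complete-pooling model quickly.
pooled <- lm(y ~ x, data = dat)

# Step 2: warm start -- use the simple fit to construct initial values
# for the richer (hierarchical) model.
init_fun <- function() {
  list(
    alpha = rep(coef(pooled)[[1]], J),  # group intercepts start at the pooled intercept
    beta  = coef(pooled)[[2]],          # shared slope from the pooled fit
    sigma = sigma(pooled),              # residual sd from the pooled fit
    tau   = 0.1                         # small starting value for the group-level sd
  )
}

# Step 3: fit the hierarchical model (hier_model.stan is an assumed file that
# declares alpha[J], beta, sigma, tau), starting each chain at the warm start.
fit <- stan(
  file = "hier_model.stan",
  data = list(N = nrow(dat), J = J, group = dat$group, x = dat$x, y = dat$y),
  init = init_fun,
  chains = 4
)
```

The same scaffolding idea extends to using a variational or Laplace approximation from a simpler model as a starting point or proposal for the full computation.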
This has come up before—here’s a post with some links, including one that goes back to 2011—so the challenge here is to actually do something already! Our current plan is to work through workflow in some specific examples and some narrow classes of models and then use that as a springboard toward more general workflow ideas.

P.S. Thanks to Zad Chow for the adorable picture of workflow shown above.
UPDATE: OHDSI COVID-19 STUDY-A-THON.
Posted by Keith O’Rourke on 3 April 2020, 5:30 pm
Thought a summary in the read-below section might be helpful, as the main page might be a lot to digest.

The OHDSI COVID-19 group re-convenes at 6:00 (EST I think) Monday for updates.
For those who want to do modelling, you cannot get the data but must write analysis scripts that data holders will run on their computers and return results. My guess is that might be most doable through here, where custom R scripts can be implemented that data holders might be able to run. Maybe some RStan experts can try to work this through. A rough sketch of the general shape of such a script is given below.

Continue reading ‘Update: OHDSI COVID-19 study-a-thon.’ »
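To be concrete about the pattern described above (the analyst writes a self-contained script, the data holder runs it against their local database, and only aggregated results travel back), here is a minimal sketch in R. The table and column names are invented for illustration; this is not the actual OHDSI/OMOP schema or tooling.

```r
# Sketch of a script a data holder could run locally; only aggregates are returned.
# Table and column names are invented for illustration, not the OMOP schema.
library(DBI)

run_local_analysis <- function(con, output_file = "site_results.csv") {
  # con: a DBI connection to the site's local database.
  # Pull only the minimum needed from the local data.
  cohort <- DBI::dbGetQuery(con, "
    SELECT age_group, sex, outcome
    FROM covid_cohort
  ")

  # Fit a simple model locally (this step could be an RStan model instead).
  fit <- glm(outcome ~ age_group + sex, family = binomial(), data = cohort)

  # Return only aggregated results: coefficients, standard errors, counts.
  results <- data.frame(
    term     = names(coef(fit)),
    estimate = coef(fit),
    std_err  = sqrt(diag(vcov(fit))),
    n        = nrow(cohort)
  )
  write.csv(results, output_file, row.names = FALSE)
  invisible(results)
}
```

A coordinating group could then combine the per-site summaries (for example, in a hierarchical meta-analysis in Stan) without any patient-level data ever leaving the sites.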
NOISE-MINING AS STANDARD PRACTICE IN SOCIAL SCIENCE
Posted by Andrew on 3 April 2020, 9:04 am
_The following example is interesting, not because it is particularly noteworthy but rather because it represents business as usual in much of social science: researchers trying their best, but hopelessly foiled by their use of crude psychological theories and cruder statistics, along with patterns of publication and publicity that motivate the selection and interpretation of patterns in noise._

Elio Campitelli writes:

> The silliest study this week?
> I realise that it’s a hard competition, but this has to be the silliest study I’ve read this week. Each group of participants read the same exact text with only one word changed and the researchers are “startled” to see that such a minuscule change did not alter the readers’ understanding of the story. From the Guardian article (the paper is yet to be published as I’m sending you this email):
>> Two years ago, Washington and Lee University professors Chris Gavaler and Dan Johnson published a paper in which they revealed that when readers were given a sci-fi story peopled by aliens and androids and set on a space ship, as opposed to a similar one set in reality, “the science fiction setting triggered poorer overall reading” and appeared to “predispose readers to a less effortful and comprehending mode of reading – or what we might term non-literary reading”.
>>
>> But after critics suggested that merely changing elements of a mainstream story into sci-fi tropes did not make for a quality story, Gavaler and Johnson decided to revisit the research. This time, 204 participants were given one of two stories to read: both were called “Ada” and were identical apart from one word, to provide the strictest possible control. The “literary” version begins: “My daughter is standing behind the bar, polishing a wine glass against a white cloth.” The science-fiction variant begins: “My robot is standing behind the bar, polishing a wine glass against a white cloth.”
>>
>> In what Gavaler and Johnson call “a significant departure” from their previous study, readers of both texts scored the same in comprehension, “both accumulatively and when divided into the comprehension subcategories of mind, world, and plot”.
>>
>> The presence of the word “robot” did not reduce merit evaluation, effort reporting, or objective comprehension scores, they write; in their previous study, these had been reduced by the sci-fi setting. “This difference between studies is presumably a result of differences between our two science-fiction texts,” they say.
>>
>> Gavaler said he was “pretty startled” by the result.
> I mean, I wouldn’t dismiss out of hand the possibility of a one-word change having dramatic consequences (change “republican” to “democrat” in a paragraph describing a proposed policy, for example). But in this case it seems to me that the authors surfed the noise generated by the previous study into expecting a big change by just changing “sister” to “robot” and nothing else.

I agree. Two things seem to be going on:

1. The researchers seem to have completely internalized the biases arising from the statistical significance filter that lead to estimates being too high (as discussed in section 2.1 of this article), thus they came into this new experiment expecting to see a huge and statistically significant effect (recall the 80% power lie). A quick simulation at the end of this post illustrates the filter.

2. Then they do the experiment and are gobsmacked to find nothing (like the 50 shades of gray story, but without the self-awareness).

The funny thing is that items 1 and 2 kinda cancel, and the researchers still end up with positive press!

P.S. I looked up Chris Gavaler and he has a lot of interesting thoughts. Check out his blog! I feel bad that he got trapped in the vortex of bad statistics, and I don’t want this discussion of statistical fallacies to reflect negatively on his qualitative work.
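As promised in item 1 above, the statistical significance filter is easy to see in a couple of lines of simulation. The numbers below are made up for illustration (a small true effect measured with a noisy instrument), but the qualitative point is general: the estimates that happen to cross the significance threshold systematically exaggerate the true effect, which is exactly the sort of inflated expectation that sets researchers up to be “startled” by a null result.

```r
# Minimal simulation of the statistical significance filter (Type M error).
# Numbers are made up for illustration: a small true effect, a noisy study.
set.seed(123)

true_effect <- 0.1   # small true difference between conditions
se <- 0.5            # standard error of the estimate in each replication
n_sims <- 100000

est <- rnorm(n_sims, mean = true_effect, sd = se)   # simulated estimates
signif <- abs(est / se) > 1.96                      # "statistically significant" results

mean(signif)                           # power: only about 5% of studies reach significance
mean(est[signif])                      # average "significant" estimate: several times the true 0.1
mean(abs(est[signif])) / true_effect   # exaggeration ratio: roughly an order of magnitude
```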
CONFERENCE ON MISTER P ONLINE TOMORROW AND SATURDAY, 3-4 APR 2020
Posted by Andrew on 2 April 2020, 1:15 pm
We have a conference on multilevel regression and poststratification (MRP) this Friday and Saturday, organized by Lauren Kennedy, Yajuan Si, and me. The conference was originally scheduled to be at Columbia but now it is online. Here is the information. If you want to join the conference, you must register for it ahead of time; just click on the link.

Here are the scheduled talks for tomorrow (Fri):

> ELIZABETH TIPTON: RCT Designs for Causal Generalization
>
> BENJAMIN SKINNER: Why did you go? Using multilevel regression with poststratification to understand why community college students exit early
>
> JON ZELNER: From person-to-person transmission events to population-level risks: MRP as a tool for maximizing the public health benefit of infectious disease data
>
> KATHERINE LI: Multilevel Regression and Poststratification with Unknown Population Distributions of Poststratifiers
>
> QIXUAN CHEN: Use of administrative records to improve survey inference: a response propensity prediction approach
>
> LAUREN KENNEDY AND ANDREW GELMAN: 10 things to love and hate about MRP
And here’s the schedule for Saturday:

> SHIRO KURIWAKI AND SOICHIRO YAMAUCHI
>
> ROBERTO CERINA: Election projections using available data, machine learning, and poststratification
>
> DOUGLAS RIVERS: Modeling elections with multiple candidates
>
> YAJUAN SI: Statistical Data Integration and Inference with Multilevel Regression and Poststratification
>
> YUTAO LIU: Model-based prediction using auxiliary information
>
> SAMANTHA SEKAR
>
> CHRIS HANRETTY: Hierarchical related regression for individual and aggregate electoral data
>
> LUCAS LEEMANN: Improved Multilevel Regression with Post-Stratification Through Machine Learning (autoMrP)
>
> LEONTINE ALKEMA: Got data? Quantifying the contribution of population-period-specific information to model-based estimates in demography and global health
>
> JONATHAN GELLAR: Are SMS (text message) surveys a viable form of data collection in Africa and Asia?
>
> CHARLES MARGOSSIAN: Laplace approximation for speeding computation of multilevel models