The crux of the Bulwark post was that limiting abortion to women who were raped raises a serious, and underappreciated, problem: It puts the burden on women to prove they were raped to avail themselves of the exception, and police have not always been accommodating of women who complain that they've been raped. Of course, there is the possibility that someone will falsely claim to be a rape victim to get an exemption from an abortion ban, leaving the question whether to favor false positives or false negatives. This is more a political issue than anything else.
But buried within the post was this assertion.
According to U.S. Justice Department data crunched by the Race, Abuse and Incest National Network, more than two-thirds of all sexual assaults go unreported.
Two-thirds is certainly a lot, but there’s a definitional problem here. If they’re unreported, then they’re an unknown and can’t be quantified. On top of that, there are a slew of additional unknowns, such as whether claims of unreported sexual assaults are, indeed, sexual assaults. But since they’re unreported, who knows? The link to RAINN has been sliced and diced many times to point out its baselessness over the years, and yet here it is, still being regurgitated as if there were any validity.
National studies have consistently shown that the incidence of false reporting of sexual assault is extremely low, ranging from 2 percent to 10 percent, similar to that for other crimes. But the incidence of rape survivors not being believed or otherwise mistreated is a lot higher than that.
What national studies have consistently shown that? The link here goes to a bald assertion devoid of any studies, which is understandable since the study from which the “2 percent to 10 percent” claim derives neither says that nor has any legitimate basis. But it’s still being regurgitated, baselessness notwithstanding.
In an editorial in the New York Times about three bills now in Congress, all of which would make good laws highly worthy of passage, this line is tossed in.
While two-thirds of people who smoke crack are white, 80 percent of people who have been convicted of crack offenses are Black.
The link for the “two-thirds” assertion, found at the website of advocacy organization Drug Policy Alliance, provides this basis.
Reform advocates say no other single federal policy is more responsible for gross racial disparities in the federal criminal justice system than the crack/powder sentencing disparity. Even though two-thirds of crack cocaine users are white, more than 80 percent of those convicted in federal court for crack cocaine offenses are African American.
Again, it’s a bald assertion, and the line is largely a throwaway in the body of the editorial, but it regurgitates a claim for which no basis exists. Given that black people make up slightly more than 13% of the population, it would not be surprising if two-thirds of crack users are white, but there is no basis for the assertion, not that anyone seems to notice.
White and Hispanic participants spread their drug use relatively evenly across the powder, crack, heroin, and combination heroin markets. Black participants primarily used crack.
In each of these examples, the underlying arguments are sound and the purposes for which they’re proffered have merit. Personally, I generally agree with them and support them. But if so, why then would I raise issues with these claims used to bolster views with which I agree? Because they’re false and baseless, even if they serve to support positions with which I agree. Lies for a good cause are still lies.
There is a question whether the people writing up these positions are aware that they are spewing lies. They may know better but do it anyway. They may not know better, as so many have internalized baseless claims that are repeated ad nauseam. But given that they have gone so far as to include links to support their assertions, and the links are worthless garbage that neither say what they’re purported to say nor prove any point, it’s impossible to accept the premise that the writers were unaware that their sources weren’t sources at all, but regurgitation of baseless claims upon which their baseless claims rested.
Presumably, their inclusion served to create the gloss of credibility under the assumption that the points were sufficiently tangential to the core arguments that readers wouldn’t bother to click on the links and check to see whether they actually provided a valid basis for the claims. After all, they included links which most readers will assume prove the point, so why bother to look for themselves? I mean, would the New York Times lie?
We are awash in false and baseless statistical claims. Many try to use studies and numbers to create the appearance of legitimacy, even though they’re statistically invalid. It could be bad questions in a survey. It could be an unrepresentative universe of sources. It could be, as is far too often the case, that unknowns provide a ripe opportunity to make stuff up in furtherance of the cause. Take a survey of women on campus as to who has been the victim of a sexual assault and you might get an extremely high percentage claiming they’re “survivors.” Are they, or do they see an opportunity to air their grievance for the cause in solidarity with other women who had bad dates or post hoc regret?
And yet, here we are, with seemingly credible sources regurgitating incredible claims. If a lie gets repeated a million times, is it any less of a lie? But if a million people believe it to be true, is it then their “truth”? Are we better off believing in lies for good causes? Are we any better able to deal with real problems by believing in false assertions? But who wants to undermine an otherwise good goal by noting that some of the assertions proffered in its favor are baseless?