The test of all beliefs is their practical effect in life. If it be true that optimism compels the world forward, and pessimism retards it, then it is dangerous to propagate a pessimistic philosophy. One who believes that the pain in the world outweighs the joy, and expresses that unhappy conviction, only adds to the pain. … Life is a fair field, and the right will prosper if we stand by our guns.
Let pessimism once take hold of the mind, and life is all topsy-turvy, all vanity and vexation of spirit. … If I regarded my life from the point of view of the pessimist, I should be undone. I should seek in vain for the light that does not visit my eyes and the music that does not ring in my ears. I should beg night and day and never be satisfied. I should sit apart in awful solitude, a prey to fear and despair. But since I consider it a duty to myself and to others to be happy, I escape a misery worse than any physical deprivation.
Think about property rights.
Take this computer, for example. I’m assuming it’s yours. You own this computer. It would be wrong for someone to take it from you without your consent. You have the right to do whatever you want with it—destroy it, even.
Now think about your car. You own that car. It’s yours to keep until you don’t want it anymore, and nobody can take it from you without your permission. Same goes with your house, your dog, your lawnmower, your personal library.
But think too deeply about property rights, and you’ll realize things aren’t as clear-cut as I’ve just made them out to be. For example, I’ve listed only material goods thus far. It’s easy to see when rights of ownership in material property are violated. But what about non-material property, like trademarks and slogans?
Coca-Cola, for example, owns the word “Coca-Cola.” It’s a different type of ownership, though, because others are allowed to use the word as often as they want. I, for one, say it all the time. I’m even writing it here. COCA-COLA.
Have I violated Coca-Cola’s property rights? I don’t think so, but I’m actually not sure. If I have, who exactly have I harmed? To whom does the word “Coca-Cola” belong? John Pemberton came up with the name sometime in the late 19th century, but he’s not alive anymore. Technically the word belongs to The Coca-Cola Company in Atlanta, GA, so I’ve violated the company’s rights to the word. But people at this company come and go. It’s probably the case that no one works there today who worked there 50 years ago. The company, then, exists independent of the people who work there, which means the rights belong to an inanimate, non-living thing. What other property rights can inanimate, non-living things hold?
Another area where property rights get confusing is with regard to how ownership begins. When John Cabot claimed all of North America for England in 1497, did all of North America belong to England? I think most would say no. But what if he had laid claim to a small, uninhabited island off the coast of Maine—would that be OK? I think most would say yes, as long as the island had truly never been inhabited before. First come, first served, right?
There’s another gray area—first come, first served. Imagine a pile of cash on the sidewalk left by someone as a gift to passersby. Say there’s a sign on the pile indicating that the gift belongs to whoever finds it and wants it. But my neighbor finds the pile first and calls me to come look at it. I grab it and take it for myself. Is it “rightfully” mine, in the sense that no one is allowed to take it from me? He saw it first, but I grabbed it first. Who has the better claim, if either of us has any claim at all? Perhaps the city, which owns the sidewalk on which the cash was placed.
These are hypothetical situations, of course, but analogous situations are happening all the time in the realm of intellectual and digital property. These are serious issues with real-life implications.
My point here is to show that property rights aren’t very black-and-white, and that we should be careful when tinkering with them. Rights to intellectual property, digital property, privacy, and even material goods are fragile things. I don’t think anyone knows exactly how to divvy these things up. So take care when talking and thinking about property, and do what you can to further this dialogue in a helpful way—a way that recognizes both the undeniable importance of property rights to sustaining a rational market order and the gray areas inherent in the very notion of property.
In the U.S., climate change somehow has become a litmus test that identifies you as belonging to one or the other of these two antagonistic tribes. When we argue about it, Kahan says, we’re actually arguing about who we are, what our crowd is. We’re thinking, People like us believe this. People like that do not believe this. For a hierarchical individualist, Kahan says, it’s not irrational to reject established climate science: Accepting it wouldn’t change the world, but it might get him thrown out of his tribe.–Joel Achenbach, “Why Do Many Reasonable People Doubt Science?”
If you’re like most people, you’re hearing a lot about net neutrality but know almost nothing about it. It has something to do with the internet. President Obama likes it. Republicans don’t seem to get it, but they don’t like that Obama likes it.
That was me a few weeks ago. Then I read up on net neutrality, and now I’ve written a piece at Enhancing Capital that (I hope) familiarizes people with net neutrality and helps put the issue in some context. I also offer some advice for investors who might have positions in some stocks whose performance could be affected—namely, that both ISPs (like Comcast and Verizon) and large web content owners (like Netflix and Google) could see some volatility in the coming weeks and months, as net neutrality is approved by the FCC (which is quite likely), is fought by Republicans in Congress (also likely), and reveals its true nature (whether it will be a game-changer for the internet as we know it).
I’m not a big fan of net neutrality. I understand its proponents’ arguments and their fear that big, powerful ISPs “colluding” with big, powerful content owners could stifle competition. But I don’t like government meddling in markets, and frankly it only makes sense to me that barriers to entry become higher as an industry gets older. Trying to prevent this from happening will, I think, have some unfortunate consequences.
I took statistics in the twelfth grade. I didn’t do well, but that’s because I wasn’t trying. I do remember learning about p-values, though, and using them to do hypothesis testing.
Then in college, I took business statistics. We learned all about p-values and used them in class probably every day. Two years later, I took econometrics, which basically assumed familiarity with what a p-value is and how to use it to do hypothesis testing.
After all that, I can’t really say I ever understood p-values. I used them by remembering the steps my teachers took to do hypothesis testing, but I didn’t know why I was using them, and I certainly didn’t know how to use them beyond the confines of a very specific, black-and-white problem posed in a textbook somewhere.
But tonight, all that changed. I finally understand what a p-value is, now that I’m a quarter of the way through a master’s in economics at George Mason University. And now that I get it, I’m going to explain it to you in a very simple and straightforward way—that is, the one way it was never explained to me by any professor or textbook through three statistics classes.
(I realize this is a divergence from my usual economic/political/financial commentary on this blog, but I’m hoping it will help at least a few people avoid the frustration I had trying to learn this from dry, hard-to-follow textbooks.)
What is a p-value?
Suppose you want to find the average wage of the American adult population. First, you make a guess, based on information from the Bureau of Economic Analysis, that this average wage equals $15 per hour. Armed with this hypothesis, you then ask 1,000 people (your sample) how much they make per hour. You add up their answers, divide by 1,000, and come up with the mean: say, $17. Because we’re doing statistics and want to sound like statisticians, we’ll call this mean the expected value of your sample.
So the expected value of your sample’s hourly wage is $17, which is $2 more than your original hypothesis. In other words, your hypothesis is $2 less than what you observed in your survey.
This makes you worry that your sample must not accurately reflect the entire population. A $2 difference is pretty big, relative to the numbers we’re working with. But on the other hand, maybe it means you’ve discovered that your original hypothesis is wrong, and that your findings need to be seen by economists at the Bureau of Economic Analysis, because their number might be wrong.
But how can you know which is true? Is your sample bad, or is the Bureau of Economic Analysis wrong?
There are several things you must do in order to answer that question, but one of the first steps involves calculating a p-value.
What is a p-value? Well, let’s imagine you could draw another sample—that is, ask another 1,000 people about their wages. You don’t actually do it, but it’s a hypothetical possibility. After you gathered the answers, you’d calculate the average wage associated with that sample, which (again) you’d refer to as that second sample’s expected value. The probability of this second sample’s expected value being at least as far from your hypothesis of $15 as your first sample’s expected value of $17 was (that is, at least a $2 difference) is the p-value.
I repeat in more general terms:
- The p-value is the probability that drawing another sample and calculating its expected value will yield a number that is at least as different from the original hypothesis as the first sample’s expected value was.
So, referring back to our example: imagine it’s very likely that another hypothetical sample will yield an expected value at least $2 away from $15—say, $11 or $20 or $22…anything at least $2 away from $15, because $2 was the difference we actually observed in the real, non-hypothetical sample we drew at the beginning. If so, the p-value is very high.
Again, if it’s likely that taking another sample would result in an expected value at least as different from your hypothesis as your first sample’s was, your p-value is high. The exact probability of this happening is the p-value. So if it would happen 90 percent of the time, the p-value is 90%.
If it’s unlikely that another sample will result in an expected value that is more than $2 different from your hypothesis of $15, then the p-value is low. If only one out of one hundred samples will yield an expected value more than $2 different from $15, then the p-value is 1%.
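To make this concrete, here’s a minimal simulation sketch of the idea. This is not how p-values are usually calculated in practice, and the assumed $25 spread in individual wages is a number I made up purely for illustration: we pretend the $15 hypothesis is true, draw many hypothetical samples of 1,000 people, and count how often a sample’s expected value lands at least $2 away from $15.

```python
# A minimal simulation sketch of the p-value idea, under made-up assumptions:
# the hypothesis (mean wage $15) is taken to be true, and individual wages
# are assumed to vary with a $25 standard deviation.
import random

random.seed(42)

NULL_MEAN = 15.0      # hypothesized average wage
ASSUMED_SD = 25.0     # assumed spread of individual wages (made up)
SAMPLE_SIZE = 1000    # 1,000 people per hypothetical sample
OBSERVED_DIFF = 2.0   # our real sample's mean was $17, i.e. $2 away

def sample_mean():
    """Draw one hypothetical sample of wages and return its mean."""
    wages = [random.gauss(NULL_MEAN, ASSUMED_SD) for _ in range(SAMPLE_SIZE)]
    return sum(wages) / SAMPLE_SIZE

# Count how often a fresh sample lands at least $2 away from $15.
trials = 2000
extreme = sum(1 for _ in range(trials)
              if abs(sample_mean() - NULL_MEAN) >= OBSERVED_DIFF)
p_value = extreme / trials
print(f"simulated p-value: {p_value:.4f}")
```

With these particular made-up numbers the simulated p-value comes out small, meaning a $2 miss would be rare if the hypothesis were true; a larger assumed spread in wages would push it higher.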
Now that you’ve got that down, understanding the following technical definition of a p-value should be easier.
- The p-value, sometimes called the “probability value,” is the probability of drawing a statistic from another hypothetical sample that is at least as adverse to the hypothesis as the one you actually computed in your real, non-hypothetical sample.
Here’s another definition. Sometimes hearing things a few different ways helps.
- The p-value is the probability of observing an expected value of a sample that is at least as different from the hypothesis as the expected value of the sample whose quality is the subject of your investigation.
Now here is the technical definition your teacher or professor probably wants to hear from you.
- The p-value is the probability of observing, by pure random sampling variation, a sample expected value at least as different from the null hypothesis value as the observed sample’s expected value, assuming that the null hypothesis is true.
I hope that makes sense. I hope it also makes sense that a high p-value means you are less likely to reject your original hypothesis than a low p-value does. In our example from above, a high p-value means it’s likely that further samples will yield numbers at least as high as $17 or as low as $13 (both $2 away from $15). This means there is probably a considerable degree of random sampling variation and that our sample, while yielding an expected value $2 different from the hypothesis, really shouldn’t be thrown out.
On the other hand, a low p-value—say, 5%—means that the sample you drew is about as far as you’ll get from your hypothesis while staying within the realm of random sampling variation. The lower it gets, the more likely it is that your original hypothesis is wrong, since the odds of having drawn that sample by random chance get smaller and smaller. If it’s 0.5%, then it’s very likely that the difference between your hypothesis and your sample’s expected value is attributable to something other than random chance (i.e., that your hypothesis is wrong).
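For what it’s worth, the decision rule implied here fits in a couple of lines. The 5% cutoff (usually called the significance level) is just a common convention, not something derived in this post:

```python
# A tiny sketch of the conventional hypothesis-testing decision rule.
# The 0.05 cutoff (the "significance level") is a convention, not a law.
def reject_null(p_value, significance_level=0.05):
    """Reject the hypothesis when the p-value falls below the cutoff."""
    return p_value < significance_level

print(reject_null(0.90))   # high p-value: stick with the hypothesis
print(reject_null(0.005))  # very low p-value: reject it
```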
That’s the p-value in a nutshell. I didn’t get into how to calculate one, but that’s a topic for another day. I just wanted to fill what I perceive as a gap in the way the p-value (and statistics at large, for that matter) is taught. Anyway, you need to understand this before you begin studying how to calculate p-values. In fact, understanding exactly what a p-value is will even help you remember how to calculate one come exam day.
Christian Christensen with a superb, critical analysis of just what we get wrong by lending Brian Williams’ apology any ear—namely, that we make a bigger deal of his fake story and half-hearted apology than of the true tragedy of a war that Williams and his colleagues cheer-led from the beginning.
Given that Williams works for NBC, his participation in the construction of a piece of fiction during the US invasion and occupation of Iraq is apt. US network news, together with outlets such as CNN, aggressively cheer-led an invasion predicated on a massive falsehood: the Iraqi possession of WMD. What is jarring, however, is the fact that Williams’ sad attempt to inject himself into the fabric of the violence is getting more ink and airplay than the non-existence of WMD did back in the early-to-mid 2000s: a lie that provided the justification for a military action that has taken the lives of hundreds of thousands of Iraqi civilians.
From embedded journalists to ultra-militaristic news logos and music, US television news media were more than willing to throw gas on the invasion fire. “Experts” in the studio were invariably ex-generals looking to pad their pensions, while anti-war activists (who spoke for sizable portions of the US and UK populations back in 2003) were avoided like the plague. After all, what news organization wants to be tarred with the “peace” brush when flag-waving jingoism sells so incredibly well? The one-sidedness of coverage, particularly in the US, bordered on the morally criminal.
I don’t know much about the ongoing net neutrality debate (which I gather is to end when the FCC passes new rules this month), but it appears to me that a major reason behind the FCC’s push for “net neutrality” is a general complaint that internet service providers (ISPs), which often face little competition in the regions where they operate, treat customers poorly and charge too much. The complaint, in other words, is that ISPs enjoy “natural monopolies” that allow them to rake in profits without improving service to customers.
(For those who don’t know, “net neutrality” basically turns the internet into a public utility by regulating ISPs like providers of other utilities, such as electricity and water.)
But by what standard are we judging the way ISPs treat customers? Who is to say that they are making too much or offering too little? If we’re paying too much for internet service now, then what should we be paying? How are we to know a fair price for internet service without a market for internet service?
I understand that ISPs may have gained certain privileges in the past from the government that may have given them unfair advantages. But is the solution to end the market for internet service altogether?
This reminds me of one aspect of the socialist calculation debate, whereby Austrian economists (among others) revealed the self-destructive nature of socialism. One pillar of their argument (and I’m simplifying here) is that without a market to study and observe, central planners will not know what prices to mandate for what goods. The result will be the production of too much or too little of regulated goods–distortive resource misallocations that result in excess supply and/or demand.
Again, I don’t know much about net neutrality. Read my comments in light of better analyses, like this one featuring Tech Freedom president Berin Szoka.
“It just seems to me that maybe if you open up our doors in a fair way and unleashed the spirit of peoples’ hard work, Detroit could become in really short order, one of the great American cities again. Now it would look different, it wouldn’t be Polish…But it would be just as powerful, just as exciting, just as dynamic. And that’s what immigration does and to be fearful of this, it just seems bizarre to me.”-Jeb Bush
Democracy is not an intrinsic good, after all; if it were, democratic institutions could not have produced the Nazis. Rather, a functioning democracy comes only as the late issue of a decently morally competent and stable culture.-David Bentley Hart
Can a poor job market cause rising college attendance?
Of course. Especially when financial aid is easy to find. If someone can’t find a job but can easily finance an education that improves their value in the job market, they’ll be more likely (on the margin) to go back to school and put off their job search until later.
But to what extent does rising college attendance indicate a poor job market for young and young-ish people? For example, is the rise in college attendance over the past seven years mostly due to a poor job market for 18-to-30-year-olds?
I think that could definitely be true, but the key word is “mostly.” We’d have to know what percentage of students wouldn’t have gone to school had they been able to more easily find work instead. Do these students account for more than 50 percent of recent increases in college attendance? I doubt it, but I really don’t know.
I do know, though, that the number of people attending college has steadily increased over time. And I have every reason to believe that a long-term upward trend in per capita GDP (which exists) should correlate with a long-term upward trend in the number of people attending college. I’ll call this the “natural” rise in college attendance over time.
The key to knowing whether a poor job market for young and young-ish people since the 2008 financial crisis is a big reason for increased college attendance is finding some way to subtract the portion of the increase caused by this “natural” upward trend from the total increase in attendance. The number we’re left with should theoretically equal the number of students who would rather be working than in school, but could not find work and so chose school instead.
But even that is pretty iffy, because whether one wants to be in school or not depends, in part, on the differences in the type of work available to degree- and non-degree holders, which in turn is partly determined by the health of the labor market. It’s all very confusing.
Regardless, this has implications for discussions about the true value of figures like the unemployment rate, which I’ve seen attacked lately. The unemployment rate doesn’t count people who’ve quit looking for work as “unemployed,” so the story goes, and thus is really just a big lie.
In more precise terms, it’s the drop in the labor force participation rate (LFPR) that is not well-reflected in the unemployment rate. In fact, dropping out of the labor force entirely (that is, unemployed people quitting their job searches) lowers the unemployment rate, ceteris paribus, by removing people from the equation altogether. For example, if the labor force includes five unemployed people and 105 people total, those five people dropping out of the labor force would leave 100 people employed and 100 in the labor force—a zero percent unemployment rate.
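The arithmetic in that toy example can be sketched in a few lines (the 105-person labor force is, of course, a made-up number):

```python
# Checking the toy example above: 5 unemployed people dropping out of a
# 105-person labor force takes the measured unemployment rate from about
# 4.8% to 0%, even though nobody found a job.

def unemployment_rate(employed, unemployed):
    """Unemployed people as a share of the labor force (employed + unemployed)."""
    labor_force = employed + unemployed
    return unemployed / labor_force

before = unemployment_rate(employed=100, unemployed=5)  # 5/105, about 4.8%
after = unemployment_rate(employed=100, unemployed=0)   # 0/100 = 0%
print(f"before dropout: {before:.1%}, after dropout: {after:.1%}")
```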
But a falling LFPR doesn’t happen only when people get so discouraged that they quit looking for work and “drop out of the labor force.” It can happen for several reasons, of which a “natural” rise in college attendance rates is one. I discuss other reasons, like an aging population and planned early retirement, here.
All this to say, things like rising college attendance and a falling LFPR cannot, in themselves, be considered a gauge for the health of the labor market. These metrics are influenced by a variety of factors, and we shouldn’t necessarily be suspicious when they move one way or another. I do think college attendance is higher and LFPR is lower than levels we’d see if we hadn’t had a recession, but knowing whether the portion of these figures’ movement directly attributable to a poor job market is significant is a matter of subtracting their “natural” movement from the total movement—a difficult task, indeed.