Thought: Welfare-Maximizing Speeding Fines

4/30/15

By Kevin DeLuca

ThoughtBurner

Opportunity Cost of Reading this ThoughtBurner post: $3.31 – about 15.1 minutes

Before you read this, make sure you’ve read my earlier post about speeding and the driver’s maximization problem. This is the second of a two-part ThoughtBurner series, and this post will look at the issue of speeding from the perspective of a city planner or benevolent government.

Imagine that you are working for a city government, and the local police chief wants to know how much the monetary penalty for speeding should be. The reason there are speed limits (allegedly) is so that people will not drive dangerously. Clearly, then, as a government official you would want people to follow the speed limits. This would probably be best overall, since it is bad for ‘society’ if a lot of people are driving dangerously.

But your purpose isn’t to completely stop people from speeding; it is simply to deter them from speeding. Or, in other words, you want to create incentives that will make them follow the speed limit more often. If you really wanted to make sure people didn’t speed ever, you could just make the fine for speeding ridiculously high – $100 million if you get caught. The expected costs would be way too high for any rational person to speed. But, clearly, taking every penny from someone who drives a little too fast is a net negative for society, so you want to avoid that scenario.

To make things even more confusing, your government actually makes money when it gives out more speeding tickets. Even though speed limits and speeding fines are meant to stop people from speeding, the government would lose revenue if that actually happened. The fines from speeding tickets can be spent on other useful things, for example fixing roads, and this benefits society as well.

So, it’s a tricky balance to find. People benefit from speeding, but speeding also imposes costs on society. The government benefits financially if more speeding tickets are given out, but if more people are speeding then roads are more dangerous. The government’s problem, and more specifically your problem as the city planner, is to figure out how expensive these speeding ticket fines ought to be given all the benefits and costs of speeding.

THE GOVERNMENT’S PROBLEM

Taking the perspective of the government, we can try to find the optimal level of speeding ticket fines. The benevolent government can be thought of as an optimizing agent itself, who wants to maximize the welfare of society.

In a simple model, there are three things that the government needs to take into consideration when deciding how expensive fines should be:

  • The speeding ticket revenue that can pay for other public goods
  • The cost of enforcing speed limits
  • The cost of accidents caused by speeding

The government can also control three things:

  • The actual speed limits (which affects the cost-benefit analysis of speeding individuals)
  • How expensive speeding tickets are (revenue for public goods)
  • The amount of enforcement (cost of enforcement)

Accident costs are outside of government control – they will affect the optimizing solution but the government can do nothing about them (except indirectly, as we will soon see).

For simplicity, I will make a few additional assumptions for easier calculations. First, even though the government has control over many factors, we will suppose that the government only focuses on choosing optimal fine levels. Notice that this is the easiest policy route to take – the process of changing the cost of a speeding ticket is basically costless itself, so any necessary adjustments would be relatively easy to make. Increasing enforcement is expensive, hard to do, and probably unpopular. Also, there has already been a lot of research done by road building people on how best to set speed limits[i], so I will assume that the government has already set speed limits at an optimal level.

By altering the fines, the government can affect drivers in two ways. The government can either stop people from speeding completely (turn more people into Punctual Perrys), or change how speedy people speed (change the value of s*). These two strategies are illustrated below:

[Figure: raising the base fine shifts the expected cost curve up, turning more speeders into non-speeders]

[Figure: raising the per-mph penalty steepens the expected cost curve, changing speeders' optimal speed s*]

The first figure happens when the government raises the base fine for speeding – in Travis County, this would mean raising the $105 base fine. The second figure happens if the government changes how quickly the fine increases (the derivative of fines with respect to speeding) – in Travis County, this would mean raising the $10 per 1 mph-over-the-limit penalty.

We will also assume that the government solves this problem on a year-to-year basis, so it considers costs and benefits over the course of a year (rather than on a single day or over an entire century). This is a realistic assumption because governments usually make budget decisions annually[ii].

Every year then, the government should try to solve what I call the government’s optimization problem. In words, the government’s objective is to maximize the benefits to society minus the costs to society, by choosing speeding fines. We can write this as:

max_F [ R(p, N, F) − C(e) − A(r(F), N, c) ]

Don’t worry if it looks complicated. It’s not, it will all be ok! R(p,N,F) is just the amount of revenue brought in by speeding tickets. C(e) is just the fixed costs of enforcing some “e” amount. And the last term, A(r(F),N,c), represents the societal costs of accidents.

The revenue function is straightforward to calculate. It is just the number of tickets multiplied by the cost of a ticket. We can express the number of tickets each year as the probability of getting a ticket, p, multiplied by the number of people in the city, N. If we let F be the average cost of a speeding ticket, we have an expression for revenue:

R(p, N, F) = p · N · F

Since we are focusing on setting fines, we are also assuming that enforcement costs remain the same. The government doesn’t increase the number of police officers, or the amount of time they spend trying to catch people. They still have to pay all of the costs of enforcement, but they are fixed costs, which will not affect our optimizing solutions. This means that C(e) is fixed and we know that changing F will not affect these fixed costs.

Next, while the government cannot control how much an accident costs on average, it can reduce the total costs of accidents by decreasing how many people get into accidents. By increasing speeding fines, the government can deter people from speeding, which might lead to a reduction in accidents. In this sense, the accident rate can be thought of as a function of speeding fines; r(F).

Last, the societal cost of accidents is really just the number of accidents multiplied by the average cost of an accident. If we express the number of accidents as the accident rate, r(F), times the number of people, N, then we just need to multiply this by the average cost of an accident, which we will call c, and we’ll have our expression for the societal cost of accidents:

A(r(F), N, c) = r(F) · N · c

Let us rewrite the government’s problem, substituting in the expressions for ticket revenue and accident costs:

max_F [ p · N · F − C(e) − r(F) · N · c ]

In order to solve this maximization problem, we just take the first derivative of the expression with respect to speeding fines, F, and set it equal to zero. The fixed costs of enforcement drop out since they are not functions of F, and we are left with:

p · N = (dr/dF) · N · c

So obviously, as I’m sure all of you guessed from the beginning, p times N has to equal dr over dF times N times c. Duh, haha, who didn’t see that coming! Now we’ve solved the government’s maximization problem! [smirk emoji about here]

Our first order conditions actually describe something pretty simple: that the marginal increase in ticket revenue must equal the marginal increase in accident costs given some increase in speeding fines. Ok, so it’s not that simple. Let’s make it easier to deal with.
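Before we do, here is a minimal sketch of that derivative step in Python (using sympy), with the symbols exactly as defined in the text; the enforcement cost is treated as a constant so it drops out, just as described above:

```python
import sympy as sp

# Symbols from the model: ticket probability, population, fine,
# average accident cost, and fixed enforcement cost
p, N, F, c, C_e = sp.symbols('p N F c C_e', positive=True)
r = sp.Function('r')  # accident rate as a function of the fine

# Net social benefit: ticket revenue - enforcement cost - accident costs
welfare = p * N * F - C_e - r(F) * N * c

# First-order condition: derivative with respect to the fine set to zero
foc = sp.Eq(sp.diff(welfare, F), 0)
print(foc)  # the condition simplifies to p*N = (dr/dF)*N*c
```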

We already know how the constants p and c are defined, so the only confusing term left is dr over dF, which is how the accident rate changes when speeding fines change. It doesn't have an intuitive interpretation at this point, because raising fines doesn't actually change the accident rate directly. But we do know that speed, and especially speeding, affects accident rates. Observe the magic of mathematics:

dr/dF = (dr/ds) / (dF/ds)

So, the change in the accident rate with respect to fines (dr over dF) is really just a function of how the accident rate changes as speed changes (dr over ds), adjusted (divided) by how fines change as speed changes (dF over ds). This is much better for us (well, the government) because some smart people have already figured out a lot about how speed affects accident rates, which means that I (or government officials) have much less work to do.

If we plug this back into the equation above, cancel out the Ns, and solve for the change in traffic fines with respect to speed:

dF/ds = (dr/ds) · (c / p)

Now, we have discovered the condition that needs to hold in order for the government to be acting optimally. It says that if the rate at which the fines increase is equal to the rate at which accidents increase multiplied by the cost of accidents, divided by the probability of getting caught, then the government is maximizing net social benefits. As long as this condition is met, the government will be solving its optimization problem.

Luckily for the government, they have direct control over all aspects of speeding fines, including their rate of change. For example, we knew from part 1 that Travis County had set the rate at which the fine increases as speed increases equal to a constant, $10 ($10 more fine for every 1 mph over the speed limit). Now the only question that remains is: how does the accident rate increase as speed increases?

According to an oldish traffic report done by David Solomon at the Department of Commerce[iii], the accident rate is better thought of as a function of how much you deviate from the average speed, not your actual speed. Figure 7 taken from the paper shows it well:

[Figure 7 from the Solomon report: accident involvement rate (per 100-million vehicle miles) by deviation from the average speed]

This is essentially a graph of the shape of dr over ds. It shows how many accidents occurred at different deviations from the average speed (over a given distance, 100-million vehicle miles). The lowest rate seems to be at or slightly above the average speed, and the fitted line increases exponentially in both directions as the deviation from the average speed increases. Notice that the y-axis is in log scale, so the increases are even bigger than they appear visually. In the paper, they claim that this general pattern holds regardless of what the average speed actually is, though the end behavior changes a bit at really low or really high average speeds.

Intuitively, I think this makes sense – if you are going 60 mph in a 40 zone, I’d imagine you’d be way more likely to cause an accident than if you were going 60 mph in a 60 zone. What was initially surprising to me is that the rate of getting into an accident by driving too slowly is actually just as high as, and sometimes higher than, the rate of getting into an accident by driving too fast (more about this later).

If we assume that most people travel at a speed close to the speed limit when they drive, then we can use this information to assess the risk of speeding violations independent of the actual speed. Travis County’s penalties for speeding violations already sort of do this – you pay the same fine for going 5 mph over the speed limit regardless of the actual speed limit. This risk can then be used in our conditions above to calculate the optimal speeding fines.

So, in order for the government to set their speeding fines to the correct levels, they will need to make sure that the fines account for the fact that the accident rate does not increase constantly with speed. Rather than being a straight upward sloping line (as it is in Travis County), the optimal fine schedule should increase more as drivers’ speed-over-the-limit increases. It will look just like how the accident rate changes according to speed (multiplied by the constant c over p):

[Figure: optimal fine schedule (increasing slope) vs. the actual linear Travis County fine schedule]

With these sorts of fines, the fine would increase by more the more a driver sped. 5 mph over the speed limit might be $155, but 10 mph over the limit wouldn’t be $205 – it would be much higher (maybe like $305). And then 20 mph would be waaaaaay higher, maybe like $700. This would accurately reflect the fact that as you deviate more from the average speed, the rate at which you get into accidents increases very quickly. The government, acting optimally and wanting to prevent accidents, should therefore make the rate at which fines increase rise very quickly as well. In this way, the government basically makes people more precisely “pay” for the increased danger to society that they create via their increased expected accident rate.

Notice that, in the theoretical fines in the above graph, the optimal fines are sometimes increasing faster and sometimes increasing slower than the old, constantly increasing fines. This, in addition to the fact that we do not know the slope of drivers’ utility curves U(s), means that we cannot know in advance how the new optimal fines will change the solutions to drivers’ optimization problems. It could cause some people to speed faster while others speed slower.

If we take the government’s optimization a step further, we can actually devise what I will call a “negative speed limit” that accounts for the increased risk of auto accidents at slower-than-average speeds. If the government is really all about optimizing, they should also penalize people who make roads more dangerous by driving extremely slowly – give out speeding tickets for slow speeds (slowing tickets?).

[Figure: fine schedule with a “negative speed limit”, penalizing driving too slowly as well as too fast]

While it probably wouldn’t ever catch on politically, if the government justifies upper speed limits by claiming that it makes roads safer, then it’s no different to set a lower speed limit for the same reason. Since driving at an exact speed is probably too strict an enforcement rule, the government could set a window of safe driving speeds for each road, and then give out speeding tickets to people who drive at speeds outside the safe zone. For example, on the highway the rule could be something like: “Drive between 55 and 75 mph”, and then going too fast or too slow could result in a ticket. There could also be exceptions for slow/heavy traffic situations. In other situations, the government might just have an upper speed limit – “Drive between 5 and 25 mph” is effectively just an upper speed limit.

[Figure: fine schedule with a safe-speed window, with fines applying outside the window on both sides]

I don’t really think that people or police would really be down for this though. I’m also suspicious that maybe what’s really happening is: people who drive quickly are more likely to get into accidents with people who are driving slowly. This would mean that really the danger is fast drivers, and the victims are disproportionately slow drivers, so it looks like driving slowly is dangerous (it is, but because people would be more likely to hit you, not because you “cause” more accidents).

Assuming that the government doesn’t want to optimize with a negative speed limit, we can still use our theoretical model to test whether the current speeding fines in Travis County are optimal or not, which is what I’ll do next.

EMPIRICAL ESTIMATES: SPEEDING FINES IN TRAVIS COUNTY

When people decide whether to speed or not, it ultimately depends on their own preferences (whether they are Punctual Perrys or Lackadaisical Lucys). We now know that changing the base fine will turn some speeders into non-speeders by raising the expected costs of speeding at any speed. But, we can’t figure out the optimal base fine without knowing specifically how all people react to changes in the base fine, and how these changes affect ticket revenue and the costs of accidents.

Instead of speculating, I will simply say that, at this moment, I cannot assess whether Travis County’s base fine of $105 is set at optimal levels or not. But, based on the models that have been developed in these posts, if we assume that $105 is optimal (or even just that it is fixed) we can devise what government optimal speeding fines would look like.

As I showed in my last post, the probability of getting caught speeding is so low that people who are acting optimally and who decide to speed should almost completely ignore speed limits. The expected cost of speeding is so low that as long as they gain any value from speeding (well, more than $0.002 worth) they should increase their speed.

This, to me, suggests that the rate at which speeding fines are increasing – the additional $10 per 1 mph over that you pay if you get caught – is far too low if the actual intention is to stop people from driving dangerously (i.e. reduce the actual speed at which most people speed). But rather than wondering if that is true, we now know the conditions to check whether the government is acting optimally. Taking our government optimizing conditions, let’s just plug in the actual observed values into the expression:

dF/ds = (dr/ds) · (c / p)

We know dF over ds is equal to 10 ($10 extra fine)[iv]; c is average accident costs which, according to this website[v], are at least $3,144; p is the probability of getting caught in a year, 0.206[vi]; and dr over ds is how the accident rate changes as speed changes. If you plug in everything except for dr over ds, you get:

10 = (dr/ds) · (3,144 / 0.206)   →   dr/ds ≈ 0.0007

Since we know that dr over ds is not constant (it changes as speed changes), we already know that these are not perfectly optimal fines. But is this result at all close? Maybe they aren’t perfect, but instead the government just approximated in order to make the fines easier to understand. In that case, given the current fines of Travis County, the rate at which accidents increase would need to be close to about 0.0007 per mph faster a driver speeds.
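As a sanity check, here is the same arithmetic in Python: rearranging dF/ds = (dr/ds) · (c/p) and plugging in the $10-per-mph fine slope, the $3,144 average accident cost, and the 0.206 annual probability of getting caught gives the accident-rate slope that would have to hold for the current fines to be optimal.

```python
# Values from the text
dF_ds = 10.0   # the fine increases $10 per mph over the limit
c = 3144.0     # average (property-damage-only) cost of an accident, in dollars
p = 0.206      # probability of getting caught speeding in a year

# Optimality condition: dF/ds = (dr/ds) * c / p  ->  solve for dr/ds
dr_ds = dF_ds * p / c
print(round(dr_ds, 5))  # ~0.00066, i.e. roughly 0.0007 per mph
```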

I don’t have the actual data used in the Solomon study, so instead I’ll just use this cool ability I have where I point my face at the graphs and use these optic sensors in my head to send a signal to my brain which then comes up with numbers that I can use to calculate close approximations of actual data. The results in table form are shown below:

[Table: approximate number of accidents (per 100-million vehicle miles) at each 5 mph deviation from the average speed, read off Solomon’s Figure 7]

On average, the number of accidents just about doubles (a little bit more than doubles, actually) for every 5 mph more a person is driving away from the average speed. More specifically, the number of accidents increased 108.54% on average for every 5 mph faster and 113.39% on average for every 5 mph slower (I excluded extremely slow deviations).

However, these are changes in the number of accidents over some given distance (100-million vehicle miles), not the change in number of accidents for some given number of drivers. Before we can compare these numbers, we have to get the unit of change to be accident rates per driver, per year (because our probability, p, is chance of a driver getting caught per year).

Luckily, we have the information to do this. We can turn all of the accident rates above into accidents per driver by figuring out how many drivers it would take to drive those 100-million vehicle miles. From the 2009 National Household Travel Survey[vii], Table 3 shows that drivers drive 28.97 miles per day on average. Then, over the course of a year, a single average driver drives 10574.05 miles total. Divide 100-million by this number, and we get that it takes about 9458 drivers to drive 100-million miles in one year. If we divide the average number of accidents at each speed by the number of average drivers it would take to drive that distance, we can approximate the accident rate (for a given number of drivers) at each speed deviation. Below are the results:

[Table: approximate per-driver annual accident rates at each speed deviation, with the change in the rate per 5 mph]
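The unit conversion behind the table above is just a couple of lines; a sketch, using the 28.97 miles-per-driver-per-day figure from the NHTS table cited earlier:

```python
miles_per_day = 28.97                 # average miles per driver per day (NHTS 2009, Table 3)
miles_per_year = miles_per_day * 365  # ~10,574 miles per driver per year
drivers_per_100M_miles = 1e8 / miles_per_year
print(round(miles_per_year, 2), round(drivers_per_100M_miles))  # 10574.05 and ~9,457 drivers (the post rounds to 9,458)

# Example: if Solomon's curve shows X accidents per 100-million vehicle miles at
# some speed deviation, the per-driver annual accident rate is roughly
# X / drivers_per_100M_miles.
```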

For the rest of the analysis, I leave out the places where the accident rate is greater than one, since the approximation obviously doesn’t work well there. If you look at the “Change” column, you can see that once you get past 10 mph, the change in the accident rate is always greater than 5*0.0007 = 0.0035 (which is what the change should be if Travis County were setting the fines optimally i.e. if dr over ds actually equaled 0.0007). If we approximate the changes as a linearly increasing function of speed (OLS), we get that the accident rate increases by about 0.0113 for each mph over the average. Notice that this is much higher than 0.0007 (16 times higher, actually). The plot below should help you visualize how close these approximations are to the optimal.

[Figure: actual accident-rate changes, the linear (OLS) approximation, and the accident-rate changes implied by Travis County’s fines]

The accident-rate changes implied by Travis County’s fines (if those fines were optimal) are close to zero, which, as you can see, is far from both the actual accident-rate changes and the crude linear approximation (except maybe at low-speed deviations). With this evidence, I think it is safe to say that the Travis County speeding fines are not optimal. For many speeding violations, the fines will be too low to account for the increased risk of accidents associated with speeding.
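For anyone who wants to redo the crude linear approximation (the 0.0113 figure), the fit is a one-liner once the per-driver rates are in hand. The arrays below are placeholders standing in for the values read off Solomon’s chart and converted as above, not the actual data:

```python
import numpy as np

# Placeholder inputs: speed deviations (mph over the average) and the
# corresponding per-driver annual accident rates from the table above.
speed_deviation = np.array([5, 10, 15, 20, 25])           # hypothetical grid
accident_rate = np.array([0.01, 0.02, 0.05, 0.12, 0.30])  # illustrative values only

# OLS slope of accident rate on speed deviation: a linear approximation of dr/ds
slope, intercept = np.polyfit(speed_deviation, accident_rate, 1)
print(slope)  # on the real (eyeballed) data, the post's estimate is ~0.0113 per mph
```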

So what should the fines be? Like I said before, I don’t know the optimal base fine, but if we want to optimally account for the increased risk of those who do speed we can describe how the fines should change as your speed increases. We just use the optimizing conditions from the theoretical model:

dF/ds = (dr/ds) · (c / p)

We have estimates for c and p, and we can use the “Change” column in the previous table as our dr over ds in the equation. In the table below, I have calculated the optimal changes in speeding fines and the resulting fine schedule, assuming that the base fine of $105 remains the same:

[Table: optimal changes in the fine per 5 mph and the resulting optimal fine schedule, compared with the actual Travis County fines]

Weirdly enough, the actual speeding ticket cost at 15 mph is about where the optimal fines and the actual fines intersect (highlighted above). Actual speeding ticket costs for people going more than 15 mph over the limit, however, are much lower than the optimal fines. This is a result of Travis County’s (implicit) assumption that driving 5 mph faster always increases the risk of accident by the same amount. But, as we’ve seen from the data, the increase in accident rate depends on how fast you are already speeding, and it increases very quickly. For example, changing your speed from 20 mph over to 25 mph over almost triples your risk of getting into an accident, so the optimal fine for speeding at 25 mph over is almost triple the fine at 20 mph. Focusing on only the positive speed deviations, we can compare the optimal fines to the actual Travis County fines:

[Figure: optimal vs. actual Travis County speeding fines for positive speed deviations]

While the Travis County fines are (relatively) close approximations at low speed deviations, they are not at all close to the optimal fines at high-speed deviations (anything over 15 mph). Why does this matter? Because it means that Travis County is not accounting for the danger of speeders to society at exactly the speeds where speeders are most likely to actually cause accidents. It also means that it is overcharging speeders at low-speed deviations, where speeding is least likely to cause damage.
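For reference, here is a sketch of how a schedule like the one in the table is built from the model’s condition, assuming the $105 base fine. The accident rates below are placeholders that roughly double every 5 mph (the real schedule should use the estimated per-driver rate changes directly):

```python
# Sketch: optimal fine at speed s = 105 + cumulative sum of (change in accident rate) * c / p
c = 3144.0   # average accident cost, dollars
p = 0.206    # annual probability of getting caught

# Illustrative per-driver accident rates, roughly doubling every 5 mph over the limit
speeds = [5, 10, 15, 20, 25, 30]
rates = [0.01, 0.02, 0.04, 0.08, 0.16, 0.32]   # placeholder values, not the estimated data

fine = 105.0      # base fine
prev_rate = 0.0   # accident rate at (roughly) the average speed
for s, r in zip(speeds, rates):
    fine += (r - prev_rate) * c / p   # marginal fine = change in accident rate * c / p
    prev_rate = r
    print(f"{s:>2} mph over: ${fine:,.0f}")
```

Because each increment is proportional to the change in the accident rate, the schedule compounds quickly once the rate starts doubling, which is exactly the pattern described above.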

While it may technically be optimal to set the fine for going 30 mph over the limit at more than $6,000, it may not be possible (politically). But who really needs to speed by 30 mph? Shouldn’t we want to deter that person from doing that? A $6,000 fine would certainly accomplish that.

One last thing to consider: it is not clear that Travis County is actually trying to act optimally in the way we described. It might be that Travis County is trying to maximize revenue, rather than maximize revenue minus societal costs of accidents. This would have implications for what the government thinks is “optimal”, and it might mean that Travis County would want to keep high-speed fines lower so that more people speed and get caught, leading to more ticket revenue. There is also literature to suggest that local governments actually use speeding tickets as a way to make up for lost tax money during recessions[viii]. I mentioned these alternative objectives just to point out that other models might better describe how Travis County set its speeding fines. Or it might just be that the fines were made up off the top of someone’s head (a likely scenario, I think).

Besides helping the government, I also hope this helps drivers who are considering whether speeding is worth it. I don’t believe people include the cost of getting into an accident when they choose how speedy they should speed, and this is probably not a big deal for them usually – the average driver gets into an accident every 18 years[ix], so the probability is really low per trip (about 0.00005). But, the risk of having an accident increases extremely rapidly as you speed more and more. At low speeds you’re probably ok not including expected accident costs, but at the upper end you might want to consider the increased risk of crashing into someone.

_______________________________________________________________________

[i] http://onlinemanuals.txdot.gov/txdotmanuals/szn/szn.pdf

[ii]https://research.stlouisfed.org/wp/2006/2006-048.pdf

[iii]http://safety.fhwa.dot.gov/speedmgt/ref_mats/fhwasa1304/Resources3/40%20-%20Accidents%20on%20Main%20Rural%20Highways%20Related%20to%20Speed,%20Driver,%20and%20Vehicle.pdf

[iv]https://www.traviscountytx.gov/justices-of-peace/jp1/court-costs

[v]http://www.rmiia.org/auto/traffic_safety/Cost_of_crashes.asp (note: this is the cost of automobile damage from an accident, and doesn’t include the costs of personal injuries or death. I didn’t include these costs because of the many complicated factors that go into the process of estimating the true “value” of a life and of injuries. These cost estimates will be “low” then, in the sense that they will tell us the lower bound estimates of the optimal fines.)

[vi]https://www.thezebra.com/insurance-news/315/speeding-ticket-facts/

[vii]http://nhts.ornl.gov/2009/pub/stt.pdf

[viii]https://research.stlouisfed.org/wp/2006/2006-048.pdf

[ix]http://www.foxbusiness.com/personal-finance/2011/06/17/heres-how-many-car-accidents-youll-have/

________________________________________________________________________

PDF Version

Data


Thought: At What Speed Should You Speed?

4/16/15

By Kevin DeLuca

ThoughtBurner

Opportunity Cost of Reading this ThoughtBurner post: $2.28 – about 10.4 minutes

While I’m sure that many of you readers are outstanding citizens who would never ever even dream about ever breaking the law ever, I know that some of you are natural-born rebels and straight‑up gangsters that look at the list of minor traffic violations[i] and say, “Nah, Imma do me.” Speeding ensues.

Most people will not drive faster than they feel comfortable driving, but the prevalence of speeding tickets suggests that oftentimes people’s maximum comfortable driving speed is above the set speed limits. According to the infographic on this webpage[ii], 20.6% of all drivers will get a speeding ticket over the course of a year, costing them an average of $152 per ticket. About 41 million people get speeding tickets each year, cumulatively bringing in over $6 billion in revenue from fines.

If people are making an appropriate cost-benefit analysis when they decide to speed, then these facts mean that the time saved from speeding over the course of a year is worth at least $6 billion dollars. But I don’t think that most people are actually doing any calculations before they make their decision to speed, so there is a large potential for inefficiency – I suspect people are speeding in a non-optimal way.

There are costs and benefits to speeding, and the prevalence and price of speeding tickets are non-trivial, so ThoughtBurner is here to help you out. In Speeding Part 1, I will attempt to answer the question: at what speed over the speed limit should you, the driver, speed? In Speeding Part 2, I will help the government out by helping them decide how expensive a speeding ticket really should be (sorry everyone).

THE DRIVER’S PROBLEM

So, you want to drive somewhere and you’re wondering if speeding there to save some time is worth risking getting a speeding ticket. How do you decide? Before you can make an informed decision, you need to know a few things:

  • The time you will save by speeding
  • The (subjective) value of the time you will save by speeding
  • The probability that you will get caught speeding
  • The cost you will have to pay if you get caught speeding

The time you will save by speeding is fairly easy to calculate: the distance divided by your original (non-speeding) speed, minus the distance divided by your (speeding) speed. The trick is, for each mile per hour over the speed limit you travel, you don’t save the same amount of time. An easy example: say it takes you 10 minutes to get somewhere going 20 mph. If you go 40 mph, you’ll get there twice as fast – in 5 minutes, which means you’ll save 5 minutes of your time. If you go 60 mph, you’ll get there three times as fast – in 3.33 repeating (of course) minutes, which means you’ll save 6.66 repeating (of course) minutes of your time. The first 20 mph over the limit saves you 5 minutes, but the next 20 mph over the limit only saves you an additional 1.67 minutes. If you do some calculations, you can see that the higher the speed limit, the less valuable speeding is (in terms of time saved). But in general, we can show the relationship graphically, holding distance traveled constant:

[Figure 1: Time saved vs. speed over the limit, holding distance constant]
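A tiny sketch of the time-saved calculation described above, reproducing the example from the text (the distance and speeds are whatever applies to your own trip):

```python
def time_saved_minutes(distance_miles, limit_mph, mph_over):
    """Minutes saved by driving mph_over above the limit for a given distance."""
    base = distance_miles / limit_mph * 60                 # trip time at the speed limit
    faster = distance_miles / (limit_mph + mph_over) * 60  # trip time while speeding
    return base - faster

# The example from the text: a trip that takes 10 minutes at 20 mph (~3.33 miles)
trip_miles = 10 / 60 * 20
print(round(time_saved_minutes(trip_miles, 20, 20), 2))  # 20 mph over -> 5.0 minutes saved
print(round(time_saved_minutes(trip_miles, 20, 40), 2))  # 40 mph over -> ~6.67 minutes saved
```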

The subjective value of the time you will save by speeding is a bit trickier. It probably depends on a lot of different things – like how busy of a person you are, how late you are already running, how much you subjectively don’t like driving, etc. – and so I can’t know the actual level of your subjective value of the time you will save. But, I think it is safe to assume that the subjective value of time saved, or the utility of time saved – U(s) – is diminishing. For example, imagine that you are running 5 minutes late and you are deciding whether to speed to save 5 minutes or not speed (save 0 minutes). Those 5 minutes could be very valuable. Now imagine that you are 5 minutes late and you are deciding whether to speed to save 35 minutes or 30 minutes (yeah, you’re going, like, super-fast). When you are already saving a lot of time from speeding, e.g. 30 minutes, then saving an additional 5 minutes isn’t really worth much. Even if the utility from time saved wasn’t diminishing, actual time saved is diminishing as you go faster and faster, so the value of time saved will be diminishing as well. We can represent this graphically too:

[Figure 2: Utility of time saved, U(s), vs. speed over the limit]

The probability of getting caught is another very tricky number to estimate. I spent a lot of time thinking about more sophisticated models, where the probability of getting caught is a function of the distance you are traveling, the speed at which you are going, and other factors, but I think that for our purposes the best thing for us to do is try to estimate the probability of getting caught per trip. I will provide actual estimates later, but for now let us call this constant probability of getting caught “p”. We can expect, given that only 1/5th of drivers actually get speeding tickets every year, that p is probably small.

Last, we need to know the penalty for getting caught. I will consider the case where there is a base fee for speeding, plus a fine that increases depending on how fast over the speed limit you were traveling, as is the case in Travis County (Austin, Texas)[iii]. This is not actually super common – I will consider alternate fine schedules later, but this particular case leads to an easier solution. The cost of the speeding fine, then, is linearly related to how fast you are speeding:

[Figure 3: Speeding fine vs. speed over the limit]

If we multiply the graph above by p, we get a graph of the expected costs of speeding: the amount you have to pay if you get caught, weighted by the chance that you actually get caught.

[Figure 4: Expected cost of speeding, E[C(s)], vs. speed over the limit]

We can combine all this information to solve what I will call the driver’s maximization problem, which is: maximize the benefits from speeding minus the expected costs of speeding. More precisely, the problem is:

max_s [ U(s) − E[C(s)] ]

Where U(s) is the driver’s subjective value of time saved (Figure 2) and E[C(s)] is the expected cost of speeding (Figure 4). Notice that the driver is choosing at what speed to drive which, in this model, will determine the value of both U(s) and C(s).

I will now consider two possible types of drivers: Punctual Perry and Lackadaisical Lucy.

  • Case 1: Punctual Perry

Punctual Perry doesn’t like missing out on anything. So, he always plans ahead and makes sure to leave early whenever he has to drive anywhere. Speeding and saving time doesn’t really give him much value, since he’s never really rushed for time. When Punctual Perry plots his driver maximization problem, it looks like this:

[Figure: Punctual Perry’s driver maximization problem, with U(s) below E[C(s)] at every speed]

Notice that Punctual Perry never subjectively values his time saved more than the value of the expected cost of a speeding ticket for any given speed. This is because Punctual Perry is true to his name (punctual); he doesn’t need to save time since he’s always on time, so saving more time isn’t very valuable to him. He is more worried about the expected cost of the hypothetical ticket than saving a few extra minutes.

We don’t even have to do any math (yay!) to see what the solution to Punctual Perry’s driver maximization is: don’t speed. The expected costs are always higher than his benefits, so any amount of speeding leads to negative values for the driver maximization problem. If he doesn’t speed, there are no benefits but also no chance of getting caught speeding, and since zero value is better than negative value, Punctual Perry just never speeds.

  • Case 2: Lackadaisical Lucy

Lackadaisical Lucy is a more interesting case. She is typically late to things, which means that time saved by driving a little faster is more valuable to her compared to time saved by Punctual Perry. When Lackadaisical Lucy plots her curves for the driver maximization problem, it looks like this:

[Figure: Lackadaisical Lucy’s driver maximization problem, with U(s) above E[C(s)] over a range of speeds]

For Lackadaisical Lucy, there are speeds at which the value of the time saved is greater than the expected costs at that speed. If Lackadaisical Lucy chooses the right speed, she can maximize the benefits from speeding conditional on expected costs. But what speed is the right speed? It is the speed at which the distance between U(s) and E[C(s)] is the biggest.

Refer back to the driver’s maximization problem. In order to maximize, Lackadaisical Lucy can simply take the derivative of the driver’s problem with respect to speed and set it equal to zero. Doing so results in:

dU/ds = dE[C(s)]/ds

In words, this means that Lackadaisical Lucy should choose a speed, s*, where the additional benefit of speeding a little more is equal to the additional cost of speeding a little bit more (marginal benefit equals marginal costs). Graphically, this speed is shown on the graph at the point where the slope of Lackadaisical Lucy’s utility curve is equal to the slope of the expected cost curve:

[Figure: Lackadaisical Lucy’s optimal speed s*, where the slopes of U(s) and E[C(s)] are equal]

The speed where the slopes are equal is the optimal speed that Lackadaisical Lucy should drive in order to maximize her utility. By choosing speed s*, Lackadaisical Lucy is maximizing the difference between the benefits of speeding and the expected cost of speeding, for some given distance. This is good, because Lackadaisical Lucy is a rational human being who wants to maximize her utility.

In summary, this model predicts that some people (or all people in some circumstances) will decide not to speed when the benefits from speeding never exceed the expected costs (Punctual Perry). And people who decide to speed (Lackadaisical Lucy) should drive at a speed where the marginal expected cost of speeding is equal to the marginal subjective benefit of speeding.

So, if you find yourself being a Punctual Perry, then not speeding is the right choice. But what if you are being a Lackadaisical Lucy? How do you know how fast to speed? What is the actual marginal cost of speeding?

EMPIRICAL ESTIMATES

Using the simple model developed above, I will now provide some empirical estimates to help you all solve your own driver maximization problems.

I’m guessing that, in real life, many people are Lackadaisical Lucys in the sense that there is a point where they value saving time more than the expected cost of speeding, though not necessarily just because they are always late. For example, they could just hate driving a lot so that speeding to drive less is worth the risk. Regardless of their reason, these types of drivers can use the model developed above to determine their own individualized solution to the driver maximization problem whenever they want to go somewhere.

The subjective value of time saved is all about you guys, and it could vary pretty widely across individuals (Punctual Perrys vs. Lackadaisical Lucys). Also, remember that the amount of time saved – and therefore your utility gained from it – depends on the distance you are traveling. But the probability of getting caught and the costs of traffic fines faced by everyone are the same, so I’ll first focus my attention on providing some guesses of these values.

Start with the statistic from above that says 20.6% of all drivers get a speeding ticket each year. The most recent estimates from the 2009 National Household Travel Survey[iv] put the average daily number of vehicle trips per driver at about 3 a day (see Table 3). This means that each driver makes (3*365) 1095 car trips every year. There is a 20.6% chance that at least one of those trips will result in a speeding ticket, which also means that there is a 79.4% chance that none of those trips will result in a speeding ticket. If we let p equal the probability of getting a speeding ticket per trip, then it follows that (1 – p) is the probability of not getting a speeding ticket on any single trip. Then:

(1 − p)^1095 = 0.794

The left hand side is the probability of not getting a ticket 1095 trips in a row, and the right hand side is the observed proportion of people who don’t get a speeding ticket each year. Solving for p gives:

p = 1 − 0.794^(1/1095) ≈ 0.0002

That is, the probability of getting a ticket per car ride is one minus the 1095th root of 0.794, which comes out to be about 0.0002. While this may seem really low, it is.
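Here is the same back-of-the-envelope calculation in Python, using the 20.6% annual ticket rate and the ~3 trips per day from the sources above, if you want to check it:

```python
trips_per_year = 3 * 365       # ~1095 trips per driver per year
p_no_ticket_year = 1 - 0.206   # 79.4% of drivers get no ticket in a given year

# (1 - p)^1095 = 0.794  ->  p = 1 - 0.794^(1/1095)
p_per_trip = 1 - p_no_ticket_year ** (1 / trips_per_year)
print(round(p_per_trip, 6))  # ~0.000211, i.e. roughly 0.0002 per trip
```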

People drive a lot, and considering that drivers also probably have strategies for avoiding speeding tickets (e.g. don’t speed on certain roads where cops hang out), it is not that surprising to me that the probability of getting caught is so low. You actually have to be pretty unlucky to get a speeding ticket.

Using the fine schedule from Travis County, we know exactly how much the speeding ticket will cost you at any given speed over the speed limit. It is a $105 base fine, plus $10 per mph over the speed limit you get caught speeding[v]. Mathematically, this means that:

C(s) = 105 + 10 · s

Adjust this by our newfound predicted probability p, and you get:

E[C(s)] = p · (105 + 10 · s) ≈ 0.0002 · (105 + 10 · s)

Remember that the maximizing condition is when marginal benefits are equal to marginal costs:

dU/ds = dE[C(s)]/ds

Taking the derivative of E[C(s)] with respect to s:

dE[C(s)]/ds = 10 · p ≈ 0.002

And substituting in gives:

dU/ds = 0.002

This means that, if you speed, you should speed at s* where the marginal benefit of speeding is equal to 2 tenths of a penny. Which is almost nothing. So, your optimizing speed will be very close to the speed at which you will no longer gain any benefits from increasing your speed. For example, based on my own previous estimates of opportunity costs[vi], the value of $0.002 is approximately equal to the value of 0.54 seconds of leisure time.

Imagine how happy you would be if I told you that you would have an extra 0.54 seconds today to do whatever you want! If you would be at least that happy by speeding a little faster, then you should do it.
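For completeness, here is the expected-cost side of that calculation as a small sketch, using the Travis County fine schedule and the per-trip probability estimated above:

```python
p_per_trip = 0.0002        # probability of a ticket on any given trip (from above)
base_fine, per_mph = 105, 10

def expected_cost(mph_over):
    """Expected speeding-ticket cost for one trip at mph_over above the limit."""
    return p_per_trip * (base_fine + per_mph * mph_over)

# Marginal expected cost of one more mph: constant under this fine schedule
marginal_cost = p_per_trip * per_mph
print(round(expected_cost(10), 3), marginal_cost)  # ~$0.041 expected cost at 10 over, ~$0.002 per extra mph
```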

The implication of this is that people acting optimally (in Travis County) should basically just completely disregard speed limits and drive at the fastest speed they feel comfortable driving – well, marginally below it. Only go 4.9999 mph over the limit vs. 5 mph over the limit.

The results initially surprised me. People who aren’t making any optimizing calculations are probably already getting really close to the speed the model says they should choose. Since the probability of getting a speeding ticket per trip is so low, the expected costs are also very low and the marginal cost of increasing your speed is even lower (close to zero). So just keep increasing your speed until the marginal benefit is close to zero as well. Graphically, it would look something like this:

[Figure: the driver’s problem with the estimated expected cost curve, which is nearly flat]

Basically, yeah you were most likely already doing it right. Speed at the max speed you are comfortable driving, because the probability of getting caught is so low per trip that the marginal expected cost of speeding is less than half a penny. You were already acting optimally! Wow, brilliant.

Even if you change some of the assumptions in ways that would lead to higher estimates of p, the results are essentially the same. For example, implicit in our estimate of p above is that people speed on every trip they make throughout the year and so have a non-zero chance of getting a speeding ticket on each one. However, it seems likely that this is not the case – people don’t speed on every single trip. So, let’s be extremely generous and assume that people only speed one day’s worth of trips (3 trips) per month. Then we would have:

p = 1 − 0.794^(1/36) ≈ 0.0064

And, substituting the new p into our optimizing condition gives:

dU/ds = 10 · 0.0064 ≈ 0.064

Which means that you should speed until the marginal value of speeding is worth only $0.064 in time saved which, again based on my own previous estimates of opportunity costs, is about equal to the value of 17.5 seconds of leisure time. Again, imagine how happy you would be if I told you that you would have an extra 17.5 seconds today to do whatever you want! If speeding a little faster makes you at least that happy, you should do it (with the above specification).

In general, the strategy of just speeding at whatever speed you want is probably a very close approximation to the optimizing strategy. I originally thought I would end this post by telling everyone to speed less, but instead I think that in the spirit of ThoughtBurner’s mission I have to encourage you all to ignore speed limits. What have I done.

THE GOVERNMENT’S PROBLEM

The purpose of speed limits and speeding fines is to deter people from speeding. In this model, drivers acting optimally will basically ignore speed limits. This makes me wonder whether the City of Austin has set traffic fines high enough to deter speeding. For example, other states have speeding fines of up to $1,000[vii]. Perhaps there are other incentives that determine the schedule of speeding fines, such as revenue collection.

So, imagine now that you are a city government planner and you are tasked with determining how expensive speeding fines will be. How do you know you have chosen the right penalty amount? If drivers are basically ignoring speed limits right now because their expected costs are so low, then in order to get them to stop speeding either you need to increase the probability of getting a speeding ticket (hard way) or increase speeding fines (easy way). This will be the problem I will solve in Speeding – Part 2.

________________________________________________________________________________

[i] https://www.tdcj.state.tx.us/divisions/hr/hr-policy/pd-27a.pdf

[ii] https://www.thezebra.com/insurance-news/315/speeding-ticket-facts/

[iii] https://www.traviscountytx.gov/justices-of-peace/jp1/court-costs

[iv] http://nhts.ornl.gov/2009/pub/stt.pdf

[v] https://www.traviscountytx.gov/justices-of-peace/jp1/court-costs

[vi] https://thoughtburner.wordpress.com/2015/02/26/thought-the-value-of-reading-a-blog-post/

[vii] https://www.thezebra.com/insurance-news/315/speeding-ticket-facts/

________________________________________________________________________________

 PDF Version

Thought: Countries With Names That Sound Really Free Aren’t Actually Free

4/2/2015

By Boyd Garriott

ThoughtBurner

Opportunity Cost of Reading this ThoughtBurner post: $0.73 – about 3.3 minutes

What are some buzzwords to indicate that a country is free? A democracy? A government by the people? How about a republic?  If these are indeed the case, then it would seem that the freest country on the planet would have to be none other than the Democratic People’s Republic of Korea. For those that are less geographically inclined, that’s North Korea. For those that are even less geographically inclined, that’s the bad Korea.

So what gives; why does the most unfree country have such a free-sounding name? North Korea isn’t alone in being a country notoriously bad on civil rights with a very “free-sounding” official name. Think Democratic Republic of the Congo or People’s Republic of China. Could it be that countries are compensating for their definitive lack of freedom with a “free-sounding” name? A normal person would probably take this at face value and have a laugh at the irony, but we’re not normal people at ThoughtBurner, so I’ve done some statistical analysis to figure out whether countries with free-sounding names are actually more oppressive.

First off, to figure out how free or oppressive a country is, I used a dataset[i] from Freedom House of around 200 countries that are rated based on their political rights and civil liberties. I downloaded the 2015 dataset, aggregated those two numbers and then converted the result to a scale between 0 and 100 with 0 being not free at all (think North Korea) and 100 being totally free (think USA). This allows us to calculate percentage changes in freedom.

Second, to figure out how “free-sounding” a country name is, I used the formal names of countries (as opposed to the short names; think Democratic People’s Republic of Korea vs. North Korea) which are listed on Wikipedia[ii]. I then gave countries a point for every term they used that seemed to endorse freedom: any variations of “Republic”, “Democracy”, or “People’s”. A score of 0 (Canada) indicates that a country’s name makes no endorsement of freedom while a score of 3 (back to our friends in North Korea) indicates that a country’s name sounds like the preamble to the Constitution.

I then regressed these two numbers, and I found some interesting results. For every “freedom-endorsing” term in a country’s title, its citizens can expect to be 14% less free at a statistically significant level. To give you an idea of what that means, check out this chart below:

[Figure: average freedom score by number of freedom descriptors in the country’s official name]

That’s right; as a country gets freer in name, it gets less free in reality. The average freedom for a country without any free-sounding descriptors is 69%, better than the world average of 61%. However, the average freedom for a country with three free-sounding descriptors – including the Democratic People’s Republic of Korea – is an appalling 11%. As a matter of fact, every country with that many free-sounding descriptors is classified as “not free” by Freedom House.

Even countries with just “Republic” in the name are, on average, 10% less free than countries without free-sounding descriptors. By the time that jumps to something like “Democratic Republic”, we’re talking 17% less freedom!

If that wasn’t ironic enough, consider this: countries with any variation of “king” or “kingdom” in their name are actually, on average, more free than countries with any freedom-sounding descriptor. That’s right: countries that explicitly endorse monarchy in their names are freer than those that explicitly endorse freedom. Granted, many countries with “king” in their name are modern European democracies like the United Kingdom and the Kingdom of Denmark, so that explains some of the irony.

There are three important lessons to take from this.

  1. North Korea is the bad Korea (again, this one is more of a reminder for the less geographically inclined).
  2. Countries that sound really free usually aren’t that free.
  3. Yep, countries compensate for being terrible places by having nice-sounding names.

Wonky Stuff:

Regression of Freedom Score on Number of Freedom Descriptors

[Table: regression of freedom score on number of freedom descriptors]

Important Statistics

[Table: summary statistics]

Further Explanation of the Methodology

I settled on the freedom descriptors I used after quite a bit of thought. First off, the words “Republic”, “Democracy”, and “People’s” are pretty bold terms that describe a form of government that represents the needs of free citizens. It should also be noted that I used the search string “Democr” to get any name that described itself as “Democratic” as well. Similarly, I searched the string “King”, which also included “Kingdom”.

Next, I actually put quite a bit of thought into choosing my freedom descriptors, so here’s some explanation on other contenders that I didn’t count. “United” seems to be an obvious contender, but it’s not a word that actually describes a free society; citizens under the rule of a dictator are “united”, but they certainly aren’t free. “State” also came up pretty often in the dataset, but a state is just a sovereign territory that doesn’t make any claim as to the type of government it employs. On similar grounds, I rejected using “Principality” or “Commonwealth”. “Federal” came up, but that describes multiple states under a central entity – nothing about freedom.

I used countries’ official English names because… well… I don’t speak like a hundred languages.
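For anyone curious how the scoring and regression could be reproduced, here is a sketch. The country names and freedom scores below are made-up placeholders (the real inputs are the Freedom House and Wikipedia lists cited above), and the regression mirrors the one reported in the “Wonky Stuff” section:

```python
import re
import numpy as np

# Hypothetical examples only; the real inputs are the full Freedom House / Wikipedia lists
countries = {
    "Canada": 100,
    "People's Republic of China": 16,
    "Democratic People's Republic of Korea": 3,
}

def freedom_descriptors(official_name):
    """Count 'freedom-endorsing' terms: Republic, Democr*, People's."""
    return len(re.findall(r"Republic|Democr|People's", official_name))

names = list(countries)
x = np.array([freedom_descriptors(n) for n in names])  # descriptors per name
y = np.array([countries[n] for n in names])            # freedom score, 0-100

slope, intercept = np.polyfit(x, y, 1)
print(slope)  # on the full dataset, the post's estimate is about -14 points per descriptor
```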

Lastly, to be clear about the graph, the labels on the bottom are illustrative but still accurate. The true labels would be “0 Freedom Descriptors”, “1 Freedom Descriptor”, etc. However, I took the liberty of putting common country titles that illustrate the number of “Freedom Descriptors” they correspond to. For example, “Democratic Republic” is usually what a country name with two freedom descriptors looks like, but there are exceptions such as the People’s Republic of China, which also has two freedom descriptors but doesn’t fit the label neatly. In the end, however, I think it presents the information fairly.

Boyd Garriott is ThoughtBurner’s Chief Contributor. Boyd received his undergraduate degree in economics from the University of Texas at San Antonio. He currently lives in Washington D.C. and will be attending Harvard Law School in the fall.

________________________________________________________________________________

[i] https://freedomhouse.org/report-types/freedom-world#.VRQhL_nF-So

[ii] http://en.wikipedia.org/wiki/List_of_sovereign_states

________________________________________________________________________________

Data

PDF Version

Thought: Cinderella’s Incredibly Small Foot

3/26/15

By Kevin DeLuca

ThoughtBurner

Opportunity Cost of Reading this ThoughtBurner post: $1.58 – about 7.2 minutes

I went to see the live-action remake of the Disney classic Cinderella the other day, and as I watched the prince’s men search all across the countryside for poor old Cinderella, it occurred to me that Disney characters, and especially the Disney royalty, tend to be a bit inefficient. Cinderella’s fine prince was the perfect example of sub-optimal Disney behavior. He wanted to find Cinderella after the royal ball, so he ordered his men to take the left-behind glass slipper and try to put it on every single maiden in the kingdom. This is probably the most inefficient way to find someone that I have ever heard of. Also, this plan only would have worked under very specific, and fairly unlikely, conditions, which I will explain below.

Of course, it is up to ThoughtBurner to advise the creative Disney “imagineers” on ways for them to not only provide the impressionable children of the world with lessons about love, life, and happiness, but also to teach the value of efficiency in the most economic sense. And yes, for those of you who were unaware, implicit in ThoughtBurner’s mission to improve efficiency and happiness in people’s everyday lives lies the responsibility of also improving the efficiency and happiness in imaginary people’s everyday lives.

Cinderella’s Incredibly Small Foot

The prince’s plan to find Cinderella was, as I mentioned above, to take the glass slipper around the countryside and try it on every woman in the kingdom. Whoever the slipper fit would be the woman that the prince would marry. The prince is making a huge assumption here: that no other woman in the kingdom has the same foot size as Cinderella. In the movie, it shows the prince’s men trying the glass slipper on many different women, and it never seems to fit! And, for most of the women the shoe is too small – this implies that Cinderella’s foot is very small. How small would Cinderella’s foot need to be in order for the prince’s assumption to be correct, you ask?

First, we need to know the distribution of women’s foot sizes. This is actually kind of hard to find, and I could only really find data on the distribution of Japanese feet online[i]. Because Cinderella isn’t Japanese, I instead found stats on female shoe sales[ii] – the data includes the quantity of shoes of each size sold in the United States in 1998. If we assume that women buy shoes that fit their feet, then the distribution of the sizes of female shoes sold should reflect the distribution of women’s foot sizes fairly accurately. Below is the distribution of female shoes sold by size:

[Figure: distribution of female shoe sales by size, all sizes]

Female shoe sales by size seem to be normally distributed, which is great for us because we know a lot about the properties of things that are normally distributed (it makes using statistics really easy). As you may have noticed, the half sizes are all systematically lower than their whole-size counterparts, which makes the distribution less nice. I’m guessing that this has a behavioral explanation (people like to buy whole sizes more than half sizes, for some reason? Update: after I published this I heard from friends that some women’s shoe stores/brands don’t sell half sizes, which is a more likely explanation for the pattern we see). If you plot only the sales of whole sizes, it looks even more like a normal distribution:

[Figure: distribution of female shoe sales by size, whole sizes only]

Alright so, now that we know the distribution of foot sizes, we can figure out how small (or big) Cinderella’s feet needed to be in order for the prince’s plan to work. Using the same numbers that I used to create the graphs (see data link at end), I calculated the average female foot size to be 8.076 with a standard deviation of 1.468. (These are women’s shoe sizes, not inches).

Now, before we can calculate the size of Cinderella’s foot, we also need to know how many people were in the prince’s kingdom. Let’s first make a few more assumptions that will favor the prince’s plan. First, assume that the prince does not actually try the slipper on every woman in the nation; instead, he stops as soon as he finds Cinderella. Also assume that the first and only woman who fits the glass slipper is actually Cinderella. The prince doesn’t know anything about how early or late in the process of shoe-trying-on Cinderella will be fitted, so we’ll assume that every woman has an equal chance of being the next woman to try the shoe on. This means that the prince is expected to try to put the shoe on half of the women in the kingdom before he finds Cinderella.

Perhaps not surprisingly, there doesn’t seem to be much data on the population size or demographic characteristics of magical fairytale kingdoms. I will use the next best substitute, the imaginary medieval worlds of gamers, on which there apparently has been research done on the populations of towns, villages, and cities or kingdoms. Thanks to S. John Ross and his page with some guesses of the population density of kingdoms (which seem to be based on real cities[iii]), we can provide a range of estimates of the size of the kingdom that Cinderella lived in, and according to that we can see how small her foot needed to be in order for the prince’s plan to work.

Remember that about half of the population will be women, and that the prince only has to try the slipper on half of these women before he is expected to find Cinderella. In the chart below, I calculated how many women need to go through the slipper test before Cinderella would be found. Next, I found which percentile Cinderella’s shoe size would need to be in order for her to be the only person with that shoe size in that kingdom. Next, I converted the percentile to a z-score, and last I calculated how small (or big) her shoe size would have needed to be by subtracting (or adding) z-score-many standard deviations to the average shoe size.

[Chart: how small Cinderella’s foot would need to be, by kingdom population]

As you can see from the chart, even with a ‘kingdom’ as small as 300 people, Cinderella’s shoe size would have to be in the bottom 1.3% of the population, and she would be a size 4.82. To put this into perspective, a woman’s size 4 is barely over 8 inches[iv], and most vendors don’t sell below a woman’s shoe size 4. If the kingdom had a population of around 50,000 (about one-sixteenth the size of Austin[v], maybe not unreasonable for a “kingdom”), Cinderella would have to have a shoe size in the bottom 0.007% of the population, which corresponds to a size 2.5 (which I think means she would have to buy kids’ shoes). If Cinderella had lived in New York City, with a population of 8.4 million[vi], she would need a shoe size of 0.88. While it’s possible that Cinderella just had really, really small feet and the prince just had a really, really small kingdom, it seems highly unlikely that the prince’s plan would have worked. On top of that, the prince’s men are also going to be tied up traveling the kingdom. Assuming that these men are compensated for their work by the prince, I’m sure that the people of… wherever… would not be happy to know their taxpayer money was being wasted on such an inefficient manhunt.
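
(If you want to check my math, here’s a minimal sketch of the calculation behind the chart, using the mean and standard deviation from the shoe data above and the population assumptions described earlier.)

```python
from scipy.stats import norm

MEAN_SIZE, SD_SIZE = 8.076, 1.468        # from the shoe sales data above

def cinderella_shoe_size(kingdom_population):
    women = kingdom_population / 2       # assume half the kingdom is women
    women_tested = women / 2             # the prince expects to test half of them before finding her
    percentile = 1 / women_tested        # she must be the only woman this far out in the tail
    z = norm.ppf(percentile)             # convert percentile to a z-score (negative for small feet)
    return MEAN_SIZE + z * SD_SIZE

for population in (300, 50_000, 8_400_000):
    print(population, round(cinderella_shoe_size(population), 2))
# roughly 4.8, 2.5, and 0.9 -- matching the chart above
```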

I will now prescribe a much simpler way to find Cinderella, and show that the prince could have found her much more quickly at only a fraction of the cost of his original method. Obviously, the prince should not try the shoe on every woman. All I suggest is that he test only the blonde women in his kingdom.

(Note: In the new movie, Cinderella always has blonde hair. I was looking through pictures of the old Disney film online, and Cinderella’s hair color seems to change between blonde, dirty blonde, orange, and brown, and I can’t tell if this is representative of Cinderella’s true hair color or if we have another black-blue white-gold dress thing going on. All of the “modern” pictures of Cinderella[vii] go with a blonde-haired version. And according to this Cinderella wiki[viii], Cinderella’s hair color is “Strawberry-Blonde.” I will therefore continue with the assumption that Cinderella had blonde or darkish-blonde hair, though I admit that if this assumption fails then my strategy would need to be slightly modified.)

How many women in the kingdom have blonde hair? This would normally be impossible to know, except for the fact that the prince actually invited every woman in the kingdom to his ball. This is perfect for us, because we have the entire population of interest trapped inside the palace for random sampling. Also, conveniently, Cinderella was the last one to arrive at the ball, so we know that there is no selection bias (if you were worried that hair color was correlated with promptness or something). So, let us take a random sample of women from the royal ball and see how many are blonde. Here is a snapshot of the ball from the new film:

[Image: snapshot of the royal ball from the new film]

Notice anything? Including Cinderella, I count only 4 (possibly) blonde women out of the 19 pictured. Since this is a random sample of all of the women in our population of interest, we expect the sample proportion to match the population proportion on average. So, in this kingdom only about 4 out of 19 women, or 21% of women, should have blonde hair. (If you use pictures of the ball from the old animated film, even fewer of the women are blonde.)
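
(To be fair, 19 women is a small sample, so the 21% figure is a noisy estimate. A quick back-of-the-envelope check of the binomial standard error, assuming the screenshot really is a random draw from the ballroom:)

```python
import math

blonde, sampled = 4, 19
p_hat = blonde / sampled                          # sample proportion of blonde women
se = math.sqrt(p_hat * (1 - p_hat) / sampled)     # binomial standard error of the proportion
print(round(p_hat, 2), "+/-", round(se, 2))       # about 0.21 +/- 0.09
```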

Simply by testing only blonde women, the prince would have spent only 21% of the original total cost if the rest of his plan had been exactly the same. In other words, I would have reduced his costs by about four-fifths with a super obvious modification to his strategy. To save even more on costs, he could have put out a royal decree saying something like: “I will compensate travel costs for any blonde woman who comes to my palace if she fits into this glass slipper.” Then, only Cinderella and other blonde women who think they might fit into the glass slipper would have any incentive to go to the palace to try on the slipper. If Cinderella is the only one who fits the slipper, then the prince will just have to pay travel costs for whoever brings Cinderella to the palace. If other women fit the slipper, then the prince will have to use something other than shoe size to identify Cinderella and may have to pay the additional costs of transporting false positives. In either case, though, I suspect it would be less expensive than having his men travel the countryside testing every blonde woman one at a time. (I know that this wouldn’t have actually worked in the movie, since Cinderella is locked in the attic or whatever, but the prince’s original plan wouldn’t have worked either for the same reason. My method still has much lower expected costs.)

Also, by using my plan, the prince would have found Cinderella in about one-fifth of the time his original plan would have taken, assuming no fixed time costs. If he wanted to find Cinderella as quickly as possible, he could have sent out a single man to each household with instructions to bring any blonde women in the household to the prince’s castle for a shoe fitting. Rather than taking the glass slipper to each house, just bring all the possible Cinderellas to the palace in one fell swoop and test the slipper on them. I’m not sure if this would be cost effective, but compared to his original plan he could probably use all the money I saved him to justify any additional costs of getting his Cinderella sooner. And isn’t that more romantic too? What princess wouldn’t love a prince who, in addition to doing everything he could to find her as quickly as possible, also did it while minimizing expected costs?! I can see the headline now: “Kingdom’s Tiniest-Footed Woman Marries Cost-Minimizing Prince In Record Time!”
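
(Here’s a rough comparison of the expected number of fittings under the two plans, using the 21% blonde share from the ballroom sample. The per-fitting cost is a made-up placeholder – the point is the ratio, not the dollar amount.)

```python
def expected_fittings(kingdom_population, blonde_share=1.0):
    women = kingdom_population / 2        # assume half the kingdom is women
    eligible = women * blonde_share       # only women who pass the hair-color screen get tested
    return eligible / 2                   # expect to find Cinderella halfway through the list

COST_PER_FITTING = 2.0                    # hypothetical cost (travel, wages) per door-to-door fitting

for population in (300, 50_000):
    original = expected_fittings(population) * COST_PER_FITTING
    blonde_only = expected_fittings(population, blonde_share=4 / 19) * COST_PER_FITTING
    print(population, round(original), round(blonde_only))
# the blonde-only plan costs about 21% of the original, whatever the kingdom's size
```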

________________________________________________________________________________

[i] http://www.dh.aist.go.jp/en/research/centered/foot/

[ii] Footwear Impression Evidence – Page 191: http://books.google.com/books?id=xLVUjzkK3rgC&pg=PA191&lpg=PA191&dq=191&prev=http://print.google.com/print%3Fie%3DUTF-8%26q%3DThe%2BProfessional%2B%2522Shoe%2BFitting%2522&sig=asPShS7HyeGpZ-jWulrJSJIZh8E&hl=en%23v=onepage&q=191&f=false#v=snippet&q=191&f=false

[iii] http://www222.pair.com/sjohn/blueroom/demog.htm

[iv] http://www.shoesizingcharts.com/

[v] http://en.wikipedia.org/wiki/Austin,_Texas

[vi] http://en.wikipedia.org/wiki/New_York_City

[vii] http://www.disneystore.com/animation-movies-entertainment-cinderella-blu-ray-and-dvd-digitial-copy-combo-pack-diamond-edition/mp/1320018/1000316/

[viii] http://disney.wikia.com/wiki/Cinderella_(character)

________________________________________________________________________________

Data

PDF Version

Thought: The “YOLO” Effect – Inappropriately Discounting The Future

3/12/15

By Kevin DeLuca

ThoughtBurner

Opportunity Cost of Reading this ThoughtBurner post: $1.46

You can hear the call coming from inside the gated frat house parties across college campuses every Friday night. The Lonely Island called it “the battle cry of a generation.”[i] Even in everyday conversation, hesitation, prudence, and caution are all greeted with a new, four-letter challenge: YOLO.

The expression “you only live once” (YOLO) is often meant to serve as a justification for risky or unwise behavior. The idea behind the message, however, is stronger than that. Because we only have one life to live – you only live once – we should not give up current opportunities. We should do what we want, when we want, because we might not have the same options later as we do now (since we only live once). Similar phrases have captured this idea throughout history: “carpe diem,” “live life to the fullest,” and “live like there’s no tomorrow.”

Before I speak of the YOLO effect, I’ll introduce the common idea (in economics) of “discounting the future” in choice models. The basic concept behind discounting is that people value the present more than the future. Easy example: would you rather have $10 right now or $10 in 5 years? Most people say $10 right now, since $10 right now is more valuable than $10 in 5 years; you might need that $10 now and it’s not clear that the $10 will be as useful in 5 years. (see here[ii] for more). In order to account for this phenomenon, economists often place some discount factor on future payoffs in optimization problems.

In a very broad and abstract sense then, our total life happiness can be modeled as:

Total Happiness = H1 + β·H2 + β^2·H3 + … + β^(T–1)·HT

where each subscript (1, 2, …, T) indicates the time period of that happiness term (until we die at period T), and where β is a discount factor between 0 and 1.

A β of 1 would mean that we value each period’s happiness exactly the same ($10 in 5 years is just as good as $10 right now), whereas a β of 0 would indicate that we don’t care at all about future happiness ($10 in 5 years doesn’t make you happy in the slightest).

Everyone is trying to maximize their total happiness – we are people and everyone wants to be happy. How do we do it? It looks like a straightforward problem, but the tricky part is that happiness in one period often affects happiness in future periods. Binge watching Netflix might make you extremely happy in period 1, but in period 2 (the next day, right before your exam) your happiness could be very low (as you cram for your test). Maybe you can’t pull it off either, and you end up failing your test during period 3, and maybe eventually even failing out of school months later (period 100 or something). Your total happiness would have been much higher if you had suffered in periods 1 and 2 – studying instead of watching Sons of Anarchy – and then received much higher payoffs in future periods.
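
(As a toy illustration of that tradeoff, here’s a minimal sketch with completely made-up per-period happiness numbers for “binge now” versus “study now.” The point is just how the ranking flips with β.)

```python
def lifetime_happiness(payoffs, beta):
    # H1 + beta*H2 + beta^2*H3 + ... for a list of per-period happiness payoffs
    return sum(h * beta ** t for t, h in enumerate(payoffs))

binge = [10, 2, 1, 1, 1]    # great tonight, rough exam tomorrow, worse payoffs later (made up)
study = [3, 6, 8, 8, 8]     # dull tonight, better payoffs later (made up)

for beta in (1.0, 0.9, 0.0):
    print(beta, lifetime_happiness(binge, beta), lifetime_happiness(study, beta))
# with beta = 0 (pure YOLO) binging "wins"; with any reasonable beta, studying wins
```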

How does this relate to only living once? In a sense, the phrase YOLO is an attempt to alter the value of discounting. It suggests that we treat the present value of opportunities and payoffs as much greater than any future payoffs. Another way to say this is that it places an inappropriate (as in non-optimal) discount factor on future payoffs (and consequences). Taken to the extreme, it suggests using a β of 0 (in other words, discounting the future completely). This is “The YOLO Effect.” Rather than basing your decisions on some objective function that takes into account present payoffs and future payoffs, The YOLO Effect changes your optimization problem to the form:

Total Happiness = H1 + β·H2 + β^2·H3 + … + β^(T–1)·HT,  with β = 0

Since β = 0, this simply becomes:

Total Happiness = H1

This makes the maximization problem very easy; in order to maximize happiness, simply maximize happiness in the current period, period 1 (H1). Though it may be true that in certain periods the solution to your lifetime happiness problem is the same as the solution to the current-period maximization problem, it seems unlikely that this would happen very often. The more likely scenario is that you end up getting high payoffs in the current period (high H1) and lower payoffs in the future (low H2 or even low H100). While it is true that, if you are lucky, the YOLO mentality can lead to extremely high payoffs (H1 could end up being really, really high), strictly speaking the resulting cumulative amount of happiness from YOLO-based decisions will not, in expectation, be higher than the cumulative lifetime happiness derived from decisions based on appropriate future discounting (and β = 0 is not an appropriate discount factor). In other words, the YOLO effect is causing people all over the world to be cumulatively less happy over their lifetimes.

In order to counteract the YOLO effect, I propose a new catch phrase: “Live like you’ll live an average lifespan conditional on your particular demographic characteristics!” Not as catchy, I admit, and unfortunately the acronym is a bit complicated (“LLYLAALCOYPDC” doesn’t exactly roll off the tongue). But, if you keep this phrase in mind whenever you make choices, and if you practice saying it enough to yell it quickly at parties and then proceed to do something responsible, you can ensure that you and your friends are maximizing your expected cumulative lifetime happiness – even when the YOLO effect is at its strongest.

(A bit of an aside – The Lonely Island song referenced at the beginning of the article actually takes YOLO to mean something like “Be extra careful with your life, because YOLO.” They meant this sarcastically of course – that is not how the term is used colloquially. Their interpretation, however, implies that you should instead maximize your expected life time. Notice that this is not the same as maximizing your happiness and is, in fact, also a suboptimal strategy (even though you may live longer). A longer life is not always a better life. It does not necessarily lead to inappropriate discounting of future happiness as I’ve described above, but it does cause suboptimal decision making, if your intention is to be as cumulatively happy as possible.)

But maybe a phrase meant to counteract the YOLO effect doesn’t actually help anyone to make decisions (other than to, potentially, stop them from doing something that is obviously suboptimal). The information you need, you might be thinking, is really 1) how long you are expected to live and 2) how much you should discount the future – the ‘appropriate’ discount rate.

Let’s start with the first bit. Using the Social Security website life expectancy calculator[iii], you can enter some information and it will predict how long you’ll live (it predicted that I would live an additional 59.2 years, not bad I guess). The U.S. Census Bureau has some interesting tables[iv] as well to help you figure out how much longer you can expect to live. This[v] website also has a bunch of different life expectancy stats, though I’m not sure how reputable it is based on its underground-esque appearance. Of course there is a lot of uncertainty, so I won’t claim that the ‘average age’ or ‘life expectancy’ is at all close to your expected lifetime – especially if you say “YOLO” a lot. In terms of estimating your expected lifespan, it might be best to ‘start’ at the average life expectancy, which is 78.8 years (for the United States[vi]), and then adjust from there based on your healthy/unhealthy habits, your family history, medical conditions, your ability to make wise decisions and so on.

Once you have a good idea of how long you have left to live, and have come to terms with your own mortality, you now need to figure out how much you ought to discount the future. This, as you may imagine, is not an easy question to answer. There have been studies that empirically try to estimate how people discount the future, and much of the data fits a hyperbolic discounting model (see this[vii] paper, sections 4.1 and 5.1) where, generally:

discount factor = 1/(1 + k·t)

With t being time and k being some constant indicating the strength of the discounting. The implication of hyperbolic discounting is that your discounting factor becomes smaller as you consider situations that are further away in the future. For example, suppose you are comparing the current period (say t = 0) with the next (t = 1) (also assume that k = 1 for simplicity). You would discount today’s happiness by 1/(1 + 0) = 1 (so, not at all) and the next period’s happiness by 1/(1 + 1) = 0.5, or half! So if you receive $100 today you value it at $100, and if you receive $100 tomorrow you value it at only $50 (in current value). In other words, $100 tomorrow would only make you $50-worth happier today. But now let’s compare t = 100 to t = 101. This gives 1/(1 + 100) ≈ 0.0099 for day 100 and 1/(1 + 101) ≈ 0.0098 for day 101. So if you receive $100 on day 100 you value it at about $0.99 (in current value), and if you receive $100 on day 101 you value it at about $0.98. The difference between the two times is the same as in the first example – just 1 day – but the difference in your valuation is only $0.01, rather than $50.
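
(Here’s a minimal sketch that reproduces those numbers with the hyperbolic discount function above.)

```python
def hyperbolic_discount(t, k=1.0):
    # present value today of one dollar (or util) received t periods from now
    return 1 / (1 + k * t)

for t in (0, 1, 100, 101):
    print(t, round(100 * hyperbolic_discount(t), 2))
# prints 100.0, 50.0, 0.99, 0.98 -- the present value of $100 received at time t
```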

Now, we don’t know that these empirical studies necessarily reveal anything about the “best” way to discount the future – the people studied may not have been acting optimally. At this point, however, there is no consensus among economists as to which model of future discounting is optimal, or even which is most accurate. We don’t know whether these people were maximizing their lifetime happiness or whether they were following some other behavioral strategy, but if they were acting optimally then that’s good news for hyperbolic discounting models. Hyperbolic discounting seems to work well to explain the observed data in general, and may be a good, easy approximation.

I’m usually against using rules of thumb or heuristics to solve these sorts of problems, but I think that my prescribed anti-YOLO saying, “Live like you’ll live an average lifespan conditional on your particular demographic characteristics!”, actually works pretty well. It essentially prompts you to consider the cumulative lifetime happiness maximization problem rather than the current-period maximization problem (which is usually suboptimal). When you say this in your head or out loud, you should be considering:

-The potential for tradeoff in payoffs between time periods

-Your expected lifespan

-How payoffs may affect total lifespan

-How your preferences may change in the future (anticipate your future needs and try to predict how happy certain things will make you in the future as you grow older)

-How to appropriately discount the future, and

-The amount of uncertainty and risk surrounding each of your estimations (how wrong you could be and the consequences of being wrong)

Unfortunately, many of these things are person and preference dependent, so it is hard to give answers or advice that is any more specific. Ultimately, I am leaving you to figure out the solution to your own lifetime happiness maximization problem. Your success depends on how well you can accurately and optimally incorporate all of the things I mentioned above into your optimization problem formulation, in addition to how well you can resist the allure of the instant gratification offered by the YOLO effect. For this problem, you are in the best position to estimate the maximizing solution, since you have the most information about yourself and how it might affect the form of the problem.

Essentially what I am saying – and I think that this creed is one that many economists would adopt in problems that rely so heavily on the individual – is one thing: “You do you.”

________________________________________________________________________________

[i] https://www.youtube.com/watch?v=z5Otla5157c

[ii] See http://en.wikipedia.org/wiki/Temporal_discounting for more.

[iii] http://www.socialsecurity.gov/OACT/population/longevity.html

[iv] http://www.census.gov/compendia/statab/cats/births_deaths_marriages_divorces/life_expectancy.html

[v] http://www.worldlifeexpectancy.com/usa/life-expectancy

[vi] http://www.cdc.gov/nchs/fastats/life-expectancy.htm

[vii] http://www.cmu.edu/dietrich/sds/docs/loewenstein/TimeDiscounting.pdf

________________________________________________________________________________

PDF Version

Thought: The Value Of Reading A Blog Post

2/26/15

By Kevin DeLuca

ThoughtBurner

Disclaimer: the estimated average opportunity cost of reading this ThoughtBurner post is $1.22

I’ve been wondering whether blogs are valuable or not, so I decided to construct a way to approximate the opportunity cost of reading a single post. By reading any of these posts you are, of course, foregoing the opportunity to do other things – more productive things. If you’re reading this post at work, for example, you could instead be working and making money; if you’re reading this post in your college class, you could instead be paying attention to the lecturer whose salary you’ve already paid. You could also be reading this during your leisure time, in which case you are not losing money (by not working), but instead you are choosing to spend your leisure time (which has some value to you) reading this post (still a cost).

Since I have greater-than-zero readers, I can assume that the blog isn’t totally worthless. In fact, everyone who reads a post on this blog is at least willing to pay the opportunity cost of reading it. This means that your expected value of reading a post is at least as big as the opportunity cost, otherwise you wouldn’t read it in the first place. So, the opportunity cost of reading a post can be thought of as a conservative estimate of the value of any blog post.

But how much is the opportunity cost? Well, the answer depends on who you are and when you read the post. The opportunity cost of reading this while you should be working is much higher than if you read this during your free time. It also obviously depends on how much time it takes you to read a post; this is just a function of how many words there are and how fast you can read. This post has about 1,659 words, and on average, people read at somewhere between 250 and 300 words per minute (wpm)[i]. Next, all we need is a guess at how valuable your time is under certain scenarios, given that you could be doing something else.
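
(The whole calculation boils down to one line. Here’s a minimal sketch; the value-per-minute figure is the part you have to estimate for yourself in each scenario below.)

```python
def reading_opportunity_cost(words, value_per_minute, wpm=300):
    # minutes spent reading, times what one of your minutes is worth
    return (words / wpm) * value_per_minute

# e.g., at $0.48 per minute, a 1,659-word post costs about $2.65 to read
# (using the unrounded per-minute wage gives the ~$2.67 figure below)
print(round(reading_opportunity_cost(1659, 0.48), 2))
```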

For the sake of simplicity, I will first consider three different cases: an average U.S. worker reading this specific blog post during their work hours, an average worker reading this post in their leisure time, and a resident undergraduate student at The University of Texas reading this during class (many of my readers are my friends, and many of my friends are or recently were UT undergrads). I’ll also base my estimates solely on the opportunity cost of reading the article, though a more accurate estimate would include the time costs of analyzing graphs, looking at pictures, and exploring the other articles, which I’m sure you’re all doing.

  • If you’re an average worker in the U.S. reading this while on the job:

Assume you make the median income in the United States, which is $51,939[ii]. Also, assume that you work an average number of work hours each year, which is 1,788[iii]. That means that you earn about $29.05 per hour worked, or about $0.48 per minute. Every time you read a blog post, therefore, your opportunity cost is $0.48 for each minute you read. Let’s assume that you’re a fast reader (300 words per minute) for more conservative estimates. Since this blog post is 1,659 words long, the opportunity cost of reading this article to an average U.S. worker is about $2.67. (Notice, however, that you still get paid for the hour worked, even though you stopped working to read this post. In a way, you have passed the opportunity cost onto your employer, who was technically paying you to read a blog post for a few minutes. Don’t worry, I won’t tell.)

  • If you’re an average worker in the U.S. reading this outside of work hours (leisure time):

The value of leisure time is a tricky thing to estimate – it isn’t directly measurable and partly depends on your income. After reviewing this research paper[iv] by Larson and Shaikh (2004), it looks like estimates put it around $11.27 per hour (about $0.19 per minute) on average (see Table 4). This is probably a low estimate for working people and a high estimate for college students, since the former’s time is more valuable than the latter’s (no offense). Using this estimate and average reading speeds (300 wpm), the opportunity cost of reading this blog post (during leisure time) to an average U.S. worker is about $1.04.

  • If you’re an undergraduate resident college student reading this during class:

Probably not everyone who reads this is a working professional, and since I’m currently attending UT and have a lot of friends here I’m guessing that they’re mostly the ones reading this. College students’ time needs a different value estimate, since they aren’t foregoing work to read each article. If they are reading this during one of their classes, however, they are not taking advantage of something that they’ve already paid for. In this sense, they are losing money, and this is their opportunity cost.

I’ll assume that you pay the in-state, full-time student tuition. The lowest tuition charge is in the college of liberal arts at $5,047, so we’ll use that to form a lower bound of the estimate. Let’s assume that on average students take 15 hours, which means they’re in class for 15 hours per week. Each semester is 15 weeks, which comes to 225 hours per semester. The resulting cost per hour of attending a class at UT is $22.43, or about $0.37 per minute. Again, we’ll use 300 words per minute, and this blog post is about 1,659 words long. In this case, the opportunity cost of reading this blog in class to a UT student is about $2.07 (and it will be higher for non-residents, non-liberal-arts majors, and people who take fewer than 15 hours).
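
(For anyone who wants to check the arithmetic, here’s a self-contained sketch of all three estimates.)

```python
WORDS, WPM = 1659, 300
minutes = WORDS / WPM                                # about 5.5 minutes of reading

scenarios = {
    "worker, on the clock": (51_939 / 1_788) / 60,   # median income / average annual hours, per minute
    "worker, leisure time": 11.27 / 60,              # Larson & Shaikh leisure value, per minute
    "UT student, in class": (5_047 / 225) / 60,      # liberal arts tuition / class hours, per minute
}

for name, value_per_minute in scenarios.items():
    print(f"{name}: ${minutes * value_per_minute:.2f}")
# prints about $2.68, $1.04, and $2.07 (the $2.67 above differs by a rounding penny)
```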

Not too bad! Looks like this ThoughtBurner blog post is worth somewhere between $1.04 and $2.67.

 

A Deeper Look:

To explore further, I created a few charts that provide opportunity cost estimates under a larger range of conditions. For example, as I mentioned above, the economics literature suggests that people’s leisure time is valued differently depending on their salary or hourly wage. Also, I take into account the fact that different majors at UT pay different amounts for tuition, and out-of-state tuition is much higher than in-state tuition. Lastly, I provide estimates of the opportunity costs of reading other things, like a random article from The New York Times, a random Politico article, and the cost of reading certain books. The results follow.

Workers, In Detail:

Below is a chart of the opportunity cost of work and leisure time for each quantile of income earners in the United States[v] in both hours and minutes, and I’ve specifically calculated the cost of reading this post for each group. The calculations are made assuming a 300 wpm reading speed and working an average number of hours each year (1,788). Leisure costs are from Table 4 in the paper referenced above.

[Table: opportunity cost of work and leisure time by income quantile]

UT Students, In Detail:

Below is a chart of how the opportunity cost of reading this blog differs across majors and in-state vs. out-of-state students. The calculations are based on a 300 wpm reading speed and a student who takes 15 hours (15 hours per week in class)[vi].

[Table: opportunity cost of reading this post across UT majors]

[Table: opportunity cost of reading this post across UT majors, continued]

Since tuition costs are pretty similar within each residency status[vii], there isn’t much of a difference across majors, but notice that the opportunity cost of reading this blog in class for out-of-state students is almost four times higher (because their tuition is almost four times higher). Also, the hourly opportunity cost for students is of interest because it could be thought of as the opportunity cost of skipping a one-hour class. On average, this amounts to $23.56 an hour for residents and $80.68 for non-residents. Moral of the story: DON’T SKIP CLASS! If you’re a UT student, you’re paying quite a lot per hour to be educated.

Popular Readings:

For the popular readings, I calculated the average opportunity cost for workers both during work hours and during leisure time for each item in the chart.

[Table: opportunity cost of popular readings during work and leisure time]

The NY Times article and Politico article were just the most recent links I found among my Facebook newsfeed. The Politico article was only 378 words, and the NY Times article was an opinion piece and was 1,180 words total. The average book, according to this[viii] article, has 64,531 words. And the entire Harry Potter series has 1,084,170 words according to this[ix] website. For those of you who have read the entire series during your free time, just know that it cost you nearly $800 worth of your leisure time.

It seems unlikely that you would read an entire book, much less the entire Harry Potter series, while at work, but, let’s be real, people read Politico and the NY Times all the time during work hours. Now we all know how much it costs our employers when we ‘take a break’.

Of course, you don’t actually know the exact opportunity cost of reading a blog until after you’ve spent the time to read the post. In the name of efficiency, I think from now on I will put the estimated opportunity cost of reading each post right under the title, that way everyone has better information before making their decision. Now, the problem I’m stuck with is how to get people to just direct deposit the money rather than reading ThoughtBurner posts.

________________________________________________________________________________

[i] http://en.wikipedia.org/wiki/Words_per_minute

[ii] 2013 data. http://www.census.gov/content/dam/Census/library/publications/2014/demo/p60-249.pdf (Table 1)

[iii] http://stats.oecd.org/index.aspx?DataSetCode=ANHRS

[iv] http://home.uchicago.edu/~sabina/Economic%20Inquiry.pdf

[v] http://www.census.gov/hhes/www/income/data/historical/household/index.html

[vi] In order to calculate weighted averages, student enrollment totals by department for in- vs. out-of state was calculated by assuming each department has the same proportion of students as the university overall. (79.7% in-state and 20.3% out-of-state).

Student Ratio Source: https://sp.austin.utexas.edu/sites/ut/rpt/Documents/IMA_S_StudentChar_2013_Fall.pdf

Enrollment Source: https://sp.austin.utexas.edu/sites/ut/rpt/Documents/IMA_S_EnrlColDeptLvl_2013_Fall.pdf

[vii] http://www.utexas.edu/business/accounting/pubs/tf_undergrad_longhorn_fall14.pdf

[viii] http://www.huffingtonpost.com/2012/03/09/book-length_n_1334636.html

[ix] http://www.betterstorytelling.net/thebasics/storylength.html

__________________________________________________________________________

PDF Version

Data

Thought: Daily Optimization

2/19/2015

By Kevin DeLuca

ThoughtBurner

We are constantly faced with optimization problems throughout the day: What will be the fastest way to get to work today? Do I go through the drive-through or park and go inside? Should I commit to reading this ThoughtBurner post or do something else in my free time? We want to be as happy as possible, which means that for each of these choices we try to choose the best or optimal decision. Luckily, we’re usually pretty good at figuring out the best course of action and live pretty efficient lives.

Suppose, however, that at every little decision you made during the day, you chose a less-than-optimal option. You would have been happier doing something else, but for some reason you didn’t take the time to think hard enough about your decision. You made a mistake – you took the busier route to work today. A single optimization error may not make much of a difference, maybe only a few wasted minutes of your life. But imagine that out of the estimated thousands of decisions we make every day, 10 or 100 or even 1,000 mistakes are made. The effect starts to add up until you realize you’ve wasted a ton of your time or gotten yourself stuck in a bad mood.

I believe there is a huge loss in efficiency and happiness due to our natural inability to optimize every single little decision-making problem we face during a typical day. While a wasted minute here and an inconvenience there may not seem like a big deal, the effects start to add up over time and across people.

Why do we make these mistakes? The most obvious barrier is that, often, the potential benefit of ‘solving the optimization problem’ for these little, everyday decisions is less than the mental cost of the calculation – that’s why you skip it in the first place. We are efficient, and instead of solving each problem individually we use heuristics to answer them, usually by making assumptions or acting out of habit. But our assumptions aren’t always right, and our habitual actions aren’t always optimal. In fact, a lot of work in applied economics involves testing whether or not assumptions are true. In many of our daily optimization problems, the tools of economics could help us make better choices. We could learn, for example, under what conditions it would be better to take the drive-through rather than park and go inside.

Professional economists don’t worry about helping people solve the little decisions – usually they have their sights set on bigger fish, and rightly so. But the questions that are interesting to non-economists are about the little problems they face all the time, and I think that by framing and solving these problems with economics we could make a lot of people’s lives more efficient and happy. Using economic methods, we can test the validity of certain assumptions and, from there, dispel inefficient or unsupported behavioral prescriptions.

The cost of figuring out many of these assumptions – the cost of performing research, collecting and analyzing data, and disseminating the information – is too high for any one person to bear. The extra minute or two saved by a more efficient decision isn’t very useful if it takes days or months of your time to figure it out. But if someone else provided you with the information, then you could costlessly incorporate it into your decision making and better optimize. And if enough people used the information, we might see a net overall benefit – the months of research might save, in sum, years of time, minute by minute, person by person.

Enter ThoughtBurner. Part of what this blog and website hopes to accomplish, in addition to providing unique economic commentary on a variety of issues, is to incur the cost of research and analysis for the ‘little things’ in life, the daily optimization problems. Drive-through or park and go inside? Let’s do observational research and look at the evidence. How drunk should you really be before you hit your beer pong skill level peak? Time to record stats on beer pong players. Should I like my friend’s Facebook post or not? These are the sorts of questions that people ask themselves all the time, but that have never been inspected with the rigor of formal economic analysis, which is exactly what ThoughtBurner research is meant to provide.

My hope is that people will learn something useful from ThoughtBurner, whether it’s the answer to an everyday optimization problem they’ve faced or a new way for them to frame an old issue. I figure that, while I don’t know quite enough about economics to answer the ‘big questions’ (yet), I can probably handle the smaller ones, and I hope to do it in a way that is practical, entertaining, and useful for everyone.

__________________________________________________________________________

PDF Version