Thought: Speeding Quotas In Austin, Texas


By Kevin DeLuca


Opportunity Cost of Reading this ThoughtBurner post: $1.61 – about 7.3 minutes

This is the last post I’ll make about speeding, I swear (for now). You can read about the driver’s problem or about the government’s problem in my earlier posts about speeding. I wanted to share one more interesting discovery I made while investigating common speeding myths – hopefully it will help you better optimize your speeding behavior. I investigate the claim that City of Austin police use speeding ticket quotas, and how to optimally adjust your behavior to account for the effects of these possible quotas.


I have often heard from friends and family that you shouldn’t speed at the end of the month because police officers will be trying to finish their month’s quota of speeding tickets. Also, apparently there were a few police forces around the country that got caught using speeding ticket quotas in the past, including cities in Texas[i]. Even though speeding ticket quotas are illegal in Texas[ii], it may be the case that police departments have implicit quotas or “performance standards” that effectively create the same behavior that we would expect from explicit quotas.

The assertions, then, are that 1) police are required, either explicitly or implicitly, to give out a certain number of speeding tickets each month and 2) as each month comes to an end, police give out more tickets to make sure that they meet their quota.

Rather than continuing to speculate, I turned to the data. Using the City of Austin’s open data website[iii], I gathered the details of every traffic violation given out by city police in fiscal year 2013-2014 (data available at the end of this post). After removing all non-speeding traffic violations, there were 67,606 tickets given out for a number of different types of speeding violations:


This amounts to roughly one ticket per 4,700 Austinites each day, or 185 speeding tickets every day on average. Included in the data set is the date of each violation, which is exactly what we need to test whether police actually try to catch more speeders at the end of each month, as we might suspect.
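As a quick arithmetic check, both summary figures follow directly from the ticket total (the 885,400 population figure comes from later in the post):

```python
# Back-of-envelope check of the summary statistics above.
total_tickets = 67606
days_in_year = 365
population = 885400  # Austin's population, cited later in the post

per_day = total_tickets / days_in_year        # average tickets per day
residents_per_ticket = population / per_day   # residents per daily ticket

print(round(per_day), round(residents_per_ticket))  # 185 4780
```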

First, I simply plotted the average number of speeding tickets for each day of the month and looked to see if it increased as the end of each month drew near. If more tickets were given out on later days of the month, it would provide suggestive evidence of quota behavior by Austin police. Here are the results:


It looks to me like the number of tickets given out actually decreased as the month came to an end. Visually, it doesn’t appear that there are quotas – on most days, police gave out between 150 and 200 tickets, with no clear increase as the month came closer to an end. It was a bit suspicious to me, however, that there seemed to be a higher-than-average number of tickets given out between the 14th and 22nd of the month, with a sudden and somewhat lasting decrease starting on the 23rd.

I was also worried that there might be day of the week effects – maybe the police department had certain days of the week where all of the cops would spend a large part of their time trying to catch speeders in the city. To visually test for this, I plotted the total number of tickets given out on each day of the week:


While there appeared to be no single day on which police gave out more tickets (except maybe Monday?), it looks like police gave out far fewer tickets on Sundays compared to other days. Because of this pattern, I decided to add in controls for day of the week and day of the month effects just to be safe.

First, I calculated the average number of tickets given out on each day-of-week-day-of-month combination. For example, there were three Tuesdays that were also the first day of a month in fiscal year 2013-2014, and there were a total of 373 tickets given out on those three days, which made for an average of 124.33 tickets on Tuesday-the-1sts. Using these calculated averages, I ran a regression of average number of tickets on day of the month and day of the week controls:
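The first step – averaging within each day-of-week-by-day-of-month cell – can be sketched in a few lines of Python. The sample records below are made up to match the Tuesday-the-1st example; the regression itself was run in Stata:

```python
from collections import defaultdict

def cell_averages(records):
    """Average ticket counts for each (weekday, day-of-month) combination.

    records: iterable of (weekday, day_of_month, tickets) tuples.
    Returns {(weekday, day_of_month): average}.
    """
    totals = defaultdict(lambda: [0, 0])  # cell -> [ticket sum, day count]
    for weekday, dom, tickets in records:
        cell = totals[(weekday, dom)]
        cell[0] += tickets
        cell[1] += 1
    return {key: s / n for key, (s, n) in totals.items()}

# Three hypothetical Tuesday-the-1sts totaling 373 tickets, as in the example:
sample = [("Tue", 1, 120), ("Tue", 1, 130), ("Tue", 1, 123)]
print(cell_averages(sample)[("Tue", 1)])  # ≈ 124.33
```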


The regression results confirm what we suspected from the visual inspection of the data. As the day of the month increases (as we get closer to the end of each month), there is no significant effect on the average number of tickets police gave out (negative, insignificant coefficient on DayofMonth). On Sundays, cops gave out about 100 fewer tickets than on Saturdays (Saturday is the comparison day since it was dropped), and the effect is highly significant (coefficient on Sun). Also, cops gave out about 40 more tickets on Mondays, and this effect is marginally significant as well (coefficient on Mon)[iv]. Because there is no information about the number of people who drive on each day of the week, we can’t tell if these day-of-the-week effects arise because cops act differently or because people drive differently over the course of a week. For example, if half as many people drive on Sundays, that could explain why we see about half as many tickets.

While the day of the month variable was insignificant in the regression above, it doesn’t necessarily rule out the possibility of speeding ticket quotas. It just means that there probably aren’t ticket quotas due at the end of each month. If Austin City police had to meet their quotas by the middle of every month – maybe by the 22nd, for example – then the regression above wouldn’t be able to detect the expected pattern in the average number of speeding tickets given out.

To test for quotas that are “due” on different days of the month, I created artificial “quota months” where the suspected due date of the quota was made to be the last day of the month. For example, there was visual suggestive evidence that the 22nd might be the last day of a quota, so I changed the 22nd to be the last day of the month (31st), the 23rd to be the first day of the month (1st), and so on for each day of the month. Using the new, modified dates as the day of the month variable, I reran the regression with controls to see if the pattern of the average number of tickets given out conformed to expected quota behavior for any of the hypothetical quota months (i.e. if there was an increase in tickets towards the end of the new quota months). I actually did this for each possible monthly quota due date, but rather than showing you all 31 regression results I put them in a nifty little table that summarizes the important parts:
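The relabeling step can be sketched as a small helper. Treating every month as 31 days long is my simplification for illustration; the actual analysis may have handled shorter months differently:

```python
def quota_day(day_of_month, quota_end):
    """Relabel a calendar day so that `quota_end` becomes day 31 of an
    artificial 'quota month' (all months simplified to 31 days).

    Days after the quota end wrap around to the start of the next quota
    month: with quota_end=22, the 23rd becomes day 1 and the 22nd day 31.
    """
    if day_of_month > quota_end:
        return day_of_month - quota_end
    return day_of_month + (31 - quota_end)

print(quota_day(23, 22))  # 1  (first day of the new quota month)
print(quota_day(22, 22))  # 31 (last day of the new quota month)
```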

The suspected end date column indicates the day of the month by which I assume police officers had to complete their speeding ticket quota. For example, a suspected end date of 22 means that I assume police officers had to reach their quota by the end of the 22nd of each month. The effect size column indicates how big the effect is if I am correct about the quota end date. So, if I am right that the quota was due on the 22nd, then police gave out 1.434 more tickets on average for each day closer to the 22nd (starting from the 23rd of the month before). The highlighted rows are effects that are statistically significant at the 5% level.

The 21st, 22nd, and 23rd are all prime suspects for a quota end date. If the City of Austin police force is using a quota system, police most likely need to meet their quota by one of these dates. The 13th is essentially the least likely quota end date – if this were the actual quota end date, then police would actually be giving out about 1 fewer ticket per day (a statistically significant decrease) as their quota end date drew closer, which doesn’t really make sense for quota behavior.

I want to be careful with interpreting the results – this does not prove that police officers are using quotas, it just suggests that if they are, then their quotas are probably due between the 21st and 23rd of each month. The assumptions are: police officers do in fact have quotas, and police “procrastinate” in the sense that they wait until the quota end date is close to finish collecting their quota. If these are true, then we would see a higher number of tickets given out as the quota end date approaches. A higher, significant effect size means that police more strongly follow this expected behavior, given that the suspected end date is in fact when a monthly quota is due. The effect size, then, is sort of like the “chance that the quota ends on that date”. Keeping this interpretation in mind, it becomes clear that if a quota exists then it probably needs to be met sometime between the 21st and 23rd of each month, and almost definitely is not due around the 13th. This is apparent visually as well:


Another word of caution: As I mentioned before, I could not control for the number of drivers on each day. While I can’t think of a reason why more people would drive (or be more likely to speed) on these days of the month (and therefore more tickets would be given out), we can’t rule out the possibility that driver behavior is causing the trend rather than police quota behavior.

Last, I just want to point out that even if we were right about the existence of speeding quotas, the police behavior, and the day that quotas were due (the 22nd, say), the effects are rather small – only about 1.5 extra tickets per day across all police officers and drivers. There are 2,300 police officers in Austin[v] and the city has a population of 885,400[vi], so 1.5 more tickets a day is a practically small effect.

Caveats aside, according to the best evidence I have, it appears that Austin police do not give out more tickets at the end of each month. Rather, they give out more tickets during the 3rd week of each month, between the 15th and 22nd. If speeding ticket quotas do exist, explicitly or implicitly, then they are most likely to be due around the 22nd of each month. For all of you drivers out there who are worried about the increased probability that you will get a speeding ticket due to ticket quotas, you should all speed a little less during the 3rd week of each month, and maybe a little more during the last week. Also, there seems to be some evidence that you are less likely to get a ticket on Sundays, and more likely to get a ticket on Mondays.

Which means that Sunday-the-23rds are probably the best days to speed, from the driver’s point of view. The most recent Sunday-the-23rd (that I have data for) was in March of 2014, and only 32 speeding tickets were given out. Compare that to the Austin City average of 185 tickets per day. August 23, 2015 is the next Sunday that is also the 23rd of the month – this day would probably give you the best chance to speed without getting caught. Monday-the-22nds would probably be the day you would be most likely to get a speeding ticket – June 22nd, 2015 is the next of these. Make sure you plan your speeding accordingly.





[iv]An F-test on the day-of-the-week coefficients shows that the average numbers of tickets given out on Tuesday through Saturday are not significantly different. This test and its results are included in the Stata .do file.




PDF Version

Excel File

Notice About Stata Files: Because WordPress does not allow uploads of .do or .dta files (for Stata), I have uploaded the files as .doc(s). If you want to use them as .do or .dta files, simply download the .doc file and change the extension to the appropriate one.

2014dataraw (rename to .dta)

QuotaBehaviorTest (rename to .dta)

SpeedingDoFile (rename to .do)

Thought: At What Speed Should You Speed?


By Kevin DeLuca


Opportunity Cost of Reading this ThoughtBurner post: $2.28 – about 10.4 minutes

While I’m sure that many of you readers are outstanding citizens who would never ever even dream about ever breaking the law ever, I know that some of you are natural-born rebels and straight-up gangsters that look at the list of minor traffic violations[i] and say, “Nah, Imma do me.” Speeding ensues.

Most people will not drive faster than they feel comfortable driving, but the prevalence of speeding tickets suggests that oftentimes people’s maximum comfortable driving speed is above the set speed limits. According to the infographic on this webpage[ii], 20.6% of all drivers will get a speeding ticket over the course of a year, costing them an average of $152 per ticket. About 41 million people get speeding tickets each year, cumulatively bringing in over $6 billion in revenue from fines.

If people are making an appropriate cost-benefit analysis when they decide to speed, then these facts mean that the time saved from speeding over the course of a year is worth at least $6 billion. But I don’t think that most people are actually doing any calculations before they make their decision to speed, so there is a large potential for inefficiency – I suspect people are speeding in a non-optimal way.

There are costs and benefits to speeding, and the prevalence and price of speeding tickets are non-trivial, so ThoughtBurner is here to help you out. In Speeding Part 1, I will attempt to answer the question: at what speed over the speed limit should you, the driver, speed? In Speeding Part 2, I will help the government out by helping them decide how expensive a speeding ticket really should be (sorry everyone).


So, you want to drive somewhere and you’re wondering if speeding there to save some time is worth risking a speeding ticket. How do you decide? Before you can make an informed decision, you need to know a few things:

  • The time you will save by speeding
  • The (subjective) value of the time you will save by speeding
  • The probability that you will get caught speeding
  • The cost you will have to pay if you get caught speeding

The time you will save by speeding is fairly easy to calculate: the distance divided by your original (non-speeding) speed, minus the distance divided by your (speeding) speed. The trick is, for each mile per hour over the speed limit you travel, you don’t save the same amount of time. An easy example: say it takes you 10 minutes to get somewhere going 20 mph. If you go 40 mph, you’ll get there twice as fast – in 5 minutes, which means you’ll save 5 minutes of your time. If you go 60 mph, you’ll get there three times as fast – in 3.33 repeating (of course) minutes, which means you’ll save 6.66 repeating (of course) minutes of your time. The first 20 mph over the limit saves you 5 minutes, but the next 20 mph over the limit only saves you an additional 1.67 minutes. If you do some calculations, you can see that the higher the speed limit, the less valuable speeding is (in terms of time saved). But in general, we can show the relationship graphically, holding distance traveled constant:
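The worked example can be verified with a short helper:

```python
def time_saved(distance_miles, base_mph, fast_mph):
    """Minutes saved by covering distance_miles at fast_mph instead of base_mph."""
    return 60 * (distance_miles / base_mph - distance_miles / fast_mph)

trip = 20 * (10 / 60)  # a trip that takes 10 minutes at 20 mph (3.33 miles)
print(time_saved(trip, 20, 40))  # ≈ 5.0 minutes saved
print(time_saved(trip, 20, 60))  # ≈ 6.67 minutes saved
```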

Time Saved vs Speed

The subjective value of the time you will save by speeding is a bit trickier. It probably depends on a lot of different things – like how busy of a person you are, how late you are already running, how much you subjectively don’t like driving, etc. – and so I can’t know the actual level of your subjective value of the time you will save. But, I think it is safe to assume that the subjective value of time saved, or the utility of time saved – U(s) – is diminishing. For example, imagine that you are running 5 minutes late and you are deciding whether to speed to save 5 minutes or not speed (save 0 minutes). Those 5 minutes could be very valuable. Now imagine that you are 5 minutes late and you are deciding whether to speed to save 35 minutes or 30 minutes (yeah, you’re going, like, super-fast). When you are already saving a lot of time from speeding, e.g. 30 minutes, then saving an additional 5 minutes isn’t really worth much. Even if the utility from time saved wasn’t diminishing, actual time saved is diminishing as you go faster and faster, so the value of time saved will be diminishing as well. We can represent this graphically too:

Utility vs Speed

The probability of getting caught is another very tricky number to estimate. I spent a lot of time thinking about more sophisticated models, where the probability of getting caught is a function of the distance you are traveling, the speed at which you are going, and other factors, but I think that for our purposes the best thing for us to do is try to estimate the probability of getting caught per trip. I will provide actual estimates later, but for now let us call this constant probability of getting caught “p”. We can expect, given that only 1/5th of drivers actually get speeding tickets every year, that p is probably small.

Last, we need to know the penalty for getting caught. I will consider the case where there is a base fee for speeding, plus a fine that increases depending on how fast over the speed limit you were traveling, as is the case in Travis County (Austin, Texas)[iii]. This is not actually super common – I will consider alternate fine schedules later, but this particular case leads to an easier solution. The cost of the speeding fine, then, is linearly related to how fast you are speeding:

Fine vs Speed

If we multiply the graph above by p, we get a graph of the expected costs of speeding: the amount you have to pay if you get caught, weighted by the chance that you actually get caught.

Expected Costs vs Speed

We can combine all this information to solve what I will call the driver’s maximization problem, which is: maximize the benefits from speeding minus the expected costs of speeding. More precisely, the problem is:


Where U(s) is the driver’s subjective value of time saved (Figure 2) and E[C(s)] is the expected cost of speeding (Figure 4). Notice that the driver is choosing at what speed to drive which, in this model, will determine the value of both U(s) and C(s).

I will now consider two possible types of drivers: Punctual Perry and Lackadaisical Lucy.

  • Case 1: Punctual Perry

Punctual Perry doesn’t like missing out on anything. So, he always plans ahead and makes sure to leave early whenever he has to drive anywhere. Speeding and saving time doesn’t really give him much value, since he’s never really rushed for time. When Punctual Perry plots his driver maximization problem, it looks like this:

Punctual Perry

Notice that Punctual Perry never subjectively values his time saved more than the value of the expected cost of a speeding ticket for any given speed. This is because Punctual Perry is true to his name (punctual); he doesn’t need to save time since he’s always on time, so saving more time isn’t very valuable to him. He is more worried about the expected cost of the hypothetical ticket than saving a few extra minutes.

We don’t even have to do any math (yay!) to see what the solution to Punctual Perry’s driver maximization is: don’t speed. The expected costs are always higher than his benefits, so any amount of speeding leads to negative values for the driver maximization problem. If he doesn’t speed, there are no benefits but also no chance of getting caught speeding, and since zero value is better than negative value, Punctual Perry just never speeds.

  • Case 2: Lackadaisical Lucy

Lackadaisical Lucy is a more interesting case. She is typically late to things, which means that time saved by driving a little faster is more valuable to her compared to time saved by Punctual Perry. When Lackadaisical Lucy plots her curves for the driver maximization problem, it looks like this:

Lackadaisical Lucy

For Lackadaisical Lucy, there are speeds at which the value of the time saved is greater than the expected costs at that speed. If Lackadaisical Lucy chooses the right speed, she can maximize the benefits from speeding net of the expected costs. But what speed is the right speed? It is the speed at which the distance between U(s) and E[C(s)] is the biggest.

Refer back to the driver’s maximization problem. In order to maximize, Lackadaisical Lucy can simply take the derivative of the driver’s problem with respect to speed and set it equal to zero. Doing so results in:

U'(s*) = dE[C(s)]/ds
In words, this means that Lackadaisical Lucy should choose a speed, s*, where the additional benefit of speeding a little more is equal to the additional cost of speeding a little bit more (marginal benefit equals marginal costs). Graphically, this speed is shown on the graph at the point where the slope of Lackadaisical Lucy’s utility curve is equal to the slope of the expected cost curve:

Lackadaisical Lucy FOCs

The speed where the slopes are equal is the optimal speed that Lackadaisical Lucy should drive in order to maximize her utility. By choosing speed s*, Lackadaisical Lucy is maximizing the difference between the benefits of speeding and the expected cost of speeding, for some given distance. This is good, because Lackadaisical Lucy is a rational human being who wants to maximize her utility.
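To make the optimization concrete, here is a brute-force sketch with made-up numbers: the utility function below is entirely hypothetical, while the fine schedule and per-trip ticket probability match the Travis County figures used in the empirical section later in the post.

```python
import math

p = 0.0002                        # per-trip chance of getting a ticket

def utility(s):
    """Hypothetical diminishing value of driving s mph over the limit."""
    return 3 * math.sqrt(s)

def expected_cost(s):
    """Expected fine: $105 base + $10 per mph over, weighted by p."""
    return p * (105 + 10 * s)

# Lucy's problem: search a grid of speeds (0 to 30 mph over the limit)
# for the s* that maximizes utility minus expected cost.
speeds = [s / 10 for s in range(0, 301)]
s_star = max(speeds, key=lambda s: utility(s) - expected_cost(s))

# Because p is tiny, the optimum lands at the top of the grid (30 mph
# over here) - echoing the post's eventual conclusion that the expected
# cost barely restrains the optimal speed.
print(s_star)  # 30.0
```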

In summary, this model predicts that some people (or all people in some circumstances) will decide not to speed when the benefits from speeding never exceed the expected costs (Punctual Perry). And people who decide to speed (Lackadaisical Lucy) should drive at a speed where the marginal expected cost of speeding is equal to the marginal subjective benefit of speeding.

So, if you find yourself being a Punctual Perry, then not speeding is the right choice. But what if you are being a Lackadaisical Lucy? How do you know how fast to speed? What is the actual marginal cost of speeding?


Using the simple model developed above, I will now provide some empirical estimates to help you all solve your own driver maximization problems.

I’m guessing that, in real life, many people are Lackadaisical Lucys in the sense that there is a point where they value saving time more than the expected cost of speeding, though not necessarily just because they are always late. For example, they could just hate driving a lot so that speeding to drive less is worth the risk. Regardless of their reason, these types of drivers can use the model developed above to determine their own individualized solution to the driver maximization problem whenever they want to go somewhere.

The subjective value of time saved is all about you guys, and it could vary pretty widely across individuals (Punctual Perrys vs. Lackadaisical Lucys). Also, remember that the amount of time saved – and therefore your utility gained from it – depends on the distance you are traveling. But the probability of getting caught and the costs of traffic fines faced by everyone are the same, so I’ll first focus my attention on providing some guesses of these values.

Start with the statistic from above that says 20.6% of all drivers get a speeding ticket each year. The most recent estimates from the 2009 National Household Travel Survey[iv] put the average daily number of vehicle trips per driver at about 3 a day (see Table 3). This means that each driver makes (3*365) 1095 car trips every year. There is a 20.6% chance that at least one of those trips will result in a speeding ticket. Which also means that there is a 79.4% chance that none of those trips will result in a speeding ticket. If we let p equal the probability of getting a speeding ticket per trip, then it follows that (1 – p) is the probability of not getting a speeding ticket. Then:

(1 – p)^1095 = 0.794
The left hand side is the probability of not getting a ticket 1095 trips in a row, and the right hand side is the observed proportion of people who don’t get a speeding ticket each year. Solving for p gives:

p = 1 – (0.794)^(1/1095) ≈ 0.0002
That is, the probability of getting a ticket per car ride is one minus the 1095th root of 0.794, which comes out to be about 0.0002. While this may seem really low, it is.
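That back-of-the-envelope calculation is easy to check numerically:

```python
annual_ticket_prob = 0.206   # share of drivers ticketed per year
trips_per_year = 3 * 365     # about 3 trips a day

# Solve (1 - p) ** trips_per_year = 1 - annual_ticket_prob for p.
p = 1 - (1 - annual_ticket_prob) ** (1 / trips_per_year)
print(round(p, 5))  # 0.00021
```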

People drive a lot, and considering that drivers also probably have strategies for avoiding speeding tickets (e.g. don’t speed on certain roads where cops hang out), it is not that surprising to me that the probability of getting caught is so low. You actually have to be pretty unlucky to get a speeding ticket.

Using the fine schedule from Travis County, we know exactly how much the speeding ticket will cost you at any given speed over the speed limit. It is a $105 base fine, plus $10 per mph over the speed limit you get caught speeding[v]. Mathematically, this means that:

C(s) = $105 + $10·s
Adjust this by our newfound predicted probability p, and you get:

E[C(s)] = p·($105 + $10·s) = 0.0002·($105 + $10·s)

Remember that the maximizing condition is when marginal benefits are equal to marginal costs:

U'(s*) = dE[C(s)]/ds
Taking the derivative of E[C(s)] with respect to s:

dE[C(s)]/ds = p·$10 = $0.002
And substituting in gives:

U'(s*) = $0.002
This means that, if you speed, you should speed at s* where the marginal benefit of speeding is equal to 2 tenths of a penny. Which is almost nothing. So, your optimizing speed will be very close to the speed at which you will no longer gain any benefits from increasing your speed. For example, based on my own previous estimates of opportunity costs[vi], the value of $0.002 is approximately equal to the value of 0.54 seconds of leisure time.

Imagine how happy you would be if I told you that you would have an extra 0.54 seconds today to do whatever you want! If you would be at least that happy by speeding a little faster, then you should do it.

The implication of this is that people acting optimally (in Travis County) should basically just completely disregard speed limits and drive at the fastest speed they feel comfortable driving – well, marginally below it. Only go 4.9999 mph over the limit vs. 5 mph over the limit.

The results initially surprised me. People who aren’t making any optimizing calculations are probably still getting really close to the speed the model says they should choose. Since the probability of getting a speeding ticket per trip is so low, the expected costs are also very low, and the marginal cost of increasing your speed is even lower (close to zero). So just keep increasing your speed until the marginal benefit is close to zero as well. Graphically, it would look something like this:

Empirical Graph

Basically, yeah you were most likely already doing it right. Speed at the max speed you are comfortable driving, because the probability of getting caught is so low per trip that the marginal expected cost of speeding is less than half a penny. You were already acting optimally! Wow, brilliant.

Even if you change some of the assumptions in ways that lead to higher estimates of p, the results are essentially the same. For example, implicit in our estimate of p above is that people speed on every trip they make throughout the year and have a non-zero chance of getting a speeding ticket. However, it seems likely that this is not the case – people don’t always speed no matter what. So, let’s be extremely generous and assume that people only speed one day’s worth of trips (3 trips) per month. Then we would have:

(1 – p)^36 = 0.794, which gives p ≈ 0.0064
And substituting the new p into our optimizing condition gives:

U'(s*) = p·$10 ≈ $0.064
Which means that you should speed until the marginal value of speeding is worth only $0.064 in time saved, which, again based on my own previous estimates of opportunity costs, is about equal to the value of 17.5 seconds of leisure time. Again, imagine how happy you would be if I told you that you would have an extra 17.5 seconds today to do whatever you want! If speeding a little faster makes you at least that happy, you should do it (under the above specification).
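The generous scenario can be checked the same way:

```python
annual_ticket_prob = 0.206
speeding_trips = 3 * 12  # one day's worth of trips (3) per month

p = 1 - (1 - annual_ticket_prob) ** (1 / speeding_trips)
marginal_cost = 10 * p   # $10 of fine per extra mph, weighted by p
print(round(p, 4), round(marginal_cost, 3))  # 0.0064 0.064
```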

In general, the strategy of just speeding at whatever speed you want is probably a very close approximation to the optimizing strategy. I originally thought I would end this post by telling everyone to speed less, but instead I think that in the spirit of ThoughtBurner’s mission I have to encourage you all to ignore speed limits. What have I done.


The purpose of speed limits and speeding fines is to deter people from speeding. In this model, drivers acting optimally will basically ignore speed limits. This makes me wonder whether the City of Austin has set traffic fines high enough to deter speeding. For example, other states have speeding fines of up to $1,000[vii]. Perhaps there are other incentives that determine the schedule of speeding fines, such as revenue collection.

So, imagine now that you are a city government planner and you are tasked with determining how expensive speeding fines will be. How do you know you have chosen the right penalty amount? If drivers are basically ignoring speed limits right now because their expected costs are so low, then in order to get them to stop speeding either you need to increase the probability of getting a speeding ticket (hard way) or increase speeding fines (easy way). This will be the problem I will solve in Speeding – Part 2.










 PDF Version

Thought: Countries With Names That Sound Really Free Aren’t Actually Free


By Boyd Garriott


Opportunity Cost of Reading this ThoughtBurner post: $0.73 – about 3.3 minutes

What are some buzzwords to indicate that a country is free? A democracy? A government by the people? How about a republic? If these are indeed signs of freedom, then it would seem that the freest country on the planet would have to be none other than the Democratic People’s Republic of Korea. For those that are less geographically inclined, that’s North Korea. For those that are even less geographically inclined, that’s the bad Korea.

So what gives; why does the most unfree country have such a free-sounding name? North Korea isn’t alone in being a country notoriously bad on civil rights with a very “free-sounding” official name. Think Democratic Republic of the Congo or People’s Republic of China. Could it be that countries are compensating for their definitive lack of freedom with a “free-sounding” name? A normal person would probably take this at face value and have a laugh at the irony, but we’re not normal people at ThoughtBurner, so I’ve done some statistical analysis to figure out whether countries with free-sounding names are actually more oppressive.

First off, to figure out how free or oppressive a country is, I used a dataset[i] from Freedom House of around 200 countries that are rated based on their political rights and civil liberties. I downloaded the 2015 dataset, aggregated those two numbers and then converted the result to a scale between 0 and 100 with 0 being not free at all (think North Korea) and 100 being totally free (think USA). This allows us to calculate percentage changes in freedom.

Second, to figure out how “free-sounding” a country name is, I used the formal names of countries (as opposed to the short names; think Democratic People’s Republic of Korea vs. North Korea) which are listed on Wikipedia[ii]. I then gave countries a point for every term they used that seemed to endorse freedom: any variations of “Republic”, “Democracy”, or “People’s”. A score of 0 (Canada) indicates that a country’s name makes no endorsement of freedom while a score of 3 (back to our friends in North Korea) indicates that a country’s name sounds like the preamble to the Constitution.
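The scoring rule can be sketched as a simple substring count; the exact matching I assume here (case-insensitive, no word boundaries) may differ slightly from what was actually done:

```python
def freedom_score(official_name):
    """Count freedom-endorsing terms in a country's official name.

    Matches the substrings used in the post: 'Republic', any 'Democr...'
    variant, and "People's".
    """
    name = official_name.lower()
    return sum(name.count(term) for term in ("republic", "democr", "people"))

print(freedom_score("Democratic People's Republic of Korea"))  # 3
print(freedom_score("Canada"))                                 # 0
```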

I then regressed freedom scores on the number of freedom-endorsing terms, and I found some interesting results. For every “freedom-endorsing” term in a country’s title, its citizens can expect to be 14% less free, and the effect is statistically significant. To give you an idea of what that means, check out this chart below:


That’s right; as a country gets freer in name, it gets less free in reality. The average freedom for a country without any free-sounding descriptors is 69%, better than the world average of 61%. However, the average freedom for a country with three free-sounding descriptors – including the Democratic People’s Republic of Korea – is an appalling 11%. As a matter of fact, every country with that many free-sounding descriptors is classified as “not free” by Freedom House.

Even countries with just “Republic” in the name are, on average, 10% less free than countries without free-sounding descriptors. By the time that jumps to something like “Democratic Republic”, we’re talking 17% less freedom!

If that wasn’t ironic enough, consider this: countries with any variation of “king” or “kingdom” in their name are actually, on average, more free than countries with any freedom-sounding descriptor. That’s right: countries that explicitly endorse monarchy in their names are freer than those that explicitly endorse freedom. Granted, many countries with “king” in their name are modern European democracies like the United Kingdom and the Kingdom of Denmark, so that explains some of the irony.

There are three important lessons to take from this.

  1. North Korea is the bad Korea (again, this one is more of a reminder for the less geographically inclined).
  2. Countries that sound really free usually aren’t that free.
  3. Yep, countries compensate for being terrible places by having nice-sounding names.

Wonky Stuff:

Regression of Freedom Score on Number of Freedom Descriptors


Important Statistics


Further Explanation of the Methodology

I settled on the freedom descriptors I did after quite a bit of thought. First off, the words “Republic”, “Democracy”, and “People’s” are pretty bold adjectives that describe a form of government that represents the needs of free citizens. It should also be noted that I used the search string “Democr” to catch any name that described itself as “Democratic” as well as “Democracy”. Similarly, I searched the string “King”, which also catches “Kingdom”.
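The scoring rule above is simple enough to sketch in a few lines of Python. The handful of country names below is just an illustrative sample (the post scores the full list of formal names from Wikipedia); the search strings are the ones described in the methodology:

```python
# Illustrative sample of formal country names; the actual analysis used the
# full Wikipedia list of formal (long-form) names.
names = [
    "Canada",
    "People's Republic of China",
    "Democratic People's Republic of Korea",
    "United Kingdom of Great Britain and Northern Ireland",
]

# Search strings from the methodology: "Democr" catches both "Democratic"
# and "Democracy"; "Republic" and "People's" are matched directly.
PATTERNS = ("Republic", "Democr", "People's")

def freedom_score(name):
    """Count how many freedom-endorsing terms appear in a formal name."""
    return sum(1 for pattern in PATTERNS if pattern in name)

for name in names:
    print(f"{name}: {freedom_score(name)}")
# Canada scores 0; North Korea's formal name scores the full 3.
```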

Next, here’s some explanation of other contenders that I didn’t count. “United” seems to be an obvious contender, but it’s not a word that actually describes a free society; citizens under the rule of a dictator are “united”, but they certainly aren’t free. “State” also came up pretty often in the dataset, but a state is just a sovereign territory that doesn’t make any claim as to the type of government it employs. On similar grounds, I rejected “Principality” and “Commonwealth”. “Federal” came up, but that describes multiple states under a central entity – nothing about freedom.

I used countries’ official English names because… well… I don’t speak like a hundred languages.

Lastly, to be clear about the graph, the labels on the bottom are illustrative but still accurate. The true labels would be “0 Freedom Descriptors”, “1 Freedom Descriptor”, etc. However, I took the liberty of substituting common country titles that illustrate the corresponding number of freedom descriptors. For example, “Democratic Republic” is what a country with two freedom descriptors usually looks like, but there are exceptions such as the People’s Republic of China: it also has two freedom descriptors, but its title doesn’t fit neatly into the graph’s labels. In the end, however, I think it presents the information fairly.

Boyd Garriott is ThoughtBurner’s Chief Contributor. Boyd received his undergraduate degree in economics from the University of Texas at San Antonio. He currently lives in Washington D.C. and will be attending Harvard Law School in the fall.






PDF Version

Thought: Cinderella’s Incredibly Small Foot


By Kevin DeLuca


Opportunity Cost of Reading this ThoughtBurner post: $1.58 – about 7.2 minutes

I went to see the live-action remake of the Disney classic Cinderella the other day, and as I watched the prince’s men search all across the countryside for poor old Cinderella, it occurred to me that Disney characters, and especially the Disney royalty, tend to be a bit inefficient. Cinderella’s fine prince was the perfect example of sub-optimal Disney behavior. He wanted to find Cinderella after the royal ball, so he ordered his men to take the left-behind glass slipper and try to put it on every single maiden in the kingdom. This is probably the most inefficient way to find someone that I have ever heard of. Also, this plan only would have worked under very specific, and fairly unlikely, conditions, which I will explain below.

Of course, it is up to ThoughtBurner to advise the creative Disney “imagineers” on ways for them to not only provide the impressionable children of the world with lessons about love, life, and happiness, but also to teach the value of efficiency in the most economic sense. And yes, for those of you who were unaware, implicit in ThoughtBurner’s mission to improve efficiency and happiness in people’s everyday lives lies the responsibility of also improving the efficiency and happiness in imaginary people’s everyday lives.

Cinderella’s Incredibly Small Foot

The prince’s plan to find Cinderella was, as I mentioned above, to take the glass slipper around the countryside and try it on every woman in the kingdom. Whoever the slipper fit would be the woman that the prince would marry. The prince is making a huge assumption here: that no other woman in the kingdom has the same foot size as Cinderella. In the movie, it shows the prince’s men trying the glass slipper on many different women, and it never seems to fit! And, for most of the women the shoe is too small – this implies that Cinderella’s foot is very small. How small would Cinderella’s foot need to be in order for the prince’s assumption to be correct, you ask?

First, we need to know the distribution of women’s foot sizes. This is actually kind of hard to find, and I could only really find data on the distribution of Japanese feet online[i]. Because Cinderella isn’t Japanese, I instead found stats on female shoe sales[ii] – the data includes the quantity of each shoe of each size that was sold in the United States in 1998. If we assume that women buy shoes that fit their feet, then the distribution of the sizes of female shoes sold should reflect the distribution of women’s foot sizes fairly accurately. Below is the distribution of female shoes sold by size:


Female shoe sales by size seem to be normally distributed, which is great for us because we know a lot about the properties of things that are normally distributed (makes using statistics really easy). As you may have noticed, the half-sizes are all systematically lower than their whole-size counterparts, which makes the distribution less nice. I’m guessing that this has a behavioral explanation (people like to buy whole sizes more than half sizes, for some reason? Update: after I published this I heard from friends that some women’s shoe stores/brands don’t sell half sizes, which is a more likely explanation for the pattern we see). If you plot only the sales of whole sizes, it looks even more like a normal distribution:


Alright so, now that we know the distribution of foot sizes, we can figure out how small (or big) Cinderella’s feet needed to be in order for the prince’s plan to work. Using the same numbers that I used to create the graphs (see data link at end), I calculated the average female foot size to be 8.076 with a standard deviation of 1.468. (These are women’s shoe sizes, not inches).

Now, before we can calculate the size of Cinderella’s foot we also need to know how many people were in the prince’s kingdom. Let’s first make a few more assumptions that will favor the prince’s plan. First, assume that the prince does not actually try the slipper on every woman in the nation; instead, he stops as soon as he finds Cinderella. Also assume that the first and only woman who fits the glass slipper is actually Cinderella. The prince doesn’t know anything about how early or late in the process of shoe-trying-on Cinderella will be fitted, so we’ll assume that every woman has an equal chance of being the next woman to try the shoe on. This means that the prince is expected to try to put the shoe on half of the women in the kingdom before he finds Cinderella.

Perhaps not surprisingly, there doesn’t seem to be much data on the population size or demographic characteristics of magical fairytale kingdoms. I will use the next best substitute, the imaginary medieval worlds of gamers, for which there has apparently been research done on the populations of towns, villages, cities, and kingdoms. Thanks to S. John Ross and his page with some guesses of the population density of kingdoms (which seem to be based on real cities[iii]), we can provide a range of estimates of the size of the kingdom that Cinderella lived in, and from that we can see how small her foot needed to be in order for the prince’s plan to work.

Remember that about half of the population will be women, and that the prince only has to try the slipper on half of these women before he is expected to find Cinderella. In the chart below, I calculated how many women need to go through the slipper test before Cinderella would be found. Next, I found which percentile Cinderella’s shoe size would need to be in order for her to be the only person with that shoe size in that kingdom. Next, I converted the percentile to a z-score, and last I calculated how small (or big) her shoe size would have needed to be by subtracting (or adding) z-score-many standard deviations to the average shoe size.
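The steps above can be sketched with Python’s built-in `statistics.NormalDist`, using the mean and standard deviation from the shoe-sales data. The population split (half women, half of whom are tested before Cinderella is expected to turn up) follows the assumptions in the text:

```python
from statistics import NormalDist

MEAN, SD = 8.076, 1.468  # women's US shoe sizes, from the 1998 sales data

def required_size(kingdom_population):
    """Largest shoe size Cinderella could have while still being the
    only match among the women the prince expects to test."""
    women = kingdom_population / 2        # assume half the population is female
    expected_tests = women / 2            # prince expects to test half of them
    percentile = 1 / expected_tests       # she must be rarer than 1-in-that-many
    z = NormalDist().inv_cdf(percentile)  # convert the percentile to a z-score
    return MEAN + z * SD                  # z standard deviations below the mean

for pop in (300, 50_000, 8_400_000):
    print(f"population {pop:>9,}: size {required_size(pop):.2f}")
# A kingdom of 300 already forces roughly a size 4.8;
# New York City forces a size under 1.
```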


As you can see from the chart, even with a ‘kingdom’ as small as 300 people, Cinderella’s shoe size would have to be in the bottom 1.3% of the population, and she would be a size 4.82. To put this into perspective, a woman’s size 4 is barely over 8 inches[iv], and most vendors don’t sell below a woman’s shoe size 4. If the kingdom had a population of around 50,000 (about one-sixteenth the size of Austin[v], maybe not unreasonable for a “kingdom”), Cinderella would have to have a shoe size in the bottom 0.007% of the population, which corresponds to a size 2.5 (which I think means she would have to buy kid shoes). If Cinderella had lived in New York City, with a population of 8.4 million[vi], she would need a shoe size of 0.88. While it’s possible that Cinderella just had really really small feet and the prince just had a really really small kingdom, it seems highly unlikely that the prince’s plan would have worked. On top of that, the prince’s men are also going to be tied up traveling the kingdom. Assuming that these men are compensated for their work by the prince, I’m sure that the people of… wherever… would not be happy to know their taxpayer money was being wasted on such an inefficient manhunt.

I will now prescribe a much simpler way to find Cinderella, and show you that he could have found Cinderella much more quickly for only a fraction of the cost of his original method. Obviously, the prince should not try the shoe on every woman. All I suggest to the prince is that he test only blonde females in his kingdom.

(Note: In the new movie, Cinderella always has blonde hair. I was looking through pictures of the old Disney film online, and Cinderella’s hair color seems to change between blonde, dirty blonde, orange, and brown, and I can’t tell if this is representative of Cinderella’s true hair color or if we have another black-blue white-gold dress thing going on. All of the “modern” pictures of Cinderella[vii] go with a blonde-haired version. And according to this Cinderella wiki[viii], Cinderella’s hair color is “Strawberry-Blonde.” I will therefore continue with the assumption that Cinderella had blonde or a darkish blonde hair color, though I admit that if this assumption fails then my strategy would need to be slightly modified.)

How many women in the kingdom have blonde hair? This would normally be impossible to know, except for the fact that the prince actually invited every woman in the kingdom to his ball. This is perfect for us, because we have the entire population of interest trapped inside the palace for random sampling. Also, conveniently, Cinderella was the last one to arrive to the ball so we know that there is no selection bias (if you were worried that hair color was correlated to promptness or something). So, let us take a random sample of women from the royal ball and see how many are blonde. Here is a snapshot of the ball from the new film:


Notice anything? Including Cinderella, I count only 4 (possibly) blonde women out of the 19 pictured. Since this is a random sample of all of the women in our population of interest, we expect the mean from the sample to equal the mean of the population. So, in this kingdom only about 4 out of 19 women, or 21% of women, should have blonde hair. (If you use pictures of the ball from the old animated film, even fewer of the women are blonde.)

Simply by testing only blonde women, the prince would have spent only 21% of the original total cost if the rest of his plan had been exactly the same. In other words, I would have reduced his costs by about four-fifths with a super obvious modification to his strategy. To save even more on costs, he could have put out a royal decree saying something like: “I will compensate travel costs for any blonde woman who comes to my palace if she fits into this glass slipper.” Then, only Cinderella and other blonde women who think they might fit into the glass slipper would have any incentive to go the palace to try on the slipper. If Cinderella is the only one who fits the slipper, then the prince will just have to pay whoever takes Cinderella to the palace for the travel costs. If other women fit the slipper, then the prince will have to use something other than shoe size to identify Cinderella and may have to pay additional costs of transporting false positives. In either case though, I suspect it would be less expensive than having his men travel the countryside testing every blonde woman one at a time. (I know that this wouldn’t have actually worked in the movie since Cinderella is locked in the attic or whatever, but the prince’s original plan wouldn’t have worked either for the same reason. My method still has much lower expected costs.)

Also, by using my plan, the prince would have found Cinderella in about one-fifth the time it took him in his original plan, assuming no fixed time costs. If he wanted to find Cinderella as quickly as possible, he could have sent out a single man to each household with instructions to bring any blonde women in the household to the prince’s castle for a shoe fitting. Rather than taking the glass slipper to each house, just bring all the possible Cinderellas to the palace in one fell swoop and test the slipper on them. I’m not sure if this would be cost effective, but compared to his original plan he could probably use all the money I saved him to justify any additional costs of getting his Cinderella sooner. And isn’t that more romantic too? What princess wouldn’t love a prince who in addition to doing everything he could to find her as quickly as possible also did it while minimizing expected costs?! I can see the headline now: “Kingdom’s Tiniest-Footed Woman Marries Cost Minimizing Prince In Record Time!”



[ii] Footwear Impression Evidence – Page 191:









PDF Version

Thought: The “YOLO” Effect – Inappropriately Discounting The Future


By Kevin DeLuca


Opportunity Cost of Reading this ThoughtBurner post: $1.46

You can hear the call coming from inside the gated frat house parties across college campuses every Friday night. The Lonely Island called it “the battle cry of a generation.”[i] Even in everyday conversation, hesitation, prudence, and caution are all greeted with a new, four-letter challenge: YOLO.

The expression “you only live once” (YOLO) is often meant to serve as a justification for risky or unwise behavior. The idea behind the message, however, is stronger than that. Because we only have one life to live – you only live once – we should not give up current opportunities. We should do what we want to do, when we want it, because you might not have the same options later as you do now (since we only live once). Similar phrases have captured this idea throughout history; “carpe diem”, “live life to the fullest”, and “live like there’s no tomorrow.”

Before I speak of the YOLO effect, I’ll introduce the common idea (in economics) of “discounting the future” in choice models. The basic concept behind discounting is that people value the present more than the future. Easy example: would you rather have $10 right now or $10 in 5 years? Most people say $10 right now, since $10 right now is more valuable than $10 in 5 years; you might need that $10 now and it’s not clear that the $10 will be as useful in 5 years. (see here[ii] for more). In order to account for this phenomenon, economists often place some discount factor on future payoffs in optimization problems.

In a very broad and abstract sense then, our total life happiness can be modeled as:

H = H₁ + β·H₂ + β²·H₃ + … + β^(T−1)·H_T

Where each subscript (1, 2, …, T) indicates our happiness in some time period (until we die at period T), and where β is a discount factor between 0 and 1.

A β of 1 would mean that we value each period’s happiness exactly the same ($10 in 5 years is just as good as $10 right now), whereas a β of 0 would indicate that we don’t care at all about future happiness ($10 in 5 years doesn’t make you happy in the slightest).

Everyone is trying to maximize their total happiness – we are people and everyone wants to be happy. How do we do it? It looks like a straightforward problem, but the tricky part is that happiness in one period often affects happiness in future periods. Binge watching Netflix might make you extremely happy in period 1, but in period 2 (the next day, right before your exam) your happiness could be very low (as you cram for your test). Maybe you can’t pull it off either, and you end up failing your test during period 3, and maybe eventually even failing out of school months later (period 100 or something). Your total happiness would have been much higher if you had suffered in periods 1 and 2 – studying instead of watching Sons of Anarchy – and then received much higher payoffs in future periods.

How does this relate to only living once? In a sense, the phrase YOLO is an attempt to alter the value of discounting. It suggests that we make the present value of opportunities and payoffs much greater than any future payoffs. Another way to say this is that it places an inappropriate (as in non-optimal) discount factor on future payoffs (and consequences). To an extreme, it suggests using a β of 0 (in other words, discount the future completely). This is “The YOLO Effect.” Rather than basing your decisions on some objective function that takes into account present payoffs and future payoffs, The YOLO Effect changes your optimization problem to the form:


max H₁ + 0·H₂ + 0·H₃ + … + 0·H_T

Since β = 0, this simply becomes:

max H₁

This makes the maximization problem very easy; in order to maximize happiness, simply maximize happiness in the current period, period 1 (H₁). Though it may be true that in certain periods the solution to your lifetime happiness problem is the same as the solution to the current-period maximization problem, it seems unlikely that this would happen very often. The more likely scenario is that you end up getting high payoffs in the current period (high H₁) and lower payoffs in the future (low H₂ or even low H₁₀₀). While it is true that, if you are lucky, the YOLO mentality can lead to extremely high payoffs (H₁ could end up being really, really high), in expectation the cumulative amount of happiness from YOLO-based decisions will not be higher than the cumulative lifetime happiness derived from decisions based on appropriate future discounting (and β = 0 is not appropriate). In other words, the YOLO effect is causing people all over the world to be cumulatively less happy over their lifetimes.
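A toy two-period example makes the point concrete. The payoff numbers below are hypothetical, chosen only to illustrate the Netflix-versus-studying tradeoff from earlier: the β = 0 chooser picks whatever maximizes H₁ alone, while any reasonably patient chooser picks the action with the higher lifetime total:

```python
# Hypothetical payoffs (H1, H2) for two choices on the night before an exam.
actions = {
    "binge-watch": (10, 2),  # great tonight, rough exam tomorrow
    "study":       (4, 12),  # dull tonight, much better tomorrow
}

def lifetime_happiness(action, beta):
    """H1 + beta * H2 for a given action and discount factor."""
    h1, h2 = actions[action]
    return h1 + beta * h2

# YOLO chooser: beta = 0, so only the current period counts.
yolo_choice = max(actions, key=lambda a: lifetime_happiness(a, beta=0))
# Patient chooser: beta = 0.9, so tomorrow still matters.
patient_choice = max(actions, key=lambda a: lifetime_happiness(a, beta=0.9))

print(yolo_choice)     # binge-watch: 10 beats 4 when tomorrow is ignored
print(patient_choice)  # study: 4 + 0.9*12 = 14.8 beats 10 + 0.9*2 = 11.8
```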

In order to counteract the YOLO effect, I propose a new catch phrase: “Live like you’ll live an average lifespan conditional on your particular demographic characteristics!” Not as catchy, I admit, and unfortunately the acronym is a bit complicated (“LLYLAALCOYPDC” doesn’t exactly roll off the tongue). But, if you keep this phrase in mind whenever you make choices, and if you practice saying it enough to yell it quickly at parties and then proceed to do something responsible, you can ensure that you and your friends are maximizing your expected cumulative lifetime happiness – even when the YOLO effect is at its strongest.

(A bit of an aside – The Lonely Island song referenced at the beginning of the article actually takes YOLO to mean something like “Be extra careful with your life, because YOLO.” They meant this sarcastically of course – that is not how the term is used colloquially. Their interpretation, however, implies that you should instead maximize your expected life time. Notice that this is not the same as maximizing your happiness and is, in fact, also a suboptimal strategy (even though you may live longer). A longer life is not always a better life. It does not necessarily lead to inappropriate discounting of future happiness as I’ve described above, but it does cause suboptimal decision making, if your intention is to be as cumulatively happy as possible.)

But maybe a phrase meant to counteract the YOLO effect doesn’t actually help anyone to make decisions (other than to, potentially, stop them from doing something that is obviously suboptimal). The information you need, you might be thinking, is really 1) how long you are expected to live and 2) how much you should discount the future – the ‘appropriate’ discount rate.

Let’s start with the first bit. Using the Social Security website life expectancy calculator[iii], you can enter some information and it will predict how long you’ll live (it predicted that I would live an additional 59.2 years, not bad I guess). The U.S. Census Bureau has some interesting tables[iv] as well to help you figure out how much longer you can expect to live. This[v] website also has a bunch of different life expectancy stats, though I’m not sure how reputable it is based on its underground-esque appearance. Of course there is a lot of uncertainty, so I won’t claim that the ‘average age’ or ‘life expectancy’ is at all close to your expected lifetime – especially if you say “YOLO” a lot. In terms of estimating your expected lifespan, it might be best to ‘start’ at the average life expectancy, which is 78.8 years (for the United States[vi]), and then adjust from there based on your healthy/unhealthy habits, your family history, medical conditions, your ability to make wise decisions and so on.

Once you have a good idea of how long you have left to live, and have come to terms with your own mortality, you now need to figure out how much you ought to discount the future. This, as you may imagine, is not an easy question to answer. There have been studies that empirically try to estimate how people discount the future, and much of the data fits a hyperbolic discounting model (see this[vii] paper, sections 4.1 and 5.1) where, generally:

D(t) = 1 / (1 + k·t)

With t being time and k being some constant indicating the strength of the discounting. The implication of hyperbolic discounting is that your discount factor becomes smaller as you consider situations that are further away in the future. For example, suppose you are comparing the current period (t = 0) with the next (t = 1) (also assume that k = 1 for simplicity). You would discount today’s happiness by D(0) = 1 (so, not at all) and the next period’s happiness by D(1) = 0.5, or half! So if you receive $100 today you value it at $100, and if you receive $100 tomorrow you value it at only $50 (in current value). In other words, $100 tomorrow would only make you $50-worth happier today. But now let’s compare t = 100 to t = 101. This gives D(100) ≈ 0.0099 for day 100 and D(101) ≈ 0.0098 for day 101. So if you receive $100 on day 100 you value it at about $0.99 (in current value), and if you receive $100 on day 101 you value it at about $0.98. The difference between the two times is the same as in the first example – just 1 day – but the difference in your valuation is only $0.01, rather than $50.
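The $100 comparisons above can be reproduced with a minimal sketch of the hyperbolic discount factor (with the strength constant set to 1, as in the example):

```python
def hyperbolic(t, k=1.0):
    """Hyperbolic discount factor: 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

# Value today of $100 received at day t:
for t in (0, 1, 100, 101):
    print(f"day {t:>3}: ${100 * hyperbolic(t):.2f}")
# day 0 -> $100.00, day 1 -> $50.00, day 100 -> $0.99, day 101 -> $0.98
```

Note how the one-day gap costs $50 up front but only a penny a hundred days out, which is exactly the "near-term impatience, far-term indifference" pattern the empirical studies find.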

Now, we don’t know that these empirical studies necessarily reveal anything about the “best” way to discount the future – the people studied may not have been acting optimally. At this point, however, there is no consensus among economists as to which model of future discounting is optimal, or even which is most accurate. We don’t know whether these people were maximizing their lifetime happiness or whether they were following some other behavioral strategy, but if they were acting optimally then that’s good news for hyperbolic discounting models. Hyperbolic discounting seems to work well to explain the observed data in general, and may be a good, easy approximation.

I’m usually against using rules of thumb or heuristics to solve these sorts of problems, but I think that my prescribed anti-YOLO saying, “Live like you’ll live an average lifespan conditional on your particular demographic characteristics!”, actually works pretty well. It essentially prompts you to consider the cumulative lifetime happiness maximization problem rather than the current-period maximization problem (which is usually suboptimal). When you say this in your head or out loud, you should be considering:

-The potential for tradeoff in payoffs between time periods

-Your expected lifespan

-How payoffs may affect total lifespan

-How your preferences may change in the future (anticipate your future needs and try to predict how happy certain things will make you in the future as you grow older)

-How to appropriately discount the future, and

-The amount of uncertainty and risk surrounding each of your estimations (how wrong you could be and the consequences of being wrong)

Unfortunately, many of these things are person and preference dependent, so it is hard to give answers or advice that is any more specific. Ultimately, I am leaving you to figure out the solution to your own lifetime happiness maximization problem. Your success depends on how well you can accurately and optimally incorporate all of the things I mentioned above into your optimization problem formation, in addition to how well you can resist the allure of the instant gratification offered by the YOLO effect. For this problem, you are in the best position to estimate the maximizing solution, since you have the most information about yourself and how it might affect the form of the problem.

Essentially what I am saying – and I think that this creed is one that many economists would adopt in problems that rely so heavily on the individual – is one thing: “You do you.”



[ii] See for more.







PDF Version

Thought: The Value Of Reading A Blog Post


By Kevin DeLuca


Disclaimer: the estimated average opportunity cost of reading this ThoughtBurner post is $1.22

I’ve been wondering whether blogs are valuable or not, so I decided to construct a way to approximate the opportunity cost of reading a single post. By reading any of these posts you are, of course, foregoing the opportunity to do other things – more productive things. If you’re reading this post at work, for example, you could instead be working and making money; if you’re reading this post in your college class, you could instead be paying attention to the lecturer whose salary you’ve already paid. You could also be reading this during your leisure time, in which case you are not losing money (by not working), but instead you are choosing to spend your leisure time (which has some value to you) reading this post (still a cost).

Since I have greater-than-zero readers, I can assume that the blog isn’t totally worthless. In fact, everyone who reads a post on this blog is at least willing to pay the opportunity cost of reading it. This means that your expected value of reading a post is at least as big as the opportunity cost, otherwise you wouldn’t read it in the first place. So, the opportunity cost of reading a post can be thought of as a conservative estimate of the value of any blog post.

But how much is the opportunity cost? Well, the answer depends on who you are and when you read the post. The opportunity cost of reading this while you should be working is much higher than if you read this during your free time. It also obviously depends on how much time it takes you to read a post; this is just a function of how many words there are and how fast you can read. This post has about 1,659 words, and on average, people read at somewhere between 250 and 300 words per minute (wpm)[i]. Next, all we need is a guess at how valuable your time is under certain scenarios, given that you could be doing something else.

For the sake of simplicity, I will first consider three different cases: an average U.S. worker reading this specific blog post during their work hours, an average worker reading this post in their leisure time, and a resident undergraduate student at The University of Texas reading this during class (many of my readers are my friends and many of my friends are or recently were UT undergrads). I’ll also base my estimates solely on the opportunity cost of reading the article, though a more accurate estimate would include time costs of analyzing graphs and looking at pictures and exploring the other articles which I’m sure you’re all doing.

  • If you’re an average worker in the U.S. reading this while on the job:

Assume you make the median income in the United States, which is $51,939[ii]. Also, assume that you work an average number of work hours each year, which is 1,788[iii]. That means that you earn about $29.05 per hour worked, or about $0.48 per minute. Every time you read a blog post, therefore, your opportunity cost is $0.48 for each minute you read. Let’s assume that you’re a fast reader (300 words per minute) for more conservative estimates. Since this blog post is 1,659 words long, the opportunity cost of reading this article to an average U.S. worker is about $2.67. (Notice, however, that you still get paid for the hour worked, even though you stopped working to read this post. In a way, you have passed the opportunity cost onto your employer, who was technically paying you to read a blog post for a few minutes. Don’t worry, I won’t tell.)

  • If you’re an average worker in the U.S. reading this outside of work hours (leisure time):

The value of leisure time is a tricky thing to estimate – it isn’t directly measurable and partly depends on your income. After reviewing this research paper[iv] by Larson and Shaikh (2004), it looks like estimates put it around $11.27 per hour (about $0.19 per minute) on average (see Table 4). This is probably a low estimate for working people and a high estimate for college students, since the former are more valuable than the latter (no offense). Using this estimate and average reading speeds (300 wpm), the opportunity cost of reading this blog post (during leisure time) to an average U.S. worker is about $1.04.

  • If you’re an undergraduate resident college student reading this during class:

Probably not everyone who reads this is a working professional, and since I’m currently attending UT and have a lot of friends here I’m guessing that they’re mostly the ones reading this. College students’ time needs a different value estimate, since they aren’t foregoing work to read each article. If they are reading this during one of their classes, however, they are not taking advantage of something that they’ve already paid for. In this sense, they are losing money, and this is their opportunity cost.

I’ll assume that you pay the in-state, full-time student tuition. The lowest tuition charge is in the college of liberal arts at $5,047, so we’ll use that to form a lower bound of the estimate. Let’s assume that on average students take 15 hours, which means they’re in class for 15 hours per week. Each semester is 15 weeks, which comes to 225 hours per semester. The resulting cost per hour of attending a class at UT is $22.43, which I’ll round down to $22 an hour, or about $0.37 per minute. Again, we’ll use 300 words per minute, and this blog post is about 1,659 words long. In this case, the opportunity cost of reading this blog in class to a UT student is about $2.07 (and it will be higher for non-residents, non-liberal arts majors, and people who take fewer than 15 hours).
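All three cases follow the same arithmetic, which can be collected into one small sketch (figures taken from the post; expect penny-level differences depending on where you round):

```python
WORDS = 1659           # length of this post
WPM = 300              # the conservative (fast) reading speed
minutes = WORDS / WPM  # about 5.5 minutes of reading

def reading_cost(dollars_per_hour):
    """Opportunity cost of reading the post, given the value of an hour."""
    return dollars_per_hour / 60 * minutes

worker_hourly = 51_939 / 1_788   # median income over average annual hours
student_hourly = 5_047 / 225     # liberal arts tuition over class hours

print(f"worker on the job: ${reading_cost(worker_hourly):.2f}")  # ~$2.68
print(f"leisure time:      ${reading_cost(11.27):.2f}")          # ~$1.04
print(f"UT student:        ${reading_cost(student_hourly):.2f}") # ~$2.07
```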

Not too bad! Looks like this ThoughtBurner blog post is worth somewhere between $1.04 and $2.67.


A Deeper Look:

To explore further, I created a few charts that provide opportunity cost estimates under a larger range of conditions. For example, as I mentioned above the economics literature suggests that people’s leisure time is valued differently depending on your salary or hourly wage. Also, I take into account the fact that different majors at UT pay different amounts for tuition, and out-of-state tuition is much higher than in-state tuition. Lastly, I provide estimates of the opportunity costs of reading other things, like a random article from The New York Times, a random Politico article, and the cost of reading certain books. The results follow.

Workers, In Detail:

Below is a chart of the opportunity cost of work and leisure time for each quantile of income earners in the United States[v] in both hours and minutes, and I’ve specifically calculated the cost of reading this post for each group. The calculations are made assuming a 300 wpm reading speed and working an average number of hours each year (1,788). Leisure costs are from Table 4 in the paper referenced above.
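If your income isn’t in the chart, the work-time calculation is easy to reproduce yourself. A minimal sketch, using the post’s 1,788 average work hours per year and 300 wpm (the $50,000 salary here is only an illustrative input, not a figure from the chart):

```python
# Work-time opportunity cost of reading, given an annual salary.
HOURS_PER_YEAR = 1788  # average hours worked per year (from the post)
WPM = 300              # assumed reading speed

def work_cost(annual_income, words):
    """Dollars of work time spent reading `words` words."""
    hourly_wage = annual_income / HOURS_PER_YEAR
    return hourly_wage / 60 * (words / WPM)

# e.g. a $50,000 earner reading this 1,659-word post on the clock:
print(round(work_cost(50_000, 1_659), 2))  # 2.58
```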


UT Students, In Detail:

Below is a chart of how the opportunity cost of reading this blog differs across majors and in-state vs. out-of-state students. The calculations are based on a 300 wpm reading speed and a student who takes 15 hours (15 hours per week in class)[vi].



Since tuition costs are pretty similar across majors for a given residency status[vii], there isn’t much of a difference between majors, but notice that the opportunity cost of reading this blog in class is almost four times higher for out-of-state students (because their tuition is almost four times higher). Also, the hourly opportunity cost for students is interesting in its own right, because it can be thought of as the cost of skipping a one-hour class. On average, this amounts to $23.56 an hour for residents and $80.68 for non-residents. Moral of the story: DON’T SKIP CLASS! If you’re a UT student, you’re paying quite a lot per hour to be educated.

Popular Readings:

For the popular readings, I calculated the average opportunity cost for workers both during work hours and during leisure time for each item in the chart.


The NY Times article and Politico article were simply the most recent links in my Facebook newsfeed. The Politico article was only 378 words, and the NY Times article was an opinion piece totaling 1,180 words. The average book, according to this[viii] article, has 64,531 words. And the entire Harry Potter series has 1,084,170 words according to this[ix] website. For those of you who have read the entire series during your free time, just know that it cost you nearly $800 worth of your leisure time.
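These per-item costs all scale linearly with word count, so one small helper covers them. A sketch, assuming the post’s 300 wpm; the hourly leisure value is left as an input, since it varies by earner (the ~$800 Harry Potter figure works out to roughly $13 an hour):

```python
WPM = 300  # reading speed assumed throughout the post

def leisure_cost(words, leisure_value_per_hour):
    """Dollars of leisure time spent reading `words` words."""
    hours = words / WPM / 60
    return hours * leisure_value_per_hour

# The full Harry Potter series at 1,084,170 words:
harry_potter_hours = 1_084_170 / WPM / 60
print(round(harry_potter_hours, 1))  # 60.2 hours of reading
```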

It seems unlikely that you would read an entire book, much less the entire Harry Potter series, while at work, but, let’s be real, people read Politico and the NY Times all the time during work hours. Now we all know how much it costs our employers when we ‘take a break’.

Of course, you don’t actually know the exact opportunity cost of reading a blog post until after you’ve spent the time to read it. In the name of efficiency, I think from now on I will put the estimated opportunity cost of reading each post right under the title, so that everyone has better information before making their decision. Now the problem I’m stuck with is how to get people to just direct-deposit the money rather than read ThoughtBurner posts.



[ii] 2013 data. (Table 1)




[vi] In order to calculate weighted averages, student enrollment totals by department for in-state vs. out-of-state students were estimated by assuming each department has the same proportion of in-state students as the university overall (79.7% in-state and 20.3% out-of-state).

Student Ratio Source:

Enrollment Source:





PDF Version


Thought: Daily Optimization


By Kevin DeLuca


We are constantly faced with optimization problems throughout the day: What will be the fastest way to get to work today? Do I go through the drive-through or park and go inside? Should I commit to reading this ThoughtBurner post or do something else in my free time? We want to be as happy as possible, which means that for each of these choices we try to choose the best or optimal decision. Luckily, we’re usually pretty good at figuring out the best course of action and live pretty efficient lives.

Suppose, however, that at every little decision you made during the day, you made a less-than-optimal choice. You would have been happier doing something else, but for some reason you didn’t think hard enough about your decision. You made a mistake – you took the busier route to work today. A single optimization error may not make much of a difference, maybe only a few wasted minutes of your life. But imagine that out of the estimated thousands of decisions we make every day, 10 or 100 or even 1,000 mistakes are made. The effect starts to add up until you realize you’ve wasted a ton of your time or gotten yourself stuck in a bad mood.

I believe there is a huge loss in efficiency and happiness due to our natural inability to optimize every single little decision-making problem we face during a typical day. While a wasted minute here and an inconvenience there may not seem like a big deal, the effects start to add up over time and across people.

Why do we make these mistakes? The most obvious barrier is that, often, the potential benefit of ‘solving the optimization problem’ for these little, everyday decisions is less than the mental cost of the calculation – that’s why you skip it in the first place. We are efficient, and instead of solving each problem individually we use heuristics to answer them, usually by making assumptions or acting out of habit. But our assumptions aren’t always right, and our habitual actions aren’t always optimal. In fact, a lot of work in applied economics involves testing whether assumptions actually hold. In many of our daily optimization problems, the tools of economics could help us make better choices. We could learn, for example, under what conditions it would be better to take the drive-through rather than park and go inside.

Professional economists don’t worry about helping people solve the little decisions – usually they have their sights set on bigger fish, and rightly so. But the questions that are interesting to non-economists are about the little problems they face all the time, and I think that by framing and solving these problems with economics we could make a lot of people’s lives more efficient and happy. Using economic methods, we can test the validity of certain assumptions and, from there, dispel inefficient or unsupported behavioral prescriptions.

The cost to figure out many of these assumptions – the cost of performing research, collecting and analyzing data, along with disseminating the information – is too high for any one person to do. The extra minute or two saved by a more efficient decision isn’t very useful if it takes days or months of your time to figure it out. But if someone else provided you with the information, then you could costlessly incorporate it into your decision making and better optimize. And if enough people used the information, we might see a net overall benefit – the months of research might save, in sum, years of time, minute by minute, person by person.

Enter ThoughtBurner. Part of what this blog and website hopes to accomplish, in addition to providing unique economic commentary on a variety of issues, is to incur the cost of research and analysis for the ‘little things’ in life, the daily optimization problems. Drive-through or park and go inside? Let’s do observational research and look at the evidence. How drunk should you really be before you hit your beer pong skill level peak? Time to record stats on beer pong players. Should I like my friend’s Facebook post or not? These are the sorts of questions that people ask themselves all the time, but that have never been inspected with the rigor of formal economic analysis, which is exactly what ThoughtBurner research is meant to provide.

My hope is that people will learn something useful from ThoughtBurner, whether it’s the answer to an everyday optimization problem they’ve faced or a new way to frame an old issue. I figure that, while I don’t know quite enough about economics to answer the ‘big questions’ (yet), I can probably handle the smaller ones, and I hope to do it in a way that is practical, entertaining, and useful for everyone.

