Thursday, March 23, 2017

Naming streets

I believe that simple things done right are the bedrock of society: the bus line that's always running; the convenience store around the corner that's never out of bread, milk, or toilet paper, even during the worst snowstorms; or the reliable local newspaper. But there's perhaps no greater collective failure in this country than our massive incompetence at naming streets properly. Naming streets should be as simple as 1-2-3:

  1. A contiguous street gets a single name.
  2. A name is used only on one street per city.
  3. Name them in a pattern that's helpful for navigation.

To be clear, I'm talking about street names, not route designations, like U.S. 52 or State Route 39. A road could have a street name as well as a route designation, or even two route designations or more if geography forces the routes to consolidate for a stretch: Johnson Pass Rd. could be U.S. 52 and S.R. 39 all at the same time.

These rules seem clear, right? Rule 1 requires a little definition: a "street" may pass through multiple intersections in a straight or gently curving manner but must actually cross the other street. In other words, a "street" doesn't take a right angle at an intersection. Rule 2 requires a little clarification; let's allow "Maple Avenue" and "Maple Place" as two separate names, provided that they follow rule 3 by being close together--maybe even intersecting. But I don't believe cardinal indicators--"West Maple Avenue" and "East Maple Avenue"--ought to be allowed for separate streets. Those should be reserved for different sections of the same road.

The most common way the rules are violated is that two non-contiguous streets will get the same name. On a map, they're a straight shot, right in line with each other, but maybe there's a natural obstacle in the way, like a river. If I can't drive (or at least walk) from one end to another without turning, it's not one street; it's two. Give the two streets on the opposite river banks two different names.

You may think this doesn't seem like a big deal, but maybe I'll change your mind when I present to you the worst named street in the United States: Old Hickory Boulevard in Nashville, Tennessee. Look upon these maps and despair for your sanity. Our journey begins at Whites Creek, to the north of Nashville.


Crossing Eatons Creek Rd:


Crossing route 12, you may start to get an ominous feeling, noticing the Cumberland River to both the west and east:


Sure enough, you've hit a dead end:


This is west of Nashville.

Old Hickory Boulevard now jumps the river:


Please note: route 251 south of Old Charlotte Pike is Old Hickory Boulevard. Route 251 north of Old Charlotte Pike is a different road.


Old Hickory Boulevard jumps here, and gets a new route designation: route 254.

Next, OHB meanders along the south side of Nashville. Granny White is not exactly due south, but pretty close.


OHB now winds through Brentwood.


True story: I remember sitting in a Pargo's in Brentwood as a child when a tourist came into the restaurant in tears. "I've driven from one end of Old Hickory Boulevard to the other and I can't find this address!"

The manager took one look at her address. "Oh, this address is Whites Creek. That's the north side of town. This here's the south side." Hope you aren't in a hurry...

Now, watch what happens carefully after crossing 41A.


Did you see it? Old Hickory, which was route 254, took a right turn. Route 254 is now Bell Road.

OHB takes another jump:


As far as I can tell, that little section there is Pettus Rd.

Keep your eyes peeled:


Boom! Another right turn for you! Can't you just imagine a couple driving south on Old Hickory after getting off I-24 and the navicomputer is telling them to turn right onto OHB?

"But TomTom, I'm already on Old Hickory!" as they just breeze right onto Burkitt Road.

Maybe they'd have better luck if they got off I-24 going north on Old Hickory?


Nope.

BTW, Route 171 is now the third route designation. So what happens after that right turn off 171?


Old Hickory Boulevard vanishes at the star. The road used to continue, but then T.V.A. built a dam on the Cumberland river, creating Percy Priest lake to the southeast of Nashville. A section of OHB still exists under that lake. Does it confuse boat tourists as much as the land sections confuse car tourists?

Wait, Old Hickory was a ring road. Does it continue on the other side?


Hello? Anyone seen a crappily named road?


Oh, there you are!


And another jump!


And we're back on solid land. You'll notice Old Hickory now has its fourth route designation, route 265.

We'll just cross I-40. Now you'll recall that OHB already crossed I-40 once before (when OHB was route 251). That means we're now on the opposite side of Nashville: the east.


We just follow OHB north for a bit.


Hermitage, by the way, is the name of Andrew Jackson's house/plantation. Andrew Jackson was nicknamed Old Hickory because he was nuttier than a squirrel's poop.

Let's see... we'll just keep going north.


"Wait, WTF? We're on route 45 now? I thought we were on route 265... We must've changed back there. TomTom still says we're on Old Hickory, hon. Good ole Old Hickory won't let us down, right?"

OHB, now route 45, takes a northwest hook here because of the Cumberland river on both sides. (Like Old Hickory, it's everywhere in Nashville.) Here's the map:


"Oh look, dear, there's a neighborhood called Old Hickory! Oh, how cute."
"Son of a..."

Now, it happens to be worth zooming in a little bit on Lakewood neighborhood first:


That's right, folks. It has two names. Hadley Avenue and OHB. It's officially broken all the simple naming convention rules and spiked the ball in the end zone.

But now, let's see what happens a little to the north, in Old Hickory neighborhood:


Nothing good for our tourists. OHB just disappears. (Hadley Avenue, the jerk, continues to the right.) Why is the neighborhood called "Old Hickory" when Old Hickory Boulevard doesn't run through it!!

Where did that wascally street go?


Oh, it magicked itself across its eponymous neighborhood. Right. To be clear, that whole section of route 45 I haven't marked is all Robinson Road. All the time. Sure, the locals who are just running down to the Piggly Wiggly know they turn on OHB which then becomes Robinson. But streets aren't named for locals, are they?

In case you're wondering, Old Hickory Community is where all the lost tourist children go to live, if their mums or dads can't navigate the streets of Nashville and pick them up by closing time.

Surely, surely, surely, OHB has pulled its last trick?


This one is a doozy. You'll notice an East Old Hickory Boulevard to the south of route 45. That's odd. Why would the East OHB be south of regular OHB?

Because route 45 ain't OHB any more.

East OHB is it. The best part is what happens inside that star. The name jumps from route 45 to the surface street--but there's no physical connection. (Also, let's point out that East OHB goes around its corner, and at that non-intersection changes its name to Sandhurst Drive.)

"Getting lost is... just a way to have an adventure, dear! Just... um, wasn't planning this and we're low on gas..."
"Oh look, hon, an Old Hickory Community. Maybe they can help us!"

If there's an East OHB, is there a West OHB? Indeed there is:


but you gotta take another jump.

OHB is nearly out of tricks, though:


At the star, it changes names from West OHB just back to plain vanilla Old Hickory Boulevard. BTW, crossing I-65 a second time means that we're on the north side of Nashville again.

A few more miles--crossing I-24 a second time--and we're back to Whites Creek:


You can almost hear the tourists wailing: "I just wanted... [sob] just to see... some country music stars' homes! I didn't want to drive all around creation!"
"And where are our children?!"

Let's do the numbers:

Route designations: Five (251, 254, 171, 265, and 45)
Two street names simultaneously: Yes (OHB and Hadley)
Street takes a right turn: Three times (all between 41A, I-24, and 171 in southeast Nashville)
Jumps over water: Three (Cumberland river, Percy Priest lake twice)
Jumps over other roads: Four (251 to 254; over Pettus Rd.; from route 45 to East OHB; from East OHB to West OHB)
Jumps over neighborhoods: One--but double points because it's eponymous
Switching names while driving down the same street, not otherwise covered: Two (West OHB turning back into OHB; East OHB turning into Sandhurst Drive. The West OHB to regular OHB could be OK, I guess... No one is going to get lost if the numbering makes sense... which it doesn't.)

I think this deserves a total of 15 naming violation points: +1 for two names simultaneously, +3 for right turns, +9 for jumps, +2 for two name switches. (Or maybe 14 points, if you're cool with West OHB to OHB.)

I defy anyone to come up with a worse named street in the U.S. Map-based proof required.

BTW, in case you couldn't tell, I'm originally from Nashville. No offense is intended; I think it's fair to poke a little fun at your hometown.

Sunday, February 26, 2017

Gerrymandering

A federal court recently struck down a gerrymandering scheme in Wisconsin in a case that could set a major precedent for the country. Once every ten years, after each Census is completed, the boundaries for House of Representatives districts have to be re-drawn to keep their populations equal. The U.S. Constitution leaves it to state legislatures to decide how to draw these districts. Gerrymandering is the intentional abuse of that power; legislatures might gerrymander to keep minority groups out of power or to benefit one political party. The longtime practice of gerrymandering has always had its critics. As President Obama recently said, “Politicians should not pick their voters; voters should pick their politicians,” though Obama didn’t coin the phrase and wasn’t the first to express exasperation about gerrymandering.

Contrary to popular opinion, gerrymandering isn’t about protecting incumbents by giving them safe districts. The actual process of gerrymandering involves two steps: packing and cracking. Packing is the placement of your opposition’s voters into a few, concentrated districts. Cracking is the distribution of the remaining opposition voters into districts that they can’t win. Here’s what a gerrymandering scheme using packing and cracking could look like:

Possible party B gerrymander

District   Votes for A   Votes for B   Winner
   1            95             5          A
   2            45            55          B
   3            45            55          B
   4            45            55          B
   5            45            55          B
Total          275           225


District 1 is packed with party A supporters. Party A’s remaining voters are cracked across the other four districts, which they can’t win. Even though party A received 275 out of 500 votes, or 55%, they win only one district out of five, or 20%. There’s no way party B could have gerrymandered this any better. Four districts are safe enough that party B will likely never lose those races, even in a bad election year for their party. Trying to give their party a bigger margin in any of those races would only make another race closer. Getting the right vote totals in each district may require drawing some unusually shaped districts. Gerrymandering gets its name from an 1812 Massachusetts district map, approved by Governor Gerry, with one district that looked like a salamander. The map benefited his party, even though Gerry lost his own office for it.

The U.S. Supreme Court has never struck down a gerrymandering scheme that attempted partisan gain, only gerrymandering done to deprive minority groups of voting power. The Voting Rights Act prohibits racially motivated gerrymandering, and justices have also looked to the Equal Protection Clause of the Fourteenth Amendment. The Court has allowed the creation of districts where a minority group is a near majority of the voters to ensure that minority groups can elect their own representatives to Congress. In southern states, for example, African Americans vote so heavily Democratic, and white people vote so heavily Republican, that some districts must approach a 50-50 racial mix in order to elect black congresspeople. The Supreme Court has allowed this as long as race isn’t the primary factor in making the districts. Two racial gerrymandering cases will be heard by the Court soon, Bethune-Hill v. Virginia State Board of Elections and McCrory v. Harris, so the standards might be changing soon.

Partisan gerrymanders, however, have long been ignored, although Justice Kennedy has indicated that if a clear standard for judging gerrymandering’s severity could be found, he would rule against partisan gerrymandering as well. Along with the four liberal justices on the Court, Kennedy might bring forth a new Supreme Court precedent. The Court, by the way, cannot decline to make some ruling on the Wisconsin case.

The Wisconsin case is the result of an unlikely group of statisticians, political scientists, and lawyers attempting to serve up to Justice Kennedy a standard for judging gerrymandering. Their work is premised on the concept of a “wasted vote”: any vote beyond the 51% needed to win a district, plus every vote cast in a losing race, is considered “wasted.” In the hypothetical gerrymandering scenario, this is what the wasted votes look like:

Wasted votes in party B gerrymander

District   Votes for A   Votes for B   Winner   Wasted votes for A   Wasted votes for B
   1            95             5          A             44                    5
   2            45            55          B             45                    4
   3            45            55          B             45                    4
   4            45            55          B             45                    4
   5            45            55          B             45                    4
Total          275           225                       224                   21

Party B gerrymandered the districts to waste 224 of party A’s 275 votes. Party A’s wasted votes almost equal the total votes party B received! Of course, the plaintiffs would also have to prove that gerrymandering happened intentionally, but proving too many votes are wasted is the necessary first step. No mathematical evidence, no case.

Using the wasted votes standard proposed in the Wisconsin case, seven states have Congressional districts that are suspicious: Florida, Michigan, North Carolina, Ohio, Pennsylvania, Texas, and Virginia—all of them pro-Republican gerrymanders. In Pennsylvania, the Republican Senate candidate won 51% of the two-party vote, as did Trump. The Pennsylvania House delegation, on the other hand, will be thirteen Republicans to five Democrats, or 72% Republican. One reason all the current gerrymandering schemes are Republican is that the G.O.P. controlled more state legislatures after 2010, when the last re-districting was done.

Another standard proposed for measuring gerrymandering is to look at the median district. In the hypothetical gerrymander given before, the median district—the middle in a list from party A’s worst to best district—is a 55% to 45% result in favor of party B. Yet party A received 55% of the overall votes. This gap of 10 percentage points between the median district and statewide total is sizable.
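To make the two standards concrete, here is a minimal sketch in Python of both calculations, using the hypothetical five-district example above. It assumes 100 voters per district, so 51 votes are needed to win.

```python
# Minimal sketch of the two proposed standards, using the hypothetical
# five-district gerrymander above (100 voters per district, so 51 votes win).
districts = [(95, 5), (45, 55), (45, 55), (45, 55), (45, 55)]  # (A votes, B votes)

wasted_a = wasted_b = 0
for a, b in districts:
    needed = 51                      # votes needed to win a 100-voter district
    if a > b:                        # A wins: A wastes its surplus, B wastes every vote
        wasted_a += a - needed
        wasted_b += b
    else:                            # B wins: B wastes its surplus, A wastes every vote
        wasted_a += a
        wasted_b += b - needed

print(wasted_a, wasted_b)            # 224 and 21, matching the table

# Median-district standard: compare A's share in the median district
# with A's statewide share.
shares_a = sorted(a / (a + b) for a, b in districts)
median_share = shares_a[len(shares_a) // 2]                                         # 0.45
statewide_share = sum(a for a, _ in districts) / sum(a + b for a, b in districts)   # 0.55
print(statewide_share - median_share)                                               # 0.10, a 10-point gap
```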

Packing isn’t necessarily bad. A district could reflect a real community of interest, a group of people with similar social, economic, and political interests. For example, in Oregon, the Democratic candidate for Portland’s congressional district ran unopposed. The people of Portland share a similar enough view with the Democratic candidate that it deterred any Republicans from challenging the seat. The Supreme Court has ruled that predominantly African American or Hispanic districts can ensure minority representation in Congress and can serve a community of interest’s needs.

Likewise, cracking isn’t necessarily bad either. It depends on the ratio. A 50-50 split district is competitive. Even a 52-48 split could swing to the other party in some years. The real question is about one party being systematically disadvantaged by packing and cracking. So how does Oregon fare?

Oregon 2016 results in U.S. congressional races

District   % votes: Dem *   % votes: Rep *   Winner   % wasted: Dem   % wasted: Rep
   1             62               38            D            11              38
   2             28               72            R            28              21
   3            100                -            D            49               0
   4             58               42            D             7              42
   5             55               45            D             4              45

* Two-party vote share; third-party and write-in results are excluded for simplicity.

District 3 is “packed” for the Democratic candidate who ran unopposed. Offsetting this is the fact that District 2—all of eastern Oregon—is packed for the Republican. However, Republican voters seem to be “cracked” into Districts 4 and 5, central-west and southwest Oregon respectively.

How does Oregon look on either measure of gerrymandering? The Democrats took 58% of the two-party vote share. The median district is district 4, and Democrats won 58% there, so the gap is zero. However, on the wasted votes measure, Oregon is not doing as well. Democrats wasted 326,030 out of 991,008 votes statewide, or 33% wasted. Republicans wasted 524,332 out of 709,716 votes, or 74% wasted. Ideally, both parties would waste about 50%. The divergence between the two measures of gerrymandering—one good, one not-so-good—is why the Supreme Court wants to settle on one standard, not two or more competing definitions, of partisan gerrymandering.

Based on Oregon Republicans winning 42% of the two-party vote, the state might be expected to have about two Republican congresspeople out of five. One could imagine an alternative to the current district 4 and 5 arrangement that shuffled counties into two new districts: a greater Willamette Valley district comprising Salem, Albany, Corvallis, and Eugene, solidly Democratic; and a U-shaped Cascades, south-central, and coastal Oregon district, leaning Republican. This would move one of the districts into the Republican column. However, it’s often difficult to shift a few voters around and create balance as measured by wasted votes. The standards that people have proposed only kick in when gerrymandering creates a two-seat difference or more because it isn’t always possible in small states to make districts balanced. Geography can get in the way.

Some political scientists have proposed using computer programs to draw district boundaries, but this doesn’t solve the root of the problem. For example, a program might try to create more compact districts. That tends to pack Democrats into small, round city districts, wasting Democratic votes. Alternatively, a program might try to create short, straight-line district boundaries, cutting a state into districts like you might cut a cake into irregular polygons. That tends to pack Republicans into large, rectangular rural districts, wasting Republican votes. The bias in the program comes from preferring one type of shape to another. Natural and human geography can necessitate all different shapes to reflect real communities of interest. An eastern Oregon district makes sense, as does a coastal Oregon one, but one district is a near square and the other would be pencil-shaped.

The best hope is for states to put non-partisan commissions, not state legislatures, in charge of drawing reasonable boundaries. Iowa has a long-standing commission; Arizona, California, and New Jersey have newer commissions. There are strengths and weaknesses to each state’s set up for its commission, but the outcomes have been better with commissions than without. Perhaps the threat of losing a federal case for gerrymandering will persuade more state legislatures to enact a non-partisan option, only 204 years after Governor Gerry learned his lesson the hard way at the hand of Massachusetts voters.

Saturday, February 11, 2017

The Logit Score: a new way to rate debate teams

I recently published an article on a new debate team-rating method I invented, called the logit score. I hope the logit score will take its place among win-loss record, average speaker points, median speaker points, opponent wins, ranks, and so on as an effective way to rate (and thus rank) debate teams at a tournament.

What is the logit score?


The basic idea is simple: the logit score combines win-loss record, speaker points, and opponent strength into one score using a probability model. In other words, the logit score is the answer to the question, "Given these speaker points and these wins and losses to those particular opponents, what is the likeliest strength of this team?"

Let's take a step back and acknowledge a truth not universally acknowledged in debate: results should be thought of as probabilities, not certainties. A good team won't always beat a bad team--just usually. Off days, unusual arguments, mistakes, and odd judging decisions all contribute to a slight risk of the bad team winning. The truly better team won't always prevail. That means actual rounds need to be thought of as suggesting but not definitively proving which team is better. Team A beats team B. Team A is probably better, but then again, they could have had an off day, been surprised by a weird argument, or had a terrible judge. If team A got much, much higher speaker points, it was very likely the better team. If team A only edged out team B by a little bit, then the uncertainty grows.

That's where the logit score comes in. Estimating team A's actual, true strength depends on putting together all of those probabilities and uncertainties into one model. I won't get into the specifics (the details are in the article), but the basic idea is using a logistic regression to put the probabilities for wins and losses to specific opponents as well as specific speaker points received together. The logit score for a team means: "If team A were estimated to be stronger, these results would be a bit more likely, but those other results would be far less likely. If team A were estimated to be weaker, these results would be far less likely, even though those other results would be a bit more likely. This logit score is the proper balance that makes all the results most likely overall." Because it factors in all the results in one probability model, the logit score isn't sensitive to outliers: unusually high or low speaker points, losses to outstanding teams, and wins over terrible teams don't affect the logit score much at all.
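To make the idea a bit more concrete, here is a bare-bones sketch of the win-loss half of the model: a Bradley-Terry-style logistic regression in which each team's fitted coefficient serves as its strength estimate. The team names and round results are invented, and the speaker-point terms from the actual article are omitted, so treat this as an illustration of the approach rather than the published method.

```python
# Bare-bones sketch of the win/loss part of a logit-score model: a logistic
# regression whose coefficients are team strengths (Bradley-Terry style).
# Team names and results are invented; the real model also uses speaker points.
import numpy as np
from sklearn.linear_model import LogisticRegression

teams = ["A", "B", "C", "D"]
idx = {t: i for i, t in enumerate(teams)}
rounds = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("A", "D"), ("B", "D")]  # (winner, loser)

X, y = [], []
for winner, loser in rounds:
    row = np.zeros(len(teams))
    row[idx[winner]], row[idx[loser]] = 1.0, -1.0
    X.append(row); y.append(1)       # the round from the winner's perspective
    X.append(-row); y.append(0)      # mirrored row so both outcomes appear

# Mild regularization keeps undefeated or winless teams from getting infinite scores.
model = LogisticRegression(fit_intercept=False, C=10.0).fit(np.array(X), np.array(y))
logit_scores = dict(zip(teams, model.coef_[0]))
print(sorted(logit_scores.items(), key=lambda kv: -kv[1]))
```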

Does the logit score have any empirical results to back it up?


Yes. This is the bulk of my article.

I took a past college debate season, used those results to give every team a logit score, and then looked to see how well logit scores "retrodicted" the actual results in a season. That is to say, how often did the higher logit scoring team win rounds against the lower logit scoring team? As a baseline of comparison, I also did the same kind of analysis by ranking the teams by win-loss record.

The logit score rankings got slightly more rounds correct than the win-loss record rankings.

The slightly higher accuracy is not, on its own, a reason to rush to adopt logit scores. It merely proves that the logit scores aren't doing anything crazy. For the most part, the logit score reshuffles teams ever so slightly among their nearest peers. The moves are slight ups or downs, not drastic shifts.

The real reason to consider using logit scores is that (a) they are less sensitive to outliers, which can matter a lot for a six- or eight-round tournament; and (b) they factor in more information. Win-loss records only use speaker points as a tiebreaker; points are secondary. Measures of opponent strength usually come third. In other words, a team that gets a really tough random draw and goes 4-2 after dropping the first two rounds might miss out on breaking if no 4-2s break--win-loss record comes first, and opponent strength won't factor in under that scenario. The logit score, on the other hand--because wins, points, and opponents are all factored in at once--could reflect that this team is in fact very strong, because it only lost two rounds and both were to very good opponents. (See how important it is to be less sensitive to outliers?) Factoring in more information also rewards well-rounded teams: teams that squeak out close wins without earning great speaker points fare worse under a logit score system than under a win-loss-then-speaker-points system.

Thursday, March 31, 2016

Standards-based grading; standardized testing

It's been a while since I've written anything--life gets in the way. Mostly, I've been working on my new book, Statistics for Debaters and Extempers, which is 23 chapters out of 29 written. I keep writing chapters but keep adding new ones to the list. It's like the Winchester House. However, I do have some thoughts I want to share about teaching.

One post I'm proud of is the one about grading. Percent grades are not very informative for teachers. Standards-based grading (SBG) is far better. If you're not familiar with SBG, let me explain it really briefly. The idea is that for each standard (a skill or piece of knowledge students are supposed to learn) on each assignment, you mark a score that the student earns. These scores often run from 1 to 4, where 1 is "not demonstrated at all"; 2 is "developing"; 3 is "demonstrated"; and 4 is "mastery". Or some such scheme. For example, on a math test on fractions, a student might receive a 4 on the adding-fractions standard but a 3 on the multiplying-fractions standard. All the other standards for the year would be left "N/A" for that test. SBG can exist side-by-side with a percent grade, too.

Ideally, students would be assessed on each standard multiple times. They could demonstrate mastery on the standard on tests, homework, or projects. Students should have to show at least a 3 on a standard multiple times, say three times, to earn an overall 3 on it. An SBG scheme might also look only at the most recent three times a standard has been assessed. For example, a {2, 3, 3} could be coded as a 2, a {3, 4, 3} coded as a 3, and a {3, 4, 4} coded as a 4. The student earning a 2 wouldn't be penalized; they'd be given another chance to earn a 3. The other two students who earned 3's and 4's wouldn't need another assessment.

One thing I hadn't thought about before: SBG opens the door to indicating to students which test, quiz, and homework questions reveal which level. For example, one could mark questions as 2's, 3's, and 4's. A teacher could explain that getting all the 2's right is a necessary developmental step but not an endpoint. A student who can answer all the 2-level questions right should recognize the achievement but push himself or herself to do the 3-level questions. Likewise, a student getting all the 3-level questions right should recognize the achievement but push to do 4's. It basically, to use a buzzword, allows the teacher and student to differentiate the work they do. Kids at the top could be told, "When you do your homework, spend half the time on 3's to prove you can do them, and spend the rest of your time doing the 4's for exercise." Kids in the middle could be told, "Spend a third of your time on 2's to prove you can do them, a third on 3's to really exercise, and a third on 4's to see if you can really stretch." Kids at the bottom could be told to spend equal time on 2's and 3's. It gives kids of every ability a chance to do comfortable practice and also time to practice for growth.

* * *

A completely random idea: why do we have the S.A.T.? I think the biggest reason colleges want to keep it is that it is hard to know what schools' curricula cover and what their grading means. Grades from one school aren't really comparable to grades from another.

But what if the S.A.T. 1 format (you know, one hour each of math, reading, and writing) was basically ditched in favor of the S.A.T. 2 / A.P. subject style tests? Colleges could verify what each school's transcript actually meant. Even if the tests aren't necessarily accurate for individual kids, they would be accurate for an entire school's worth of test-takers.

Here's how I imagine it working. Gone are Saturday tests. Gone are students being solely responsible for signing up (this harms poor kids and kids who are the first in their families to go to college). It is the school's responsibility to look at the different test options and sign the kids up for the right tests. These tests would happen in May, during the school day, just like the A.P. tests do.

Math, English, and foreign languages would only need to be tested in the May of junior year. Obviously there would need to be a different exam for each foreign language. The English exam could have two options, say, a regular level exam and an honors level exam. (I imagine a vast chunk of material that overlaps between the two so that scores are comparable.)

Math would be a bit tricky. There would need to be several different exams reflecting the fact that juniors end up in very different places. The school would be responsible for guiding students in the different classes to pick the right exam. I imagine these tests would be about three hours, like the current A.P. tests are.

Sciences and history would be even trickier. Every student basically takes biology, chemistry, and physics but the order differs from school to school. Most schools do biology in freshman year, but some start with physics. In history, the usual sequence is world history, European history, and U.S. history, but there are many deviations from that pattern. However, this seems like it is a surmountable problem for the test designers. The bigger problem to me is making sure that these subject tests don't get bloated and require extensive cramming of facts and instead test higher level scientific and historical reasoning skills. (These subjects are the A.P. tests that come in for the most abuse for this issue.) To keep things balanced and prevent bloat, each of these tests would be kept to one hour.

Basically, I'm talking about expanding the A.P. tests for all students, not just at the honors level but also at the regular level. Everyone submits ten scores: math, English, foreign language, three sciences, three history, plus one more of their choice (could be computer science, or economics, or art history--whatever they want). Junior year, we're talking about a week of testing, but in sophomore and freshman year, it would only be two hours of testing (science plus history), so they would more or less have normal classes during that week. It's even possible to devise a basic schedule:

Monday - English
Tuesday - Sciences + optional tests
Wednesday - Languages
Thursday - History + optional tests
Friday - Mathematics

People complain about the inequity of A.P. testing, and I agree. But making the A.P. tests mandatory and putting the burden on schools solves that problem. And my system obviates the need for giving the S.A.T. 1, which is inequitable because preparing for it requires work outside of school. This hurts the poor kids who won't be able to get any additional help for it.

Sunday, October 18, 2015

Houses and Algebra

Buying and selling a house are major financial decisions, but ones where I believe a lot of people do the math wrong and fail to properly determine their net profit or loss from homeownership. It is also a good example of how students in an Algebra 1 class could learn to build an equation.

In a traditional Algebra 1 class, an equation would be presented to students first, like so:

m(x - y) + 0.94f - i = n

where m is the months of occupancy, x is the monthly savings of owning over renting, y is the monthly interest on the downpayment, f is the final sale price, i is the initial price, and n is the net profit. Got that? No? Who cares - here's 10 problems, plug in the numbers and go. I fail to see the point of it.

A better way to do it


Basically, let the students build the equation.

There are two things to consider, both related to opportunity costs. The first is the monthly cost to own a house - the mortgage, insurance, and property taxes - compared to monthly cost to rent. Utilities would be the same, so both columns of the ledger should ignore utilities. Let's define this as x, where x = monthly rental cost minus monthly cost to own. Students could work with some specific examples and determine what the sign of x indicates. This is knowledge students in Algebra 1 are still reinforcing. (A positive x indicates that it is cheaper to own. A negative x indicates that it is cheaper to rent.)

The second thing to consider is that buying a house necessarily entails tying up a down payment that could have been an investment. Call the monthly return on this investment y, the opportunity cost of not investing the money elsewhere. If the down payment is $50,000 and the interest rate one can get in a safe account is 3%, then y is about $125 per month. This variable is, of course, always positive. In Algebra 1, students wouldn't know how to calculate the monthly interest, but it is worth them knowing where that variable is coming from.

Next, I would ask students to think about the true monthly benefit to owning, giving them several different examples. After that, I would ask them to write a general expression for it (the true monthly benefit to owning is x - y) and ask them to explain what the sign of this quantity shows them. If this quantity is positive, the homeowner is saving money each month. If it's negative, the renter is saving money each month. This quantity needs to be multiplied by the months of occupancy to come up with the total savings or total cost to the homeowner.

Now onto sale price.

There are four possibilities. There are the two trivial-to-understand ones: (a) the homeowner both makes money on the sale AND saves money each month by owning, in which case the person has clearly made money by owning; and (b) the homeowner both loses money on the sale and on the monthly cost compared to renting, in which case the person has clearly lost money by owning.

The other two possibilities are trickier: (c) the homeowner loses money on the sale but the monthly benefit is positive, and (d) the homeowner makes money on the sale but the monthly benefit is negative. In both cases, the answer depends on the specific amounts. Let's have the students work with some specific numbers to make sure that they see what's going on.

Let's say the homeowner is saving $400 a month on the mortgage compared to renting. The downpayment was $50,000, so that's $125 per month in foregone interest, so the actual monthly benefit to owning is $275. Now let's say the person lives in the house for 7 years. Perhaps the loss on the sale of the house is $20,000. (Don't forget to multiply the final sale price by 0.94 because of the real estate transaction fees when calculating the net profit or loss!) Did this homeowner come out ahead?

7·12·275>20,000

Just barely, but yes. In this case, the positive quantity of monthly savings (times months) is greater than the one-time sale loss.

As another example, consider someone losing $200 a month on the mortgage compared to renting (it's a very cheap rental market!). With a $50,000 downpayment, the actual monthly loss is $325. Let's say the person lives in the house for 5 years and realizes a profit of $15,000. In this case:


5·12·325>15,000

This is a net loss overall. The monthly loss (times months) is greater than the one-time profit realized on the sale.

At this point, students would be ready to write the equation after working with several examples. Furthermore, why not have students write equations with long variable names?

months of occupancy · (monthly savings over renting - monthly interest on the downpayment) + 0.94 · final sale price - initial price = net profit

This is an equation they would actually understand, because they built it themselves, working with examples first, confirming what the signs of each part mean, and because it's verbose. Now they have some algebra knowledge and some real-world knowledge.
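For what it's worth, here is a tiny Python version of the same equation. The function just encodes m(x - y) + 0.94f - i = n; the sale prices in the demo are hypothetical, chosen so the after-fee loss is roughly the $20,000 from the first worked example.

```python
def net_profit(months, monthly_savings, monthly_interest, final_price, initial_price):
    """Net profit (negative = loss) of owning versus renting over the occupancy."""
    monthly_benefit = monthly_savings - monthly_interest       # x - y
    sale_result = 0.94 * final_price - initial_price           # 6% transaction fees
    return months * monthly_benefit + sale_result

# First worked example: $400/month cheaper to own, $125/month of foregone interest,
# 7 years of occupancy, and hypothetical prices giving roughly a $20,000 sale loss.
print(net_profit(7 * 12, 400, 125, final_price=300_000, initial_price=302_000))  # 3100.0
```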

Here's the New York Times' rent vs buy calculator. And here's Vox on the matter, raising the good point that buying a home can force people to "save" in paying off the principal of the loan.

Saturday, July 4, 2015

Study of speaker points and power-matching for 2006-7

For my 100th blog post, I did an experiment to try different tabulation methods for debate tournaments. The benefit of an experiment is that the exact strength of each team is known, and the simulated tournaments introduced random deviation in performance in each round. The size of the deviation in performance was based on observed results.

The results of the experiment showed that, even after only six rounds, median speaker points is a more accurate measure of a team's true strength than its win-loss record. Furthermore, the results showed that high-low power-matching improved the accuracy of the win-loss record as a measure of strength (but only to the same level of accuracy as median speaker points) and high-high power-matching worsened its accuracy.

Description of the study


This experiment led me to do an observational study of the 2006-07 college cross-examination debate season. I analyzed all the varsity, preliminary rounds listed on debateresults.com: 7,923 rounds; 730 teams. This was the last year when every tournament used the traditional 30-point speaker point scale. Each team was assigned a speaker point rank from 1 (best) to 730 based on its average speaker points. Each team was also assigned a win-loss record rank from 1 to 730 based on the binomial probability of achieving its particular number of wins and losses by chance. Thus, both teams that had extensive, mediocre records AND teams with few total rounds ended up in the middle of the win ranks.
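For the curious, here is a small sketch of the win-rank calculation. The records are invented, and I am assuming "by chance" means the probability of winning at least that many coin-flip rounds; the original analysis may have computed the probability slightly differently.

```python
# Sketch of ranking teams by the binomial probability of their record occurring
# by chance (coin-flip rounds). Records below are invented for illustration.
from scipy.stats import binom

records = {"Team 1": (9, 3), "Team 2": (5, 5), "Team 3": (2, 2), "Team 4": (3, 9)}  # (wins, losses)

def chance_of_record(wins, losses):
    """Probability of winning at least this many of these rounds by pure chance."""
    return binom.sf(wins - 1, wins + losses, 0.5)

# Smaller probability = more impressive record = better win rank. Teams with
# mediocre records or few rounds land in the middle, as described above.
win_ranks = sorted(records, key=lambda t: chance_of_record(*records[t]))
print(win_ranks)
```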

Next, I analyzed every individual round using the two opponents' point ranks and win ranks. For example, if one team had a good point rank and one a bad point rank, then of course the odds are quite high the good team would win. On the other hand, if the two teams were similarly ranked, then the odds are much closer to even. Using the point ranks, I did a logit regression to model the odds for different match-ups. And I also ran a separate logit regression for win ranks. Here are the regressions:


The horizontal axis shows the difference in the ranks between the two opponents. The vertical axis shows the probability of the Affirmative winning. For example, when an Affirmative team was 400 ranks better (a smaller number) than its opponent, it won about 90% of those rounds. These odds are based on the actual outcomes observed in the 2006-07 college debate season.
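Reproducing the regression is straightforward once you have round-level data. Here is a hedged sketch with a handful of fabricated rounds standing in for the 7,923 real ones; it fits the probability of an Affirmative win as a function of the rank difference.

```python
# Sketch of the round-level logit regression: probability the Affirmative wins
# as a function of rank difference (Aff rank minus Neg rank, smaller = better).
# The rounds below are fabricated placeholders for the real debateresults.com data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rank_diff = np.array([[-400], [-250], [-120], [-60], [0], [40], [150], [300], [420], [-30]])
aff_won   = np.array([   1,      1,      1,     1,   0,   1,    0,     0,     0,     1])

model = LogisticRegression(C=1e6).fit(rank_diff, aff_won)   # near-unregularized fit
print(model.intercept_, model.coef_)                        # intercept and slope
print(model.predict_proba([[-400]])[0, 1])                  # Aff win probability at -400 ranks
```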

The belief in the debate community was that speaker points were too subjective -- in the very next season, the format of speaker points was tinkered with and changed. The community settled on adjusting speaker points for judge variability, that is, using "second-order z-scores." Yet my analysis shows that, over the entire season, the average speaker points of a team is a remarkably good measure of its true strength. Making a lot of adjustments to the speaker points is unnecessary.

First, note how similar the two logistic regressions are. A difference of 100 win ranks, say, is as meaningful for predicting the actual outcomes as a difference of 100 point ranks. Using the point ranks regression "predicts" 75% of rounds correctly, while using the win ranks regression "predicts" 76% correctly. Both regressions "predict" each team's win-loss record with 91% accuracy. (This discrepancy between 75% and 91% occurs because, overall, many rounds are close and therefore difficult to predict -- but for an individual team that has eight close rounds, predicting a 4-4 record is likely to be very accurate.)

What is impressive to me is that, even without correcting for judge bias, the two methods are very comparable. Bear in mind it is NOT because every team receives identical win ranks and point ranks. In fact, as you will see in the next section, some teams got quite different ranks from points and from wins!

Power-matching


In the second part of my analysis, I looked at how power-matching influenced the results. I could not separate out how each round was power-matched because that information was not available through debateresults.com. But college debate rounds tend to be power-matched high-low, which is better than power-matching high-high (as my experiment showed). I eliminated teams with fewer than 12 rounds because they have such erratic results. This left 390 teams for the second analysis.

The goal of power-matching is to give good teams harder schedules and bad teams weaker schedules. Does it succeed at this goal?

No:


I made pairwise comparisons between the best and second-best team, the second- and third-best team, and so on. It is common for two teams with nearly identical ranks to have very different schedules. The average difference in schedule strength is 68 ranks apart out of only 730 ranks, which is almost a tenth of the field! One team may face a schedule strength at the 50th percentile, while a nearly identical team faces a schedule strength at the 60th percentile. Bear in mind that this is the average; in some cases, two nearly identical teams faced schedule strengths 30 percentiles apart! I cannot think of clearer evidence that power-matching fails at its assigned goal.

Finally, I performed a regression to see whether these differing schedule strengths are the cause of the discrepancy between win ranks and point ranks.

Yes:


The horizontal axis shows the difference between each team's rank and its schedule strength. The zero represents teams that have ranks equal to schedule strength. The vertical axis shows the difference between each team's win rank and point rank.

Teams in the upper right corner had easier schedules than they should have (under power-matched) and better win ranks than point ranks. Teams in the lower right corner had harder schedules than they should have (over power-matched) and had worse win ranks than point ranks. Having easy schedules improved win ranks; having hard schedules worsened win ranks. The effect is substantial: r^2 is 0.49. Of course, some of the discrepancy between the ranks is caused by other factors: random judging, teams that speak poorly but make good arguments, etc. But power-matching itself is the largest source of the discrepancy.

Given that the schedule strengths varied so much, this is a big, big problem. I know that tab methods have improved since 2006-7 and now factor in schedule strength; this analysis should be rerun on the current data set to see if the problem has been repaired.

Conclusions



  1. Speaker points are just as accurate a measure of true team strength as win-loss record. This confirms the results of my experiment showing that power-matched win-loss record is at rough parity in accuracy to median speaker points.
  2. Power-matching as practiced in the 2006-07 college debate season does not give equal strength teams equal schedules. (This method is probably still in use in many high school tournaments.)
  3. Unequal schedule strengths are highly correlated with discrepancies in the two ranking methods, point ranks and win ranks.


One could argue for power-matching on educational grounds: it makes the tournament more educational for the competitors. However, it is clear from this analysis that power-matching is not necessary to figure out who the best teams are. In fact, it might actually be counterproductive. Using power-matched win-loss records takes out one source of variability from the ranking method -- judges who give inaccurate speaker points -- but adds an entirely new one: highly differing schedule strength!

Friday, June 5, 2015

College degree

My snide summary of marketing is, "Find people who are willing to pay more, then charge them more." Searching on a specific airline's website is an indicator that you are willing to pay more, so it costs more to buy directly from the airline than from an aggregator. Buying shampoo at the salon is an indicator you are willing to pay more. My favorite example: premium gas. It is not actually better for your car; it just costs more.

How about a college degree?

Certain selective colleges have managed to distinguish themselves as "worth more." Parchment has an innovative method for divining applicants' perception of schools' worth. They treated each applicant's decision as votes. For example, a student who got into Columbia, Duke, and Stanford and chose Stanford votes for Stanford and against Columbia and Duke. Parchment compiled all these votes using an Elo method to determine which colleges have distinguished themselves in applicants' minds.

How the schools managed to distinguish themselves is a great question. Many did it through their age - our oldest colleges are often the most esteemed. Others did it through the reputation of their graduate schools. Sports catapulted other schools onto the scene. However, selectivity in admissions is the key variable. Maybe it is because U.S. News and World Report's college ranking method weights selectivity so highly, but even without the U.S. News rankings, selectivity would definitely affect people's perceptions. (Side note: the rankings seriously distort colleges' behavior.) People assume hard-to-obtain goods are worth more.

Are these schools, in fact, worth more?


In terms of the content of the courses, there is probably little difference. For the most part, a course in differential equations at Yale covers about the same topics at about the same pace as one at Ohio State. It is important to understand that most college courses are not special snowflakes but (cough) commodities. Of course, college professors do invent new courses, and there are programs unique to an individual school. But many courses are commodities. One may get a better or worse teacher, but because schools don't place much weight on teaching in professors' evaluations, teacher quality and school reputation don't have much correlation.

Of course, course content and teaching are not the only variables that matter when talking about institutional educational quality. Two colleges might teach similar courses but at differing levels of effectiveness. Good institutions have professors who keep standards for student work high; good institutions give robust support to weaker students; and good institutions develop new programs. Furthermore, due to the enormous endowments highly selective colleges have, they have a lot more money to spend per student - although much of the extra funding goes to facilities like dorms, athletic buildings, and student recreation centers that have little impact on the quality of instruction and to research facilities that may have only a small impact on undergraduate instruction. However, institutional quality hardly seems to justify the hysteria.

One could argue that there are intangible benefits to going to a high-reputation school, like being surrounded by motivated, smart students and professors. While this makes intuitive sense, the best evidence does not really support this argument. The C.L.A., the Collegiate Learning Assessment, shows little pattern between college attended and student learning. Some learn a lot at lower-reputation schools; some learn little at high-reputation schools. One can discuss Shakespeare with other smart eighteen- and nineteen-year-olds, but the discussion could be more enlightening if it includes a working mom who's back in college, a soldier who's back from war, ... you get my point. The student body argument cuts both ways; diversity is important, too. The C.L.A. results show that neither way is intrinsically superior. How much or how little students learn has everything to do with them and little to do with the college itself.

A giant meta-analysis entitled How College Affects Students wrote:

"The great majority of postsecondary institutions appear to have surprisingly similar net impacts on student growth. If there is one thing that characterizes the research on between-college effects on the acquisition of subject matter knowledge and academic skills, it is that in the most internally valid studies, even the statistically significant effects tend to be quite small and often trivial in magnitude."

Quoted from the New York Times. And the New York Times continues:

The whole apparatus of selective college admissions is designed to deliberately confuse things that exist with things that don't. Many of the most prestigious colleges are an order of magnitude wealthier and more selective than the typical university. These are the primary factors driving their annual rankings at or near the top of the U.S. News list of "best" colleges. The implication is that the differences in the quality of education they provide are of a similar size. There is no evidence to suggest that this is remotely true. When college leaders talk about academic standards, they often mean admissions standards, not standards for what happens in classrooms themselves.

Of course, that's about learning. Let's talk about earning.

Advantages of reputation


This leaves reputation alone as the way in which high-reputation colleges are worth more. Reputation means whether the degree will open the door to good entry-level jobs in a field and get a person off to a great start. And the evidence is that the path to many elite jobs runs through high-reputation colleges almost exclusively. Why are many elite employers so enamored of a few colleges?

Let's admit that the undergraduate degree itself does not convey much information about what a person learned. We may assume that a computer science major covered certain basics in the course of earning his or her degree. But that's about it. The degree provides low-quality information about how deeply that person learned in college. (In fact, it is basically impossible to fail out of a high-reputation college - they don't want to ruin their statistics.)  So why do businesses care about the undergraduate institution? The simple answer must be that the key information is about college admission. Businesses must believe that high-reputation colleges do a good job selecting the smartest and hardest-working students.

In some fields like law, the college (and law school) a person attended are always crucially important to hiring decisions. In other fields like computer science, the potential employers care far more about work samples and portfolios. While it is hard to make generalizations, for most fields, the reality is more like law. For many entry-level jobs, employers would be hard-pressed to come up with suitable work samples recent college graduates could submit, thus employers default to college reputation. Especially for the entry-level jobs that lead to elite jobs, employers recruit heavily - almost exclusively - from high-reputation schools, many going so far as to have dedicated H.R. teams for each school or special recruiting events. There are substantial employment advantages to going to an elite college in the person's initial job search that could have life-long effects. Once someone is shut out of this kind of entry-level job, it is hard to gain the experience to ever be considered for the culminating elite job.

The reputation of a college helps with starting a person out on the career path. A good start could have long-term financial benefits, so this might actually justify the reputation of some colleges as worth more. But my question is a different one. If what businesses are really getting is admission information, is this useful information? Are businesses right to think colleges are doing a good job selecting students?

Admission decisions


On the one hand, one can say colleges are selecting those students who are smart and hard-working - good traits for employers to seek. Let's stipulate that employers want to maximize both as much as possible; they want new employees with loads of content knowledge who can think flexibly. I am not going to engage with any question about the social implications of affirmative action or other admission policy, important though those questions are. I am merely addressing the question: Would employers be right to assume that the better the reputation of the school, the smarter and harder working its graduates?

After reading Mitchell Stevens's book, Creating a Class, I realized that the defining fact for college admission officers is the lack of information. Despite S.A.T. and A.P. scores and other objective information, a lot of the learning that students do is invisible. Can the student learn on his or her own, or do the scores hide heavy tutoring? Softer skills - like managing intellectual disagreements and debates, grit, research skills, and integrity - are hidden. Letters of recommendation only go so far to fill in the information gap. Smart, hard-working students at schools with overworked teachers and college counselors are at a disadvantage because they may not get high-quality letters. As a result, admission officers may revert to proxies, such as the reputation of the high school. (If employers are relying on the reputation of the college as a proxy, and colleges are relying on the reputation of the high school as a proxy...) This is what Shaun Harper found: great students at weak schools are overlooked.

To be fair to admission officers, students at a weaker high school might never write an analytical essay, while students at a great high school write one or more a week. The high school program does matter, but my point is that there are some students who would be capable but are not given the opportunity because of their high school's weak curriculum. While standardized tests are not great equalizers, without them, admission to selective colleges would be even more skewed to students who go to the best high schools. S.A.T. scores and A.P. scores give colleges some assurance that a student is exceptional despite attending a weak high school - but not enough to level the field. Colleges are not scooping up many hidden gems because they simply lack the information to do so. On top of this, of the smart, hard-working first-generation college students and minority students who do get in, many do not end up matriculating. So, these students also lack information. The bottom line is that college admission is not only about intellectual and personal capabilities but also about social capital.

Steven Pinker, the celebrity linguist at Harvard, points out facts about college admissions at selective schools that should unnerve everyone. Selective schools use holistic selection, including academics, extracurriculars, and character. This disadvantages very smart but poor students who cannot afford to be well-rounded. Furthermore, it means that the student body, once at Harvard or other selective schools, spends a lot of its time in the same extracurriculars that helped them get in, and not as much on academics as one might expect. As Dr. Pinker heard a Harvard admission officer point out, their goal is not to train future academics but future leaders. (Is the fact that so many Harvardians go into finance -- the lucrative but well-beaten path -- an indication of the admission office's failure?) It is hard to believe that the extracurriculars are really a great proxy for leadership. Not to pick on any one activity here, but would an employer actually care that a person is an outstanding rower? singer?

And the other shocking issue is students from China. Most high schools there do not have college counselors, so a third-party system of packagers helps get students into colleges. And the degree of fraud is truly shocking: 90% fake recommendations, 70% fake essays, and 50% fake high school transcripts. Check out the huge five-part expose in Reuters about cheating on the international S.A.T. test dates. Despite this, U.S. colleges continue to admit Chinese students en masse without demanding changes to the system. They could require gao kao scores. They could demand video interviews to prove English skills. Given that they do not make such demands, and given that colleges know about the fraud problem, do you have much faith in any part of the admission process? College admissions work is not precise and thorough enough to justify businesses' faith in college graduates absent other data. In light of this, college admissions officers' rhetoric that they are skilled at picking the best and brightest should make us incredulous -- and its effect on students is especially insidious. The data are just not trustworthy enough to justify such boasts.

This is not a plea to go to a fully objective system where only standardized test scores count. One only needs to look at the gao kao to see the dangers. I think it is fine for colleges to have subjective opinions about potential students, just like it is fine for students to have subjective opinions about which colleges they like best! This is why I argued in a previous post for a matching system. Instead, my plea is for two things. First, college admissions should drop the rhetoric of infallibility: just admit that the college is looking for students who clear a certain benchmark and who fit. Second, businesses should recognize that college admission is a fuzzy science at best. Sure, one might value candidates from highly selective colleges more than those from semi-selective colleges, but making distinctions between Harvard grads and Vassar grads is folly.

Businesses have created a self-fulfilling prophecy. By esteeming some undergraduate institutions far more than others, they have reinforced the reputation of those schools and created an admissions rush for them. The weak link in the system is that college admission cannot make the sort of fine distinctions it is presumed to. One way to break the cycle would be to prohibit employers from holding campus recruiting events and from asking about a candidate's undergraduate institution, but I doubt that would catch on. Still, it is interesting to think about how employers would be forced to evaluate a 22-year-old if they knew only that he or she had an undergraduate degree in chemistry but could not ask about the institution. Would they ask more questions about what the candidate had actually learned?

These problems all stem from the relative paucity of information about college outcomes. At the top end, the lack of information creates a mad scramble for a few schools of sterling reputation. In the middle, many students at flagship state schools and solid private colleges have their excellence overlooked. At the bottom end, many students who are capable of getting into more selective schools do not bother. Nor do they realize how much their decisions matter financially, or even how to seek out financial aid successfully. Enter the Obama administration's proposal to create college ratings. Will that help combat the information desert that currently exists?

All prospective students, from top to bottom, care about two things: the quality of the education and its affordability. (Let's assume that reputation is the current concern only because quality is so hard to assess. It's a very rare student who would choose something meaningless but prestigious. People always think other people do, however. Everyone likes to think he is the last idealist...) Would the proposed system address those issues?

Obama's college ratings


Obama has made a push for college ratings. The ratings will largely eschew measures of quality, since quality is so difficult to measure, and focus on graduation rates, affordability, and job prospects. One problem is that schools serving minority students, first-generation college students, and working students will by definition have lower graduation rates and, unfortunately, weaker job prospects. (While individual students might be improving their own prospects substantially, students at these schools as a group have weaker prospects than students attending more selective schools - a college degree knocks down some but not all barriers.) These schools may be doing a good job serving their students but get punished in the ratings, creating perverse incentives not to admit higher-risk students. It is worth including a social mobility score (i.e., SES diversity) in the ratings, which can help ensure colleges are compared to like schools.

Graduation rates are relatively simple to compare. Instead of reporting the percentage of students who are done after six years, colleges should report the median years until graduation (and perhaps the 75th- and 90th-percentile years as well). Schools could be lumped into categories based on social mobility scores (perhaps six categories or so - all the most selective schools would go into one category, by the way) and compared against peer schools. Students could see whether one school is dramatically worse than its peers - maybe indicating that the school puts too little effort into counseling. A more mathematically sound approach would be a regression: based on a college's student demographics, schools could be ranked on the difference between the graduation rate the regression predicts and the actual rate, as sketched below.
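To make the arithmetic concrete, here is a minimal sketch of both ideas: percentile years-to-degree for one school, and a regression-residual ranking across schools. Every school name, demographic column, and number is a hypothetical illustration, not real data or any official methodology.

```python
import numpy as np

# Hypothetical outcomes for one school: years each graduate took to finish.
years_to_degree = [4, 4, 4.5, 5, 5, 6, 7]
print("median years:", np.percentile(years_to_degree, 50))
print("75th percentile:", np.percentile(years_to_degree, 75))
print("90th percentile:", np.percentile(years_to_degree, 90))

# Hypothetical school records: (name, share first-generation, share Pell-eligible, actual grad rate).
schools = [
    ("State Flagship U", 0.20, 0.25, 0.78),
    ("Regional Commuter U", 0.55, 0.60, 0.45),
    ("Selective Private C", 0.10, 0.15, 0.92),
    ("Urban Open-Access U", 0.65, 0.70, 0.38),
    ("Small Liberal Arts C", 0.30, 0.35, 0.70),
    ("Directional State U", 0.50, 0.45, 0.60),
]

# Fit a least-squares regression of graduation rate on the demographic shares,
# then rank schools by (actual - predicted); positive means outperforming peers.
X = np.array([[1.0, s[1], s[2]] for s in schools])  # intercept + two demographic features
y = np.array([s[3] for s in schools])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ coef
for (name, *_rest), resid in sorted(zip(schools, residuals), key=lambda t: -t[1]):
    print(f"{name}: {resid:+.3f} vs. expected")
```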

What about affordability? First of all, the actual price, not the sticker price, ought to be used: scholarships and other discounts should be factored in, along with the length of time it actually takes to complete the degree, and books. The other complication is boarding; commuter schools need to be placed in a separate category. Still, it is well worth the effort: giving simple prices to students would be a huge help to many first-generation college students! The hard part is that what people really care about is not the total cost of the degree but the cost compared to expected earnings - so now we are talking about a combined metric. How about time to repay student loans at median earnings? This factors in the actual price of the degree and the expected earnings, combining them into a number people can easily process: how long one would spend repaying loans if the total cost were entirely borrowed. Twenty years repaying loans would be a sobering number for a lot of prospective students; a sketch of the calculation follows.
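As a rough illustration, here is a minimal sketch of such a metric using a standard loan-amortization formula. The interest rate, the share of income devoted to payments, and the dollar figures are all assumptions chosen for the example, not part of any proposed rating.

```python
import math

def years_to_repay(total_cost, median_earnings, payment_share=0.10, annual_rate=0.05):
    """Years to pay off a loan equal to total_cost, assuming the graduate devotes
    payment_share of median_earnings to payments. All inputs are illustrative assumptions."""
    monthly_payment = median_earnings * payment_share / 12
    monthly_rate = annual_rate / 12
    if monthly_payment <= total_cost * monthly_rate:
        return math.inf  # payments never cover the accruing interest
    months = -math.log(1 - monthly_rate * total_cost / monthly_payment) / math.log(1 + monthly_rate)
    return months / 12

# Example: $60,000 actual total cost, $50,000 median earnings, 10% of income
# toward payments at 5% interest works out to roughly 18 years of repayment.
print(round(years_to_repay(60_000, 50_000), 1), "years")
```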

How is this data on earnings to be collected? Presumably, the federal government could track it through taxes (I would not trust the colleges to), although there are privacy issues with such tracking. The bigger statistical headache is people who are not working because of illness or injury, marriage, or graduate school; skipping over these people could distort the earnings data considerably for some schools! That leads to the biggest headache of all: different majors. Perhaps earnings data should be broken out by school by major, or maybe by the professional field people eventually go into: (1) math, computer science, natural science, and engineering; (2) business; (3) education, social work, and counseling; (4) humanities, journalism, and arts; (5) medicine; and (6) law. The school ratings might list years to repay the full cost for graduates working in each of those fields, as in the sketch below. Job satisfaction and the ability to find a job in one's desired field could be given scores too. There could be separate but similar questions for students who go on to graduate school about whether they are happy with the program they got into.
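A tiny sketch of that per-field breakdown, again with hypothetical field buckets and earnings figures; the resulting medians could feed the years_to_repay calculation above.

```python
from collections import defaultdict
from statistics import median

# Hypothetical graduate records for one school: (field bucket, annual earnings).
graduates = [
    ("math/CS/science/engineering", 72_000), ("math/CS/science/engineering", 65_000),
    ("business", 55_000), ("business", 60_000),
    ("humanities/journalism/arts", 38_000), ("humanities/journalism/arts", 42_000),
    ("education/social work/counseling", 40_000),
]

earnings_by_field = defaultdict(list)
for field, earnings in graduates:
    earnings_by_field[field].append(earnings)

# Median earnings per field; each median would plug into years_to_repay above.
for field, values in sorted(earnings_by_field.items()):
    print(f"{field}: median earnings ${median(values):,.0f}")
```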

Social mobility / mission


Of course, the social mobility score is necessary to rate schools properly in different categories. And since the federal government spends so much money subsidizing loans, those loans should be going to schools that help students move up in society. That is worth reporting explicitly. It is also worth reporting explicitly what fields graduates go into. Frankly, it is embarrassing how many Ivy League students go work on Wall Street. (Disclaimer: I went to an Ivy League undergraduate school.) If these students represent the best and the brightest, I would hope to see more of them in academic research, political leadership, social activism, education, and so on. It is not that surprising, but maybe seeing that fact given an explicit score, on a government webpage, will make some colleges recruit a little harder for people with an activist rather than acquisitionist mindset. And it might make some qualified applicants who have no interest in consulting or finance think twice about going to an Ivy League school.

Of course, the federal government spends so much money on student loan subsidies that it could simply decide to make every public university free. Judging from how university systems work in other countries, this might have a fascinating effect on the whole system. If the U.S. government changed direction 180 degrees, cut money for loans and grants, and simply made public universities free, many highly qualified applicants - especially those from the middle class - would start to pick public universities over selective private schools. I do not believe that Ivy League schools would be hurt much, but other private schools would: they would see the quality of their applicants, and especially their matriculants, decline and their reputations suffer, while public universities would see theirs soar. In many countries where public universities are free, private schools have the weaker reputation. In the U.S., the Ivy League schools have so much money and prestige that their position is more or less secure, but flagship public universities and second-tier private schools might swap places in the reputation hierarchy.

An interesting take from Oliver Lee is to starve the system of money and let the most predatory schools collapse.

Conclusion


More or less, colleges have turned their selectivity into their competitive advantage: being hard to get into means their graduates must be desirable employees, and because employers seem to agree, the cycle is only reinforced as the next generation of students applies to elite schools in even greater numbers.

But basing reputation solely on selectivity is a special kind of insanity. We do it only because education quality is hard to assess. There needs to be external verification to make the system fairer for students at every college, so that smart, hard-working students at any institution can get their due. While standardized tests are not perfect, they do help make college admission a bit fairer. Perhaps a dose of the same kind of medicine would help college graduates. I doubt college graduates will ever face a version of A.P. tests, but there is another option: digital learning badges. The idea is simple: any organization can serve as an external validator, certifying discrete skills that can be stacked into broader competencies. The source of the learning - selective college, community college, MOOC, self-teaching, on-the-job learning - is irrelevant to the validator. Anyone can review the badge holder's work. If badges were to catch on, graduates of selective schools would of course do well at acquiring them. But many students from less selective schools would do well, too. Maybe even people taking MOOCs would do well. Badges would be a profoundly democratizing, leveling force in college education because they provide a reliable source of data on what a person actually knows and has learned.

Postscript


In Dr. Pinker's article, I ran across this perfect description of what a good education should accomplish. My only reservation about badges is that too many people might seek specific competencies rather than the broad education described below.

It seems to me that educated people should know something about the 13-billion-year prehistory of our species and the basic laws governing the physical and living world, including our bodies and brains. They should grasp the timeline of human history from the dawn of agriculture to the present. They should be exposed to the diversity of human cultures, and the major systems of belief and value with which they have made sense of their lives. They should know about the formative events in human history, including the blunders we can hope not to repeat. They should understand the principles behind democratic governance and the rule of law. They should know how to appreciate works of fiction and art as sources of aesthetic pleasure and as impetuses to reflect on the human condition.

On top of this knowledge, a liberal education should make certain habits of rationality second nature. Educated people should be able to express complex ideas in clear writing and speech. They should appreciate that objective knowledge is a precious commodity, and know how to distinguish vetted fact from superstition, rumor, and unexamined conventional wisdom. They should know how to reason logically and statistically, avoiding the fallacies and biases to which the untutored human mind is vulnerable. They should think causally rather than magically, and know what it takes to distinguish causation from correlation and coincidence. They should be acutely aware of human fallibility, most notably their own, and appreciate that people who disagree with them are not stupid or evil. Accordingly, they should appreciate the value of trying to change minds by persuasion rather than intimidation or demagoguery.

More about admissions here and here.