ACC Bubble Update

02/17/2015 at 12:39 PM #74624 VaWolf82 (Keymaster)
> The way RPI is calculated, the games you win and lose are exactly as important as the games your opponents’ opponents win and lose.
Not true at all. I’ve torn apart the RPI calcs in the past. I may have to try and find that or do a new one.
> but just last year SMU clearly should’ve been in.
SMU fell into the same pit that has swallowed many teams and coaches that insist on playing an OOC schedule that is an absolute joke. It happened to VT and Seth Greenberg several times, got Herb at ASU once, and got Penn State several years ago. It’s a repeatable phenomenon that has been discussed here as well as by Jerry Palm (now at CBS Sports).
You can argue whether this is fair or not, but that is a different argument. Combine a poor OOC schedule with middling conference regular-season results and a poor conference tourney performance and you will quite often end up in the NIT.

02/17/2015 at 12:46 PM #74626 VaWolf82 (Keymaster)
More on SMU from last year:
> According to Wellman, scheduling was the deciding factor between State as the last team in at 21-13 and SMU as the last team out at 23-9.
> “In SMU’s case, their downfall, their weakness, was their schedule,” Wellman said. “Their non-conference strength of schedule was ranked 302nd. It was one of the worst non-conference strength of schedules. Their overall strength of schedule ranked 129. That would have been, by far, the worst at-large strength of schedule going into the tournament. The next worst at-large strength of schedule was 91.”
Bottom line… pumping up your record by beating weak teams might impress the AP voters, but not the NCAAT Selection Committee.

02/17/2015 at 1:01 PM #74628 MP (Participant)
Not a knock against this post, as I understand that RPI is a thing that the selection committee supposedly uses to make their decisions, but posts like these always remind me how absolutely bonkers it is that we’re still using RPI to talk about which teams deserve tourney berths. It’s such a terrible, meaningless statistic, and it boggles my mind that it wasn’t phased out years ago.
Yes. For perspective, ESPN RPI has (5-loss) Kansas at #1… over Kentucky! Maryland is a Top 10 team according to RPI… barely Top 40 per Pomeroy.
But knowing or assuming that RPI is referenced by the committee, I enjoy these posts.

02/17/2015 at 1:21 PM #74629 wufpup76 (Keymaster)
> I mean, I can go digging back through tourney snubs, but just last year SMU clearly should’ve been in.
I can’t agree with this at all. Their non-con schedule was a complete joke by most any measure, and that is clearly defined as a huge factor when it comes to selection and seeding.
I think VaWolf’s posts above addressing SMU from last season are spot-on. We can argue about the RPI and its relevance / how much it does or should factor in – but aside from minor nit-picking I think each selection committee generally does a great job every season.
The RPI gets so much attention because it’s one of the most easily accessible indicators/predictors available. Indicator being the key word.
If SMU and State were close in RPI at the end of conference tournament play last season, then clearly the other factors such as non-con scheduling and qualitative – not quantitative – values of independent wins/losses were weighted more than RPI. Assign any number value you want to scheduling – the committee members can still take a look at who you actually played and beat plus where you played them in order to make a qualitative, non-numbers based determination.
SMU reaped what they sowed. I thought it was clear, and that the hand-wringing over SMU’s exclusion last season was done b/c people are always going to bitch about something. It’s kinda like Greenberg moaning about his various teams’ exclusions and the committee simply saying, ‘You know what you should do. Do better.’

02/17/2015 at 1:52 PM #74630 xphoenix87 (Moderator)
> Not true at all. I’ve torn apart the RPI calcs in the past. I may have to try and find that or do a new one.
RPI is 25% your record (with some [bad] adjustment for home/away since 2004), 50% your opponents’ record, and 25% your opponents’ opponents’ record. That’s how it’s calculated. So an individual win or loss for you has more effect than an individual win or loss by an opponent’s opponent (because there are so many more games that go into that 25%), but in the aggregate, the record of your opponents’ opponents is given the same weight as your record.
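For concreteness, here is a minimal Python sketch of that calculation. This is not an official NCAA implementation; the game-tuple and schedule formats and the helper names are invented for illustration, and the location weights shown are the commonly cited post-2004 ones (home win 0.6, road win 1.4, reversed for losses).

```python
# A minimal sketch of the RPI described above (not official NCAA code).
# Games are (home, away, home_won, neutral) tuples; schedule[t] lists
# t's opponents, with repeats for rematches. Both formats are invented.

def weighted_wp(team, games):
    """Own winning pct with location weights: home win 0.6, road win 1.4,
    home loss 1.4, road loss 0.6, neutral games 1.0."""
    wins = losses = 0.0
    for home, away, home_won, neutral in games:
        if team not in (home, away):
            continue
        won = (team == home) == home_won
        if neutral:
            weight = 1.0
        elif team == home:
            weight = 0.6 if won else 1.4
        else:
            weight = 1.4 if won else 0.6
        if won:
            wins += weight
        else:
            losses += weight
    return wins / (wins + losses) if wins + losses else 0.0

def raw_wp(team, games, exclude=None):
    """Unweighted winning pct, skipping games against `exclude`
    (the OWP term ignores games played against the team being rated)."""
    wins = total = 0
    for home, away, home_won, neutral in games:
        if team not in (home, away) or exclude in (home, away):
            continue
        wins += int((team == home) == home_won)
        total += 1
    return wins / total if total else 0.0

def rpi(team, games, schedule):
    opps = schedule[team]
    owp = sum(raw_wp(o, games, exclude=team) for o in opps) / len(opps)
    oowp = sum(sum(raw_wp(oo, games) for oo in schedule[o]) / len(schedule[o])
               for o in opps) / len(opps)
    # The weighting in question: 25% you, 50% your opponents,
    # 25% your opponents' opponents.
    return 0.25 * weighted_wp(team, games) + 0.50 * owp + 0.25 * oowp
```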
> SMU fell into the same pit that has swallowed many teams and coaches that insist on playing an OOC schedule that is an absolute joke. It happened to VT and Seth Greenberg several times, got Herb at ASU once, and got Penn State several years ago. It’s a repeatable phenomenon that has been discussed here as well as by Jerry Palm (now at CBS Sports).
> You can argue whether this is fair or not, but that is a different argument. Combine a poor OOC schedule with middling conference regular-season results and a poor conference tourney performance and you will quite often end up in the NIT.
But that’s exactly the point. I’m arguing that the system is broken, and RPI is a large part of why that is, because it weighs strength of schedule so heavily and calculates it so poorly. I think everyone on the selection committee would tell you that they’re trying to select the best 68 teams (or however many it is when you take away automatic bids). That’s their goal, to get the 68 teams that have had the best seasons. By any reasonable measure, SMU was easily one of the best 68 teams. Not just barely, but definitely. In the upper half, in fact. If the system in place is keeping them out, the system is wrong.
Even that quote you referenced is referring to the SOS portion of the RPI. Going by RPI, SMU’s SOS was 129 (or, since all I can find now are end-of-year, post-tournament numbers, 135). Going by BPI, their SOS was 87, not far off from St. Louis (76), Tennessee (84), Creighton (81) and San Diego State (91), who all got at-large bids. Going by Kenpom, they were 93, again very close to teams like San Diego State and Cincinnati.
Again, just to be clear, I’m not attacking anyone and I don’t have a problem with these posts because RPI is still being used by the selection committee. I just think it’s absolutely crazy that they are still using it.

02/17/2015 at 2:23 PM #74631 VaWolf82 (Keymaster)
> So an individual win or loss for you has more effect than an individual win or loss by an opponent’s opponent (because there are so many more games that go into that 25%), but in the aggregate, the record of your opponents’ opponents is given the same weight as your record.
That is true, but it is not as significant as you are making it, because that same factor figures into every team’s RPI. What matters is the delta from one team to the next. So what generates a delta?
– Better winning percentage
– Playing in a better conference (affects both opp’s WP and opp/opp WP)
– Playing better OOC opponents

When you are comparing teams from the power conferences, your team’s winning percentage will produce a bigger delta than the SOS. When you get similar winning percentages, then obviously SOS is generating the difference (thus explaining the relatively large difference between State and Clemson).
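To put toy numbers on that delta argument (the records below are invented purely for illustration, using the 25/50/25 weights discussed above):

```python
def rpi(wp, owp, oowp):
    # 25% own record, 50% opponents' record, 25% opponents' opponents' record
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Two teams in the same league share most of their schedule-strength
# terms, so the RPI gap is driven almost entirely by the own-WP term.
team_a = rpi(wp=0.667, owp=0.540, oowp=0.520)  # better record
team_b = rpi(wp=0.500, owp=0.545, oowp=0.520)  # nearly identical schedule
print(round(team_a - team_b, 3))  # 0.039, nearly all from the 25% WP term
```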
It has always appeared to me that the Selection Committee asks teams to prove that they are good by beating other good teams. Mid-majors like Gonzaga get that and schedule appropriately. Teams like SMU, and coaches like Herb and Seth Greenberg, don’t and sometimes pay the consequences.
So the RPI and the Selection Committee reward teams for playing and beating good teams and penalize those that don’t. Personally, I’m OK with that philosophy.

02/17/2015 at 2:53 PM #74633 packalum44 (Participant)
If we had a coach with talent commensurate with our stated goals we would all be spared the agony of this annual bubble blabber ritual.
My only ‘hope’ this March is that Archie isn’t lured away from Dayton.

02/17/2015 at 3:14 PM #74634 xphoenix87 (Moderator)
If you don’t see a problem with a rating system that is arbitrarily weighted, has little predictive value, and doesn’t incorporate margin of victory, then I guess we’re done here.
> So the RPI and the Selection Committee reward teams for playing and beating good teams and penalize those that don’t. Personally, I’m OK with that philosophy.
That’s fine. My philosophy is that I’d like the committee to actually reward the best teams, which is what they’re supposed to do.

02/17/2015 at 3:47 PM #74635 wufpup76 (Keymaster)
> I’m arguing that the system is broken, and RPI is a large part of why that is, because it weighs strength of schedule so heavily and calculates it so poorly.
> By any reasonable measure, SMU was easily one of the best 68 teams
Again, nothing is stopping the committee from making decisions NOT based on numbers. If SMU was denied mostly due to non-conference schedule, then let’s take a look at it without quantitative data: (source)
They went 10-2 in non-conference.
Wins:
TCU (“neutral” court – Dallas)
Rhode Island (home)
Texas State (home)
Arkansas Pine-Bluff (home)
Sam Houston State (home)
Texas A&M (neutral – Corpus Christi)
McNeese State (home)
UIC (away)
Texas Pan-American (home)
Wyoming (away)

Losses:
Arkansas (away)
Virginia (neutral – Corpus Christi)

Recap: They traveled out of Texas three times in 12 games (Arkansas (L), Wyoming (W), UIC (W)). The only above-average teams they played against, they lost to – Virginia and Arkansas.
From this schedule and these results last season, what merits their inclusion? Combine this with bad losses in conference to South Florida and then to Houston in the first round of the AAC tournament … the body of work can be dismissed when held up against other teams and the selection standards. If you’re going to play such a laughable non-conference schedule, don’t lose to a conference bottom-feeder in the first round of your conference tournament.
Is their exclusion debatable? Sure. I do not agree at all that they should have been a shoo-in for selection though. Apparently neither did the committee. Numbers are not the be-all, end-all.

02/17/2015 at 4:43 PM #74636 xphoenix87 (Moderator)
> Again, nothing is stopping the committee from making decisions NOT based on numbers. If SMU was denied mostly due to non-conference schedule, then let’s take a look at it without quantitative data:
You’re entirely missing the point. Nobody is arguing that SMU’s non-conference schedule was good. Why does that matter more than their entire body of work? As I mentioned above, Wellman’s quote wasn’t just about their OOC SOS, it was that their whole-season strength of schedule was so much weaker than anyone else’s in the field. As I pointed out, if you use better ranking systems to find SOS, it still comes in on the low end, but well within the range of many other at-large teams. Their OOC SOS was bad, but not much worse than teams like Ohio State, Cincinnati, Iowa and Pitt when you use an actually competent rating system (again, remember that when you’re talking about the SOS component the NCAA references, you’re talking about SOS as calculated by the RPI formula).
> The only above-average teams they played against, they lost to – Virginia and Arkansas.
How are we defining “above-average” though? Tournament teams? Top 50/100 RPI? If so, you’re using the thing you’re arguing for to defend itself. In terms of national average, Wyoming is an “above-average” team that SMU beat by 8 on their home court. Rhode Island is an “above-average” team that SMU whipped by 30.
Also, why doesn’t SMU get credit for a close loss to Virginia at a neutral site (which, I’d wager, is better than the best win of several teams that made the field)? Why don’t they get credit for not only beating the average-to-bad teams that they played, but wiping the floor with them?
I’m not advocating that we should use only a computerized ranking system to select the teams. What I am arguing is that the ranking system we use should be better. Any of the systems out there, BPI, Sagarin, Kenpom, I don’t care which, all of them are not just better than RPI, they’re MUCH better.
In the case of SMU, it’s not that they were kept out because people wouldn’t look at the numbers. The committee said they passed the eye test, but when they looked at the RPI and the RPI-generated SOS, they weren’t good enough. The problem isn’t that they didn’t check the numbers, the problem is that they checked the wrong numbers.

02/17/2015 at 4:44 PM #74637 choppack1 (Participant)
This is actually a fun conversation.
Me – I would like to see evidence that Kenpom and/or Sagarin is better than RPI. I think the RPI is used – quite frankly, it’s the primary tool used by the committee – but you should present hard data to show that the “model” in use should be replaced.

02/17/2015 at 5:11 PM #74638 xphoenix87 (Moderator)
Though I don’t entirely agree with the methodology he uses, this is a nice little article showing that at least a couple other systems outperform the RPI in predicting NCAA Tournament results
There’s also a really long article here at Basketball Prospectus that talks about the history of RPI, some of its weaknesses, and some of the ways that coaches can try to game it.
http://www.basketballprospectus.com/article.php?articleid=2451
Lastly, you don’t really even need a side-by-side test to see which system is better. RPI is ludicrous on a conceptual level. Why is strength of schedule worth 75% of a team’s rating? Because we said so, that’s why. Why is a home game suddenly worth twice as much if you lose it? Who the heck knows? Because we said so. Everything about it is arbitrary.
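To make the “twice as much if you lose it” complaint concrete, here is a quick arithmetic check under the commonly cited post-2004 weights (a toy example, not official NCAA code):

```python
# A team that splits two home games gets a weighted winning pct of .300,
# not .500, because the home win counts 0.6 while the home loss counts 1.4.
home_win, home_loss = 0.6, 1.4
print(home_win / (home_win + home_loss))  # 0.3
```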

02/17/2015 at 5:33 PM #74639 wufpup76 (Keymaster)
> You’re entirely missing the point
Well, I don’t necessarily feel that I am 🙂 . The argument that the numbers the committee utilizes are flawed is fine – but even if we substitute any of your given suggestions, a team’s selection is still subjective.
Where your argument still uses numbers to compare and justify, I merely took numbers completely away from the decision process. The ‘eye test’ is subjective, but you still have actual on-court results to rely on. To me, it was a weak schedule with few things truly standing out that screamed ‘select me!’ – even if one feels the team passed any ‘eye test’. I’m considering the entire body of work.
> How are we defining “above-average” though? Tournament teams? Top 50/100 RPI? If so, you’re using the thing you’re arguing for to defend itself. In terms of national average, Wyoming is an “above-average” team that SMU beat by 8 on their home court. Rhode Island is an “above-average” team that SMU whipped by 30.
^No. No numbers. Eye test and results. As for non-con, there was a clear and distinct cut line between the quality of Virginia and Arkansas and the quality of the other teams played in the non-con schedule. Virginia was a 1 seed in the NCAA, Arkansas was in the NIT, Wyoming was a middling mid-major (18-15) that played in the CBI, and Rhode Island was 14-18. Neither Wyoming nor Rhode Island was above average.

02/17/2015 at 5:46 PM #74640 VaWolf82 (Keymaster)
> If you don’t see a problem with a rating system that is arbitrarily weighted, has little predictive value, and doesn’t incorporate margin of victory, then I guess we’re done here.
The RPI formula has been adjusted several times over the years, so “arbitrary” isn’t really accurate. You’ve mentioned several other formulas that you claim are better. While it’s obvious that they’re different, it’s not obvious that they are in fact better.
I don’t want a “formula” that claims to be predictive. The job of the Selection Committee is to evaluate what has already happened, not predict the future.
Using margin of victory is a double-edged sword as discovered during the BCS era. Plus there are many games where the final margin is not indicative of how close the game was for 39 minutes…then the fouling and missed 3-pt shots skew the final margin.

02/17/2015 at 6:40 PM #74641 Texpack (Participant)
RPI is ONE component that the committee considers.
I saw references to “Body of Work”. Jay Bilas says every year that this is about “Who did you play and who did you beat?” I really like that description. The RPI attempts to quantify the RELATIVE strength of the Who’s. That’s all it really does. The committee relies on eyewitness testimony from people who actually watch these teams during the year, so the “eye test” is employed as well. The committee has been very open about what teams need to do to qualify. The only teams that can squeal, in my view, would be smaller schools that can’t get any P5 schools to play them. I’m not sure they really exist, because if you are a pretty good smaller school, coaches like Gott will schedule you.
The other thing I would note is that EVERY bubble team has an issue or six. That is why they are on the bubble. If we don’t get in, we need look no further than ND, Wofford, and Clemson.

02/17/2015 at 7:49 PM #74643 VaWolf82 (Keymaster)
> Also, why doesn’t SMU get credit for a close loss to Virginia at a neutral site
Missed this earlier.
How close do you have to be to be considered a close loss?
Is a two-point loss worth half of a one-pt loss?
Is a 3-pt loss worth one-third as much?
Does a bad loss offset a close loss?

To me, a loss is a loss. I’m not into supporting moral victories… that show up as losses in the record book.

02/17/2015 at 8:03 PM #74644 wufpup76 (Keymaster)
Just in case anyone misses / missed it – xphoenix has a post above which had gotten trapped in the spam filter. I didn’t notice it until just now.
The post has a couple of links for anyone interested …
General FYI – I think posts containing more than one hyperlink are tagged as spam. I’ll try to keep an eye out for more posts falling into the spam filter in other threads.

02/17/2015 at 8:27 PM #74645 pakfanistan (Participant)
> Just in case anyone misses / missed it – xphoenix has a post above which had gotten trapped in the spam filter. I didn’t notice it until just now.
> The post has a couple of links for anyone interested …
> General FYI – I think posts containing more than one hyperlink are tagged as spam. I’ll try to keep an eye out for more posts falling into the spam filter in other threads.
I’ve had posts with a single link get redirected to Davy Jones’s bit bucket. I don’t know why.
I just want people to have access to high quality, inexpensive, Chinese handbags :/

02/17/2015 at 8:54 PM #74646 bill.onthebeach (Participant)
^Pup… dat Spam filter does NOT like TOO MANY CAPITAL LETTERS either…
#NCSU-North Carolina's #1 FOOTBALL school!

02/17/2015 at 8:59 PM #74647 Rick (Keymaster)
Packfanistan,
Before the kinder, gentler Rick, you would have thought it was me 🙂

02/17/2015 at 8:59 PM #74648 Rick (Keymaster)
> ^Pup… dat Spam filter does NOT like TOO MANY CAPITAL LETTERS either…
Not too many uses of the word ‘Gott’

02/17/2015 at 9:12 PM #74649 Tau837 (Participant)
> Though I don’t entirely agree with the methodology he uses, this is a nice little article showing that at least a couple other systems outperform the RPI in predicting NCAA Tournament results
As has already been pointed out, the RPI isn’t designed to predict anything. So I’m not sure it matters if other systems are better at that.

02/17/2015 at 9:19 PM #74650 choppack1 (Participant)
Here is an interesting quote from the linked article.
> The RPI rather tends to underrate teams from strong conferences and regions and to overrate teams from weak conferences and regions

And this is why it’s smart to schedule good teams in bad conferences… I also would be sure to play a UNCG and its various equivalents around the country on the road every year.

02/17/2015 at 9:24 PM #74651 xphoenix87 (Moderator)
Thanks wuf, I was wondering why that wasn’t posting.
> Where your argument still uses numbers to compare and justify, I merely took numbers completely away from the decision process. The ‘eye test’ is subjective, but you still have actual on-court results to rely on. To me, it was a weak schedule with few things truly standing out that screamed ‘select me!’ – even if one feels the team passed any ‘eye test’. I’m considering the entire body of work.
But you’re still using numbers. You’re using win-loss records. I doubt you watched most of the games SMU played in their OOC schedule. I doubt you saw any games those teams played. You’re going by their W/L record, what you know about their conference, and the fact that their RPI is bad. But again, I’m not arguing that their OOC schedule was good. I’m arguing that they blew away most of it (which is what good teams do) and had a bunch of good games in conference, and their overall schedule wasn’t nearly as bad as RPI suggested it was.
> The RPI formula has been adjusted several times over the years, so “arbitrary” isn’t really accurate. You’ve mentioned several other formulas that you claim are better. While it’s obvious that they’re different, it’s not obvious that they are in fact better.
It’s arbitrary because there’s no reasoning, either mathematical or practical, for the weights that things have been given, and there never has been, as I pointed out in my post above.
> I don’t want a “formula” that claims to be predictive. The job of the Selection Committee is to evaluate what has already happened, not predict the future.
This is a line that the NCAA has often brought up, but it’s a complete straw man. What we’re trying to find is the best teams. The way you determine who is the best team is to see if they beat other teams. Putting aside matchup considerations (which none of these systems bother with anyway), saying “team X has played better than team Y” and “team X is likely to beat team Y” are exactly the same thing, only one is phrased descriptively and one is phrased predictively. If your system does a good job at figuring out how good teams are, then it will have predictive value.
> Using margin of victory is a double-edged sword as discovered during the BCS era. Plus there are many games where the final margin is not indicative of how close the game was for 39 minutes…then the fouling and missed 3-pt shots skew the final margin.
Margin of victory is a better indicator of team quality than W/L record. This has been shown over and over again in studies from various sports and various skill levels. Over a large enough sample size, if we were to predict the results of college basketball games and you used only W/L record and I used only MoV, not only would I beat you, but it wouldn’t be particularly close. Are there individual games where MoV doesn’t indicate how close the game was? Sure, but I don’t actually care about individual games, I care about games in the aggregate. And even if that is true, it’s not an argument against using MoV, it’s just an argument that MoV doesn’t tell you everything (which no one would ever assert). MoV still gives you way more information than W/L record. Also, if you’re really afraid of people running up the score (which is a seriously insignificant problem), you can add something to your formula which gives diminishing returns for blowouts (as BPI does).
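For illustration, one common shape for that kind of diminishing-returns adjustment is sketched below. This is only a generic damping curve with an invented cutoff, not BPI’s actual formula: full credit for margins up to the cutoff, then logarithmic credit beyond it.

```python
import math

def damped_margin(mov, knee=10):
    """Linear credit up to `knee` points, logarithmic beyond it, so a
    40-point blowout counts only a little more than a 20-point win."""
    sign = 1 if mov >= 0 else -1
    m = abs(mov)
    return sign * (m if m <= knee else knee + math.log1p(m - knee))

for mov in (3, 10, 20, 40):
    print(mov, round(damped_margin(mov), 2))
# prints: 3 3 / 10 10 / 20 12.4 / 40 13.43
```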

02/17/2015 at 9:27 PM #74652 xphoenix87 (Moderator)
> RPI is ONE component that the committee considers.
This is true, but as people have consistently demonstrated, and as this very post perfectly illustrates, you can pretty reliably project the NCAA field using only RPI data, which points to it being not just a piece, but a big piece of what the selection committee is doing.
I’ll quote a passage from this terrific article by Nate Silver from when the NCAA held a mock selection committee exercise back in 2012:
> Over the long run, R.P.I. has predicted the outcome of N.C.A.A. games more poorly than almost any other system. And it shows some especially implausible results this season. Southern Mississippi, for instance, was somehow ranked ahead of Missouri, even though it has endured seven losses to Missouri’s four (some of them against middling teams like Houston, Texas-El Paso, Alabama-Birmingham and Denver).
> The committee’s use of R.P.I. is not quite as obsessive as you might think: more advanced systems like those developed by Ken Pomeroy and Jeff Sagarin were just a mouse click away, they told us — and it was perfectly well within the rules to look at them. The discussion of each team, moreover, was exceptionally thorough. It was clear from the officials we met that the committee has plenty of basketball knowledge and cares passionately about getting things right.
> But R.P.I.’s fingerprints were all over the process. When a computer monitor displayed the teams that we were considering for the bubble, the R.P.I. ranking was listed suggestively alongside them. The color-coded “nitty gritty” worksheets that the committee has developed, and which often frame the discussion about the bubble teams, use the R.P.I. rankings to sort out the good wins and the bad losses.