2015-16 Preseason Ratings


kwyjibo

Pomeroy's ratings are out (SLU is at 153, 11th in A-10): http://kenpom.com/

Matchup Zone is predicting a much better season for SLU; this would be a CBI-type season (SLU is at 112th, 9th in A-10): http://www.matchup-zone.com/projections/2016/conferences/A10

Pomeroy's ratings use projections of individual performances but also factor in a lot of team-level information (including the coach). The Matchup Zone method is based purely on individual player projections. Matchup Zone uses players' prior-year production and also uses recruiting info to project freshmen (he projects Neufeld pretty highly in terms of skill and defense, but his model shows him getting 2.7 minutes a game). The model seems reasonable, but he also projects Will Culliton (the new "Matt Dickey") at 10 minutes a game--which he may get, but for Army (see: http://www.billikens.com/forum/index.php?showtopic=24415&page=19).

These types of predictions are not that accurate overall, but Pomeroy notes that the average error for conference wins is 2.26 for his model, and Matchup Zone reports a similar number. Pomeroy has also noted that Hanner (SI) and TeamRankings do a little better than his model. I will post those when they become available.

I just saw that while the Hanner/Winn (SI) overall ratings are not out yet, the A-10 conference preview is. They do not like the Billikens, who are projected last in the A-10.

http://www.si.com/college-basketball/2015/10/16/atlantic-10-preview-rhode-island-davidson-vcu

There is no excuse for us to fall this low immediately following an unprecedented era of success for this program.

Again, Crews did an excellent job of keeping the kids focused while Majerus was sidelined and later passed away.

However, how much more evidence do we need before we admit he is not the right guy for this job at this time? We had an opportunity to seize some momentum and move toward being another Gonzaga or Xavier. Instead we are back to being a program that the vast majority of alums, much less the general public, are indifferent about.

Predicted LAST in the A10!?? Wow that is just flat out depressing. Holy cow! My how the mighty have fallen.

I have no words for how low my expectations are right now, for this season--and how much apathy I feel. I really do not know what to expect, other than constant and continual losses. These prognostications are just embarrassing. And I just could not even summon the interest to drive 2 miles over for the Billiken Bash.

I must say: I think I have been about as supportive, over 17 years, as anyone. But this just really, flat out, sucks. What's worse, I have no reasons, nor any "feelings" or "intuitions" -- that these predictions are wrong. 5 wins in conference?? Sadly, it sounds about right, this sunny morning--if we overachieve. That would mean almost doubling last year's total. Plus: we now have 5 fewer seconds to stand around in an arc 20-25 feet from the rack, and pass back and forth hoping something will open up...

Sorry, guys, please correct me if I am missing something. I usually don't. Over and out.

I agree-- they either prove otherwise, or they don't, on the court. Don't worry I'll watch. I've already seen a great deal of the guys playing and practicing. But again -- very, very low expectations.

ps: the sky may actually be falling; sometimes those worry warts are actually correct!

DoctorB, the people who think the sky is falling will continue to think that until the kids prove otherwise on the court. It doesn't matter what I or anyone else says at this point. You've only got to wait another three weeks or so before you can see for yourself.

Please note that while the SI prediction, based on player projections, is for last place, someone else (Matchup Zone) who uses a similar method but adds freshman projections thinks SLU will be 9th in the A-10 and CBI-worthy.

I will be watching and have tickets for the Louisville game.

It would be way too much work, kwyjibo, but wouldn't it be interesting to pull a dataset of predictions vs. reality over about 10 years and see if, cumulatively, the top ten or so predictors have any statistically significant success across complete leagues, and then do the same for individual predictors?

(Actually, that would be a pretty cool end-of-semester paper for a graduate statistics or econometrics class. I remember one of my classmates did an analysis of home-court advantage back when we had to use slide rules for the regression analyses. ;) )

Predicted LAST in the A10!?? Wow that is just flat out depressing. Holy cow! My how the mighty have fallen.

I have no words for how low my expectations are right now, for this season--and how much apathy I feel. I really do not know what to expect, other than constant and continual losses. These prognostications are just embarrassing. And I just could not even summon the interest to drive 2 miles over for the Billiken Bash.

I must say: I think I have been about as supportive, over 17 years, as anyone. But this just really, flat out, sucks. What's worse, I have no reasons, nor any "feelings" or "intuitions" -- that these predictions are wrong. 5 wins in conference?? Sadly, it sounds about right, this sunny morning--if we overachieve. That would mean almost doubling last year's total. Plus: we now have 5 fewer seconds to stand around in an arc 20-25 feet from the rack, and pass back and forth hoping something will open up...

Sorry, guys, please correct me if I am missing something. I usually don't. Over and out.

I've never seen Doc this down on a SLU team. When Doc is down, there's reason to worry.
It would be way too much work, kwyjibo, but wouldn't it be interesting to pull a dataset of predictions vs. reality over about 10 years and see if, cumulatively, the top ten or so predictors have any statistically significant success across complete leagues, and then do the same for individual predictors?

(Actually, that would be a pretty cool end-of-semester paper for a graduate statistics or econometrics class. I remember one of my classmates did an analysis of home-court advantage back when we had to use slide rules for the regression analyses. ;) )

I am always interested in the accuracy of any type of prediction. As I mentioned earlier, Pomeroy has been tracking his own preseason prediction accuracy for a few years, and on average his predictions are 2 to 2.5 conference wins off (he uses conference wins for reasons I do not recall). That is not especially accurate, but it is far better than guesswork. It is also important to note that a data-driven system can go wrong with bad data (the Will Culliton problem above).
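To make the "2 to 2.5 conference wins off" figure concrete: it is just a mean absolute error over the predictions. A minimal sketch, with win totals invented purely for illustration (not real SLU or A-10 numbers):

```python
# Mean absolute error (MAE) of preseason conference-win predictions.
# The predicted/actual win totals below are invented for illustration.
predicted_wins = [12, 9, 7, 5, 14, 10]
actual_wins = [10, 11, 6, 8, 13, 7]

errors = [abs(p - a) for p, a in zip(predicted_wins, actual_wins)]
mae = sum(errors) / len(errors)
print(f"average miss: {mae:.2f} conference wins")  # average miss: 2.00 conference wins
```

A model with an MAE around 2 can still miss a single team badly, which is why the team-by-team spread matters as much as the average.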

I think you might be asking whether the growing trend of "individual-based" statistical predictions (Pomeroy's is a team-based calculation, but the team stats are modified using player changes, so it is something of a hybrid) is doing better than "expert"/team/navel-gazing/Joel Welser predictions, and again I will rely on Pomeroy to say yes--at least for top teams. Pomeroy developed his individual/team hybrid prediction system a few years ago precisely to beat the AP and Coaches preseason polls. He has shown that while the AP/Coaches polls do better than he expected, he usually can do better. Hanner and TeamRankings did better than Pomeroy last year, although Pomeroy notes that a consensus across the statistical models is the best predictor of all.

Here is a nice chart comparing each preseason predictor's top 30 with the teams' eventual NCAA tourney seeds (this is the 13-14 season):

preseason2014.png

Guys, I hope I'm wrong, I really and truly do. Anyone who knows me will attest to that: I'm a homer. But I'm amazed that we are now perceived as LAST in this mid-level conference with about 2-3 fairly decent teams and 8 or so pretty beatable ones.

Maybe someone can cheer me up? I prefer not going into any particulars on this board.... in general, I like the talent and the guys as individuals. Talent = decent, though nothing jumps off the chart right now. But somehow, I had more hope prior to that long, long year when Anthony was just about the best player we had. Let that sink in a moment. And I truly had much hope last year -- which got shot all to hell, by mid-January.

But what is it, exactly? My ears are open. Maybe I've just gotten tired of it all . . . . .

ps: tickets for Louisville?? Yeah, I'll watch it -- most likely -- but that ESPN expose literally made me sick to my stomach. That may be part of my lackadaisical attitude as well. The good guys, Coach Crews and Calbert among them, find it hard to recruit against that sort of nonsense.

I've never seen Doc this down on a SLU team. When Doc is down, there's reason to worry.

This thread has three predictions in it: Pomeroy, Matchup Zone, and SI. The first two are relatively optimistic given last season. Yet people read the third (SI) and jump off a cliff, as if the season had already played itself out and SI had a perfect crystal ball.

This season can't get here soon enough.

I've never seen Doc this down on a SLU team. When Doc is down, there's reason to worry.

Doc was also really optimistic prior to last season, so maybe we should look at his pessimism this year as a good sign.

Didn't Pomeroy have us ranked in the low 90s heading into last season?

He forecast SLU at 81st last year. His method is probably more team- and history-contingent than I suggested earlier, so it would have been hard for his model to rate SLU much worse given SLU's excellent rating the year before.

He has shown that while the AP/Coaches polls do better than he expected, he usually can do better. Hanner and TeamRankings did better than Pomeroy last year, although Pomeroy notes that a consensus across the statistical models is the best predictor of all.

Here is a nice chart comparing each preseason predictor's top 30 with the teams' eventual NCAA tourney seeds (this is the 13-14 season):

preseason2014.png

I love the chart -- I think one interesting addition would be how far the team went into the tournament (I'd probably state it as an exponent of 2 -- 0 is the champion and 6 is the [real] first round).

As for the questions I was asking, they were:

- How accurate are the best-known predictors (Pomeroy, Coaches' Poll, whatever -- regardless of the system they use)?

- Cumulatively, how accurate are those predictors?

You've provided some answers to that in the second of your paragraphs that I cited above. And the chart seems to indicate that the various predictors have an overall high degree of accuracy in predicting who the top 30 teams will be going into the tournament.

What I'd really like to get to, however -- and this is one for grad students or some professor's abused RA -- is some sort of evaluation of the more comprehensive predictions. For example, something like: If Hanner/Winn predicts a team to finish 14th, what is the historical probability that the team will finish 11th or lower? Or, in general, how accurate (and I don't have an immediate measure for "accuracy") are the various predictors' league predictions?

Plus: we now have 5 fewer seconds to stand around in an arc 20-25 feet from the rack, and pass back and forth hoping something will open up...

The other day in the P-D Crews promised the offense would be much less "structured" this year. I defy anyone who watched any games last year to conclude that the problem with that offense was "too much structure," either in terms of set plays or substitution patterns.

I love the chart -- I think one interesting addition would be how far the team went into the tournament (I'd probably state it as an exponent of 2 -- 0 is the champion and 6 is the [real] first round).

As for the questions I was asking, they were:

- How accurate are the best-known predictors (Pomeroy, Coaches' Poll, whatever -- regardless of the system they use)?

- Cumulatively, how accurate are those predictors?

What I'd really like to get to, however -- and this is one for grad students or some professor's abused RA -- is some sort of evaluation of the more comprehensive predictions. For example, something like: If Hanner/Winn predicts a team to finish 14th, what is the historical probability that the team will finish 11th or lower? Or, in general, how accurate (and I don't have an immediate measure for "accuracy") are the various predictors' league predictions?

Bonwich, you really should not talk about "accuracy" when you talk about predictions of future outcomes. Keep in mind that what the models attempt to do is artificially create a simple system that can provide answers about future events in very complex real systems. D1 college basketball, taken from this point of view, is a very complex real system with hundreds of teams, players, and issues. From a modelling standpoint, the complexity of any such real system depends on the number of significant variables that affect its outcomes in one way or another. As I mentioned, in D1 basketball you have over a hundred teams, each with coaches, assistant coaches, a budget, and players; you also have fans and supporters. All of these represent individual variables with different weights in the way they affect a team's outcomes. On top of that you have issues like health, physical ability, agility, injury, emotional states (family and personal issues), etc. The number of possible outcomes in such a complex system can be estimated as the factorial of the number of significant variables. Of course, weightings have to be assigned to each variable, making the problem even more complex, but let's limit the number of possible outcomes to the factorial of the variables for the purposes of this post.

So, let me give you a concrete example of what is involved. Limit the analysis to a single team with just 13 players and two coaches, eliminate all other factors, and assume all 15 variables are weighted equally. The number of possible outcomes in a complex system with 15 equally weighted variables (13 players, two coaches) is 15 factorial (15!), which is 1,307,674,368,000. When you add all the other teams into the brew, and all the other factors involved, from state of health to injuries to budgets, the number of possible outcomes rises to astronomical levels. In other words, to model this complex real system, D1 college basketball, and get statistically accurate and reproducible predictions of future outcomes is just not possible. Keep in mind that this example is an artificially limited complex real system. To be specific, in real life weightings must be applied to the 15 variables mentioned, as well as to any number of additional variables, some of major importance, others not. The number of possible future outcomes (representing the complexity of the real system) goes well beyond the number provided above. And this is just one single team taken as a complex real system and analyzed as such.
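For what it's worth, the 15! figure above checks out; in Python:

```python
import math

# 15 equally weighted variables (13 players + 2 coaches): 15! orderings.
print(f"{math.factorial(15):,}")  # 1,307,674,368,000
```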

What I mean to state CLEARLY is that ALL MODELS are approximations; not one of them is capable of providing accurate or reproducible determinations of future outcomes in a statistically significant manner. What models do is approximate complex real systems. All models depend on informed guesses by whoever builds the model. The weight assigned to a specific factor is generally an expert guess, and so on. Some guesses are invariably better than others, but they are still guesses. That said, it is clear that some approximations are better than others; some models correlate with reality better than others do. It is difficult to calculate the level of correlation because, as already stated, the ability of the models to predict outcomes is not fully reproducible and will vary from year to year. This, I am afraid, is an accurate description of the "accuracy" of models.

Now, I am certain that someone will say Pomerol is accurate, or someone else is better. These statements are fine as value judgments go, and may indeed document a high degree of correlation between a specific model's predictions and the actual outcomes in a given year. This is to be expected; every model has some degree of correlation to reality, but the degree of correlation with real future outcomes shown by a model will vary from year to year, sometimes more, sometimes less. Again, models do not accurately or reproducibly predict future outcomes. Accurate prediction of future outcomes for complex real systems with multiple variables is mathematically beyond the capability of models, as explained above.

One last point: if Hanner/Winn predicts a team will finish 14th, and historically 50% of the teams predicted to finish 14th end the year at 14th or better, what do you really know about the future accuracy of their predictions? Would you bet on it?

Bonwich, you really should not talk about "accuracy" when you talk about predictions of future outcomes. Keep in mind that what the models attempt to do is artificially create a simple system that can provide answers about future events in very complex real systems. D1 college basketball, taken from this point of view, is a very complex real system with hundreds of teams, players, and issues. From a modelling standpoint, the complexity of any such real system depends on the number of significant variables that affect its outcomes in one way or another. As I mentioned, in D1 basketball you have over a hundred teams, each with coaches, assistant coaches, a budget, and players; you also have fans and supporters. All of these represent individual variables with different weights in the way they affect a team's outcomes. On top of that you have issues like health, physical ability, agility, injury, emotional states (family and personal issues), etc. The number of possible outcomes in such a complex system can be estimated as the factorial of the number of significant variables. Of course, weightings have to be assigned to each variable, making the problem even more complex, but let's limit the number of possible outcomes to the factorial of the variables for the purposes of this post.

So, let me give you a concrete example of what is involved. Limit the analysis to a single team with just 13 players and two coaches, eliminate all other factors, and assume all 15 variables are weighted equally. The number of possible outcomes in a complex system with 15 equally weighted variables (13 players, two coaches) is 15 factorial (15!), which is 1,307,674,368,000. When you add all the other teams into the brew, and all the other factors involved, from state of health to injuries to budgets, the number of possible outcomes rises to astronomical levels. In other words, to model this complex real system, D1 college basketball, and get statistically accurate and reproducible predictions of future outcomes is just not possible. Keep in mind that this example is an artificially limited complex real system. To be specific, in real life weightings must be applied to the 15 variables mentioned, as well as to any number of additional variables, some of major importance, others not. The number of possible future outcomes (representing the complexity of the real system) goes well beyond the number provided above. And this is just one single team taken as a complex real system and analyzed as such.

What I mean to state CLEARLY is that ALL MODELS are approximations; not one of them is capable of providing accurate or reproducible determinations of future outcomes in a statistically significant manner. What models do is approximate complex real systems. All models depend on informed guesses by whoever builds the model. The weight assigned to a specific factor is generally an expert guess, and so on. Some guesses are invariably better than others, but they are still guesses. That said, it is clear that some approximations are better than others; some models correlate with reality better than others do. It is difficult to calculate the level of correlation because, as already stated, the ability of the models to predict outcomes is not fully reproducible and will vary from year to year. This, I am afraid, is an accurate description of the "accuracy" of models.

Now, I am certain that someone will say Pomerol is accurate, or someone else is better. These statements are fine as value judgments go, and may indeed document a high degree of correlation between a specific model's predictions and the actual outcomes in a given year. This is to be expected; every model has some degree of correlation to reality, but the degree of correlation with real future outcomes shown by a model will vary from year to year, sometimes more, sometimes less. Again, models do not accurately or reproducibly predict future outcomes. Accurate prediction of future outcomes for complex real systems with multiple variables is mathematically beyond the capability of models, as explained above.

One last point: if Hanner/Winn predicts a team will finish 14th, and historically 50% of the teams predicted to finish 14th end the year at 14th or better, what do you really know about the future accuracy of their predictions? Would you bet on it?

That's great, but you've made the model infinitely more complex than it really is in this case.

Simplified, for a single predictor (e.g. Hanner/Winn): You take, say, ten years' historical data of its predictions. You plot, more or less, where H/W predicted a team would finish against where it actually finished. Maybe you do that for a single league, maybe for the whole universe of D1 teams.

On your last point, no, that would be the equivalent of betting on a coin toss. (Not to mention I was thinking of the A-10, so the test would be something like if a team was predicted to finish 14th and it actually finished, say, 10th through 14th.)

I submit that it wouldn't be all that hard (with a graduate slave :) ) to figure out whether a given prediction system is, historically, very accurate, reasonably accurate, 50/50, or totally full of sh!t.
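The backtest described above really is only a few lines of code once the historical data is in hand. A sketch with invented (predicted, actual) finish pairs -- not real Hanner/Winn data -- scoring one predictor by average miss and by how often it lands within two places:

```python
# Backtest one predictor: compare predicted vs. actual league finishes.
# The (predicted_finish, actual_finish) pairs below are invented for
# illustration; a real study would use ten years of published predictions.
history = [
    (1, 2), (5, 3), (14, 10), (8, 8), (11, 14), (3, 1), (7, 12),
]

errors = [abs(predicted - actual) for predicted, actual in history]
mae = sum(errors) / len(errors)
within_two = sum(e <= 2 for e in errors) / len(errors)

print(f"mean miss: {mae:.2f} places")
print(f"within 2 places: {within_two:.0%}")
```

Thresholds like "within 2 places" give exactly the very accurate / reasonably accurate / 50-50 / full-of-it buckets: compare the hit rate against what random ordering of the league would produce.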

And I prefer Petrus, but Certan de May and La Conseillante are about as much as I can afford.

What were Pomeroy's predictions for Davidson, Rhode Island, and George Washington last year, out of curiosity?

Courtesy of a year-old post from Taj:

To be fair, I feel a need to note this from a Ben W. tweet: KenPom has published his 2014-15 preseason statistical rankings and we come in at #81.

The rest: VCU @ #17; Dayton @ #50; Richmond @ #51; George Washington @ #62; UMass @ #74; St. Joe's @ #90; La Salle @ #101; Rhode Island @ #105; the Bonnies @ #107; Duquesne @ #124; Mason @ #138; Davidson @ #139; and Fordham @ #154.
