You can see that there is significant predictive efficiency gained in going from 2 to 3 stars, or from 3 to 4. For example, there are only about a third more 2 star recruits than 3 star recruits, but the latter are almost 10x as likely to be named All-American. On the other hand, the distinction between 4 and 5 star recruits seems a bit iffy to me. Only about 1 in 10 players at or above the 4 star level are designated 5 star, and the latter distinction raises their All-America odds by a factor of a bit less than 3. There's information there, but you pay a high price in selectivity.
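To make the arithmetic concrete, here is a minimal Python sketch using made-up counts (not the actual recruiting data), chosen only to roughly reproduce the pattern described above; the rates and odds ratios fall out directly.

```python
# Hypothetical counts per star tier: (recruits, eventual All-Americans).
# These numbers are illustrative only, not real recruiting data.
counts = {
    5: (300,   45),
    4: (2700, 150),
    3: (9000,  60),
    2: (12000,  8),
}

def odds(p):
    return p / (1 - p)

# Probability of All-American honors within each tier
rates = {stars: aa / n for stars, (n, aa) in counts.items()}
for stars in sorted(rates, reverse=True):
    print(f"{stars} star: P(All-American) = {rates[stars]:.3%}")

# Odds ratios between adjacent tiers
for hi, lo in [(3, 2), (4, 3), (5, 4)]:
    ratio = odds(rates[hi]) / odds(rates[lo])
    print(f"{hi} vs {lo} star odds ratio: {ratio:.1f}")
```

With these toy numbers the 3-vs-2 star odds ratio is about 10 and the 5-vs-4 ratio a bit under 3, matching the qualitative claims in the paragraph above.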
It's not surprising that this system works. Coaches are heavily incentivized to win, and some ratings are provided by professionals who make their living scouting HS talent. But of course prediction is imperfect: there are plenty of 3 and 4 star recruits who outperform 5 stars. The issue is whether the expected return from using the predictions is positive...
University admissions committees should be able to produce an analysis of this quality or better. If the objective function to be optimized is performance at the university, the data is readily available (see below). But one could also use criteria such as eventual net worth (major donor status), fame, notable achievements, etc. to assess the quality of admissions decisions. Lots of assertions are made in this context (see comments on this NYTimes article), such as the claim that high test scores are negatively correlated with the leadership or interpersonal skills that matter in later life. This might be true, but I've yet to find any careful analysis of the claim.
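As a rough sketch of what such an analysis might look like, assuming a registrar extract with hypothetical columns hs_gpa, sat, and cum_gpa: a simple logistic regression gives the change in log-odds of whatever "success" criterion the committee chooses, per unit of each predictor, along with out-of-sample predictive accuracy.

```python
# Sketch only: assumes a file with one row per admitted student and
# hypothetical column names (hs_gpa, sat, cum_gpa). Not a real dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("admits.csv")                        # hypothetical registrar file
df["success"] = (df["cum_gpa"] >= 3.5).astype(int)    # or graduation, honors, etc.

X = df[["hs_gpa", "sat"]]
y = df["success"]

# Out-of-sample predictive power of the two standard predictors
model = LogisticRegression()
print("cross-validated AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())

# Fitted coefficients: change in log-odds of 'success' per unit of each predictor
model.fit(X, y)
print(dict(zip(X.columns, model.coef_[0])))
```

The same template works for any other outcome one cares to define (donor status, notable achievement, etc.), provided the outcome data can actually be linked back to the admissions file.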
How much can we enhance the odds of becoming a millionaire / billionaire / STEM PhD / Nobel laureate by proper filtering of applicants?
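A back-of-the-envelope answer under toy assumptions: suppose the rare outcome requires a latent trait in the extreme tail, and the admissions filter observes a proxy correlated with that trait. A quick Monte Carlo then gives the enhancement factor from selecting on the proxy. The 0.1% outcome threshold, the 1% selection cut, and r = 0.6 below are all assumptions, not estimates.

```python
# Toy model, not a claim about real admissions data.
import numpy as np

rng = np.random.default_rng(0)
n, r = 2_000_000, 0.6                      # sample size and assumed proxy/trait correlation

proxy = rng.standard_normal(n)             # what the admissions filter sees
trait = r * proxy + np.sqrt(1 - r**2) * rng.standard_normal(n)   # latent trait

outcome  = trait > np.quantile(trait, 0.999)   # rare outcome: top 0.1% of the trait
selected = proxy > np.quantile(proxy, 0.99)    # admit the top 1% on the proxy

base_rate     = outcome.mean()
selected_rate = outcome[selected].mean()
print(f"base rate: {base_rate:.4%}, among selected: {selected_rate:.4%}, "
      f"enhancement: {selected_rate / base_rate:.0f}x")
```

Even a moderately correlated filter produces a large multiplicative boost in the rate of a rare tail outcome, though the absolute rate remains small; the actual numbers depend entirely on the assumed correlation and thresholds.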
See Data mining the university and Nonlinear psychometric thresholds for physics and mathematics for predictors of college performance by major as a function of HS GPA and SAT.