Lance, Luca and Oded have all weighed in on the matter of rejection letters and the proper frame of mind one views them with. Oded makes the important point about such things being about the allocation of scarce resources, rather than the assigning of value to a paper (or a candidate).
This alternate "frame" is crucial to understanding some of the more bizarre quirks in our conference review process. Theory conferences are (in)famous for short to non-existent comments to authors; in certain other areas it is not uncommon to get pages and pages of comments back. However, you quickly learn that pages and pages of comments are usually as useless as a blank page in trying to determine why your opus didn't make it, while other pieces of %##$^&$ did.
Even the usual process of assigning scores is not useful, since a score purports to be some absolute figure of merit, whereas many papers live or die based solely on the particular mix of submissions. In some conferences, it's even worse: you are required to give a paper a rating of the form "strong/weak accept/reject or neutral", essentially passing judgement on a paper in vacuo, as it were.
Once we accept (or realize) that conferences (especially nowadays) are about the allocation of scarce resources (speaking slots, or even 10 pages in the proceedings if you still insist on paper proceedings), then it becomes clear that trying to assign absolute scores and verdicts to papers without being able to compare with others is meaningless.
As a corollary, it means that any review process where reviewers don't get to see a large fraction of the papers is flawed. It also means that fetishizing the "feedback to authors" is misleading. As Oded points out, this only gives an illusion of a value judgement where none can really be had.
It also has more radical implications for the way we design conference committees (number of people, whether PC submissions are allowed, who gets to review what). But that's a tale for another post...