How many Data Center Experts are confident, but wrong?

I missed Daniel Kahneman speaking in Seattle by a day, so I am going back and looking at videos and articles. The Seattle Times has an article on his talk.

Exploring how we truly think


Seattle Times staff columnist

I had a feeling Daniel Kahneman was going to be interesting.

My gut was right, but it isn't always, and that was the point of his talk. A lot of our thinking is messed up, but we don't know it unless we slow down and examine what our brains are doing.

That's not easy to do.

Kahneman is a Princeton psychologist (emeritus), who won the 2002 Nobel Prize in economics for pioneering work showing that people don't always make rational financial decisions.

Economists thought we did, that we weighed the facts and acted in our own best interests, but people are more complicated than that.

Does this sound familiar: a potential problem that gets swept under the rug?

When he was a young psychologist, Kahneman was put in charge of evaluating officer candidates in the Israeli Defense Force. He and his team put candidates through an exercise and saw immediately who was a leader, who was lazy, who was a team player and so on.

Much later, they got data from the soldiers' actual performance, and it turned out his team's predictions were all wrong. The experts were absolutely confident, but wrong.

Even experts take a bit of information and believe it can predict more about a person than is possible.

System 1, he said, "is a machine for jumping to conclusions." System 2 is supposed to monitor System 1 and make corrections, but System 2 is lazy.

We think we are actively evaluating then acting, but most of the time we act on unexamined input from System 1.

What is an example? Think about the assumptions made on hardware purchases: how often do people go back and evaluate the true performance of the hardware after deployment?
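
As a rough illustration, here is a minimal Python sketch of what that kind of post-deployment review could look like, comparing purchase-time assumptions against measured results. The server names, metrics, thresholds, and numbers are all hypothetical, not data from any real deployment.

```python
# A minimal sketch (hypothetical data and field names) of a post-deployment
# review: compare the assumptions made at purchase time against what the
# hardware actually delivered, and flag where the confident estimate was wrong.

PURCHASE_ASSUMPTIONS = {
    # server model -> expected requests/sec and expected watts at typical load
    "web-tier-v3": {"throughput_rps": 12000, "power_watts": 350},
    "db-node-v2":  {"throughput_rps": 8000,  "power_watts": 450},
}

MEASURED_AFTER_DEPLOYMENT = {
    # same keys, measured from monitoring over the first 90 days
    "web-tier-v3": {"throughput_rps": 9100, "power_watts": 410},
    "db-node-v2":  {"throughput_rps": 7900, "power_watts": 440},
}

TOLERANCE = 0.10  # treat anything more than 10% off the assumption as "wrong"


def review(assumed: dict, measured: dict, tolerance: float = TOLERANCE) -> None:
    """Print one line per metric showing the assumption versus reality."""
    for model, specs in assumed.items():
        actual = measured.get(model)
        if actual is None:
            print(f"{model}: no post-deployment data collected (System 2 never showed up)")
            continue
        for metric, expected in specs.items():
            observed = actual[metric]
            error = (observed - expected) / expected
            verdict = "OK" if abs(error) <= tolerance else "ASSUMPTION WRONG"
            print(f"{model} {metric}: assumed {expected}, measured {observed} "
                  f"({error:+.1%}) -> {verdict}")


if __name__ == "__main__":
    review(PURCHASE_ASSUMPTIONS, MEASURED_AFTER_DEPLOYMENT)
```

The point is less the code than the habit: System 2 only gets a say if someone actually schedules the comparison between what was assumed and what happened.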