It is often claimed that in response to certain types of evidence, we are rationally required to adopt belief states that are, in a sense, “mushy.” These belief states are thought to be well represented by imprecise probabilities: sets of probability functions, rather than single ones. Imprecise probabilities, or ranges of probabilities (like 60–80%), have also been advocated in the reporting of scientific information to policy-makers, and could potentially be relevant in the design of artificial intelligence systems. The question at the center of this talk is: what might imprecise probabilities do for us? In other work I’ve argued that we cannot expect being imprecise to lead to a more accurate representation of the world. Here I focus on the following question: should we expect the adoption of imprecise probabilities (rather than, for example, an arbitrarily chosen precise probability function from the set) to deliver any benefits when it comes to decision making? I will argue that we should not. If we can’t expect being imprecise to help us make better decisions, we need to rethink whether there are good reasons to use imprecise probabilities in contexts in which good decision making is the primary concern.