Do you believe that we can scientifically quantify what it means to be moral? Harris believes so, and argues that we can find ways to maximize human well-being through scientific measures. His claim rests on the idea that increasing human well-being should be the goal of morality. He defends this by stating that anything we should care about in relation to morality is something that actually affects our well-being. In other words, while some moral systems reference abstract values, he believes that the only thing we should really define as moral is something that increases human well-being.
I imagine a counterexample.
What if scientists could invent a machine that gave its users unlimited pleasure and took care of their well-being to the utmost? This machine would have no drawbacks: there would be no "hangover" from leaving it, and it could faithfully simulate the greatest pleasures of real life. It would not be a Matrix-like machine; rather, we would all be conscious of our participation, and its effects would be just as good as any other source of pleasure or well-being. A thin layer of professionals might be required to keep these machines running, but in their off hours they too would be hooked up. It seems that under Harris's framework, it would be moral to hook us all up to this machine.
Of course, I am writing this because I have an intuition that this conclusion is incorrect. There must be some factor at play other than a simple increase in aggregate human well-being. Is it our freedom to actually create worse consequences, to actually lower human well-being, that in a roundabout way is what we really mean when we think of morality? I hope discussion leads to more ideas here, as I can't carry this train of thought forward at this time without further reflection and discussion.