I'm an engineer, so my job is to understand the tradeoffs inherent in designing a device to suit a purpose. Utilitarianism appealed to me because it treats all ethical decisions as tradeoffs. Trying to pin down a definition, I concluded that a tradeoff is any attempt to compare incomparables: e.g., maximum speed versus average power consumption, or cash on hand now versus a distribution of returns later. There is no natural ordering of such values, so engineers fall back on (hopefully) their client's utility function: Q(low max speed, low power consumption) > Q(high max speed, high power consumption) is meaningful.
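As a toy illustration (the designs and weights below are invented, not any real client's preferences), a utility function Q collapses two incomparable axes into a single comparable number:

```python
from typing import NamedTuple

class Design(NamedTuple):
    max_speed: float  # m/s
    avg_power: float  # watts

# Hypothetical client utility: speed is rewarded, power draw is penalized.
# The weights 2.0 and 0.5 are assumptions standing in for the client's values.
def q(d: Design) -> float:
    return 2.0 * d.max_speed - 0.5 * d.avg_power

slow_frugal = Design(max_speed=10.0, avg_power=5.0)
fast_hungry = Design(max_speed=12.0, avg_power=30.0)

# Neither design dominates the other on both axes, but Q imposes an order:
print(q(slow_frugal) > q(fast_hungry))  # True
```

A different client (different weights) would reverse the ordering, which is exactly why the comparison only exists relative to a utility function.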
Since tradeoffs can only be made in the presence of a utility function, utilitarianism requires a global utility function. I started to think about the properties such a function would have. For it to be useful in making moral decisions, it must be computable, or at least approximable, in time sublinear in the number of inputs, since utilitarianism says that everything is potentially an input -- potentially the entirety of the moral actor's past light cone. It must also be able to concretely balance short-term gains against long-term gains, unless one is to forever sacrifice present good for some distant utopia.
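The sublinearity requirement can be made concrete with a sketch. Suppose, purely as an assumption, that the global utility were a mean over an enormous number of inputs: computing it exactly is linear in the input count, but a random-sampling estimator touches only a fixed number of inputs, independent of how many exist.

```python
import random

random.seed(0)

# Toy "world state": one welfare number per input (e.g., per person).
# The distribution is an arbitrary assumption for the demonstration.
world = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]

def exact_mean(xs):
    # Exact aggregate utility: cost grows linearly with len(xs).
    return sum(xs) / len(xs)

def sampled_mean(xs, k=1000):
    # Sampling estimator: cost depends on k, not len(xs) -- sublinear.
    return sum(random.choice(xs) for _ in range(k)) / k

print(abs(exact_mean(world) - sampled_mean(world)))
```

Of course, this only works because a mean is an unusually friendly aggregate; the essay's worry is precisely that the true global utility function may not belong to such a friendly class.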
Then I realized that my client's utility function is part of the dynamic system that describes them -- it might be better for my client that they change their utility function than that I actually remove the seatbelts to save weight. This is not a problem for an engineer, since my job is to understand what the client wants, not to worry about their mental health (within reason). But utilitarianism equates "good" with "utility", so I can't make conative assumptions; if I did, the global utility function would just be a fancy name for a personal god for which I have no evidence. This is not fatal to a global utility function, since many functions have fixed points, but it does eliminate many classes of simple functions, so there is reason to be skeptical that there exists an approximation requiring only a very small portion of the inputs to the global utility function.
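The fixed-point remark can be illustrated with a deliberately simple example: a contraction map, iterated on its own output, converges to a state that the update rule no longer changes. The particular function below is an arbitrary assumption -- an existence sketch, not a model of anyone's preferences.

```python
# f "updates" a one-dimensional preference parameter in response to itself.
# Because f is a contraction (slope 0.5 < 1), iteration converges to the
# unique fixed point x* satisfying f(x*) = x*; here x* = 2.0.
def f(x: float) -> float:
    return 0.5 * x + 1.0

x = 0.0
for _ in range(50):
    x = f(x)

print(round(x, 6))  # 2.0
```

A utility function that is itself subject to change would analogously need some such self-consistent point to be a stable target at all -- and most interesting update rules are not contractions, which is part of the reason for skepticism.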
These criteria (approximability, tractability, and horizon-independence) are not trivial, so I can't just assert that such a function exists in the way that any other mental abstraction exists. Unless I have positive reason to believe that such a function exists and that I can apply it to solve moral questions, I have no business calling myself a utilitarian.
So for now, I'm just an ex-utilitarian who thinks that the techniques engineers use to optimize designs can be useful in thinking about moral priorities, but that there are probably other principled ways to the same end.