Saturation of the input transducer is a common difficulty with minimum-variance controllers, and it causes the variance to be significantly greater than in the unconstrained case. It is therefore desirable to reduce the likelihood of saturation. In this letter, this is achieved by deriving a control law which minimises the variance over a time interval rather than at a single future instant.
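As a sketch of the distinction (not the letter's exact formulation): where the standard minimum-variance controller minimises the expected squared output at one future instant, the proposed law minimises it summed over an interval. Writing the output as y(t), with k the plant dead time and N the interval length (symbols introduced here for illustration, not taken from the letter), the two criteria might be contrasted as

\[
J_1 = E\left\{ y^2(t+k) \right\}
\qquad\text{versus}\qquad
J_N = E\left\{ \sum_{j=k}^{k+N-1} y^2(t+j) \right\}.
\]

Spreading the minimisation over N steps yields a less aggressive control signal at each step, which is what reduces the likelihood of driving the input transducer into saturation.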