There is a clear need to involve patients in medical decisions. However, cognitive psychological research has highlighted human cognitive limitations with respect to (1) probabilistic assessment of the patient's state and of the potential outcomes of various decisions, (2) elicitation of the patient's utility function, and (3) integration of the probabilistic knowledge and the patient's preferences to determine the optimal strategy. Therefore, without adequate computational support, current shared decision-making models have severe ethical deficiencies. An informed-consent model unfairly transfers responsibility to a patient who has neither the necessary knowledge nor the integration capability. A paternalistic model grants exaggerated power to a physician who might not be aware of the patient's preferences, is prone to multiple cognitive biases, and whose computational integration capability is bounded. Recent progress in Artificial Intelligence suggests adding a third agent, a computer, to all deliberative medical decisions: non-emergency medical decisions in which more than one alternative exists, the patient's preferences can be elicited, the therapeutic alternatives might be influenced by these preferences, medical knowledge exists regarding the likelihood of the decision outcomes, and there is sufficient decision time. Ethical physicians should exploit computational decision-support technologies, neither making the decisions solely on their own nor shirking their duty and shifting the responsibility to patients in the name of informed consent. The resulting three-way (patient, care provider, computer) human-machine model that we suggest emphasizes the patient's preferences, the physician's knowledge, and the computational integration of both aspects; it does not diminish the physician's role, but rather brings out the best in human and machine.
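The integration step named in point 3 corresponds, in standard decision-analytic terms, to expected-utility maximization over the therapeutic alternatives. The following is only an illustrative sketch of that computation, not the authors' system; every option name, probability, and utility value here is hypothetical.

```python
# Illustrative sketch: combining outcome probabilities (medical knowledge)
# with elicited patient utilities (preferences) via expected utility.
# All names and numbers below are hypothetical placeholders.

def expected_utility(outcome_probs, patient_utilities):
    """Sum P(outcome) * U(outcome) over the possible outcomes of one option."""
    return sum(p * patient_utilities[outcome]
               for outcome, p in outcome_probs.items())

# Hypothetical patient utilities on a 0 (worst) to 1 (best) scale.
utilities = {"full_recovery": 1.0, "partial_recovery": 0.6, "complication": 0.1}

# Hypothetical outcome probabilities for two therapeutic alternatives.
options = {
    "surgery":    {"full_recovery": 0.7, "partial_recovery": 0.10, "complication": 0.20},
    "medication": {"full_recovery": 0.5, "partial_recovery": 0.45, "complication": 0.05},
}

# The "optimal strategy" is the option with the highest expected utility.
best = max(options, key=lambda o: expected_utility(options[o], utilities))
```

Note that the ranking of options can flip when the utilities change, which is exactly why eliciting the individual patient's preferences, rather than assuming them, matters in this model.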