Assistive robotic technology is increasingly employed across many industries, including health care. One of the most important capabilities of this technology is autonomous verbal communication. We propose a new theory of autonomous agents based on the five human senses, and then address one of them: speech. Our approach to developing autonomous verbal communication applies deep learning to learn about different topics in health care. We developed a novel approach in which we created a question-answer dataset from articles and interviews with physician specialists from U.S. National Public Radio (NPR). We trained a deep learning model that listens to conversations between a patient and a physician and answers questions when the physician cannot answer, or answers them only partially. We discuss this health-science corpus, which shows what NPR can teach a machine, and we describe how the corpus is used to train a deep learning model for Pepper, a humanoid robot that can help provide care for elderly individuals diagnosed with early-stage dementia.
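One way to build question-answer pairs from interview material like the NPR transcripts described above is to pair each question turn with the answer turn that immediately follows it. The sketch below is a minimal, hypothetical illustration of that idea; the speaker labels, transcript format, and pairing heuristic are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch: extract question-answer pairs from a turn-based
# interview transcript. A turn is treated as a question if its text ends
# with "?" and the next turn comes from a different speaker.

def extract_qa_pairs(turns):
    """Pair each question turn with the immediately following answer turn.

    `turns` is a list of (speaker, utterance) tuples in transcript order.
    Returns a list of {"question": ..., "answer": ...} dictionaries.
    """
    pairs = []
    for (speaker, text), (next_speaker, next_text) in zip(turns, turns[1:]):
        if text.rstrip().endswith("?") and next_speaker != speaker:
            pairs.append({"question": text, "answer": next_text})
    return pairs

# Toy transcript with invented speaker labels and utterances.
sample = [
    ("HOST", "What are the early signs of dementia?"),
    ("PHYSICIAN", "Short-term memory lapses and difficulty finding words."),
    ("HOST", "Thank you for joining us."),
]
qa_dataset = extract_qa_pairs(sample)
```

Pairs produced this way could then serve as training examples for a question-answering model; in practice, manual review would be needed to filter rhetorical questions and multi-turn answers.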