As we continue our discussion of Blake Lemoine’s assertion that the large language model chatbot LaMDA had become sentient, we relay the rest of his conversation with the program, followed by some questions and answers with Lemoine himself. But as Lemoine has said, machine sentience and personhood are only two of many questions to be considered. His greater concern is how an omnipresent AI, trained on an insufficient data set, will affect how different peoples and cultures interact, and who will be dominated or excluded. The fear is that protecting corporate profits will ultimately outweigh global human interests. In light of these questions about AI’s ethical and efficient development, we highlight the positions and insights of experts on the state and future of AI, such as Blaise Agüera y Arcas and Gary Marcus.