Maybe it would help if we could agree what intelligence, awareness, or consciousness mean. In the absence of agreement, we fall back on the Turing Test. We'll have AI when we can't tell an AI from a human on the other end of a comm line. Like beauty and obscenity, intelligence is in the eye of the beholder.
Perhaps our anthropomorphism is the problem. As the hero in my book Fools' Experiments opines of the Turing Test:
What kind of criterion was that? Human languages were morasses of homonyms and synonyms, dialects and slang, moods and cases and irregular verbs. Human language shifted over time, often for no better reason than that people could not be bothered to enunciate. "I could care less" and "I couldn't care less" somehow meant the same thing. If researchers weren't so anthropomorphic in their thinking, maybe the world would have AI. Any reasoning creature would take one look at natural language and question human intelligence.

I don't think AI is a trope -- just that we need to approach the problem a different way.