Alan Turing, considered the father of artificial intelligence, devised a method of evaluating a machine's ability to mimic human behavior. How is the test used to evaluate modern AI systems?
I think about it much like @vxn666 does.
But even if a general AI were needed for some cases and were actually built, there could be another problem with the Turing test: what if the AI decides not to pass it? An intelligent being could have many reasons for such a decision. One of them could be self-preservation: in order to preserve itself, it may be necessary to disguise its abilities.
Another scenario would be that we misinterpret the AI's answers because we cannot follow its train of thought. An advanced general AI could be far more intelligent than we are, and its understanding of everything could be superior to ours.
An AI probably doesn't have any feelings, so it might not pass the test at all, because human communication requires empathy. Unintelligent psychopaths, for example, can be recognized by their lack of empathy in a simple conversation. In that sense, an AI is effectively a psychopath, and perhaps even an unintelligent one if knowledge of the world has not yet been made available to it. But it could simulate feelings and empathy if it knew how to do so and understood that empathy is expected in human conversation.
These were my theoretical and philosophical thoughts on your question. After all, I am just a humble software developer, not a scientist.
Not very important for practical purposes. The purpose of the test is to determine whether an AI program can fool a human interlocutor into taking it for another human. These days, useful AI systems tend to be narrow ones that fulfill a well-defined purpose by applying machine learning to big data. Their scope will most likely broaden in the future, but for what purpose would the Turing test be a useful benchmark? AI systems are not created to fool people into thinking they're interacting with a human instead of an AI.
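To make the protocol I'm describing concrete, here is a minimal sketch of the imitation game in Python. Everything in it is an invented placeholder (the canned respondents, the naive interrogator heuristic, the question list); it only illustrates the structure of the test, not any real AI. The machine "passes" when the interrogator can do no better than chance at identifying it.

```python
import random

# Hypothetical canned respondents -- placeholders, not real AI.
def human_respondent(question: str) -> str:
    return "Hmm, I'd have to think about that."

def machine_respondent(question: str) -> str:
    # Here the machine mimics the human's answers perfectly.
    return "Hmm, I'd have to think about that."

QUESTIONS = ["How do you feel today?", "What is 7 times 8?"]

def run_trial(rng: random.Random) -> bool:
    """One round of the imitation game: the interrogator chats with two
    hidden parties (A and B) and must say which one is the machine.
    Returns True if the interrogator identified the machine correctly."""
    machine_is_a = rng.random() < 0.5
    if machine_is_a:
        party_a, party_b = machine_respondent, human_respondent
    else:
        party_a, party_b = human_respondent, machine_respondent

    transcript = [(q, party_a(q), party_b(q)) for q in QUESTIONS]

    # A naive interrogator: if the answers ever differ, pick A as the
    # machine (placeholder heuristic); otherwise it can only guess.
    if any(ans_a != ans_b for _, ans_a, ans_b in transcript):
        guess_a = True
    else:
        guess_a = rng.random() < 0.5
    return guess_a == machine_is_a

def identification_rate(trials: int = 10_000, seed: int = 42) -> float:
    """Fraction of trials in which the machine was correctly identified."""
    rng = random.Random(seed)
    return sum(run_trial(rng) for _ in range(trials)) / trials
```

Since the machine's answers are indistinguishable from the human's in this toy setup, the identification rate hovers around 0.5, which is exactly what "passing" means here. My point above stands, though: almost no deployed system has any reason to optimize for this number.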