The terms strong and weak don't actually refer to processing power, optimization power, or any interpretation under which "strong AI" would be stronger than "weak AI". That reading happens to hold conveniently in practice, but the terms come from elsewhere. In 1980, John Searle formulated the following hypotheses:
- AI hypothesis, strong form: an AI system can think and have a mind (in the philosophical definition of the term);
- AI hypothesis, weak form: an AI system can only act like it thinks and has a mind.
So "strong AI" is shorthand for an AI system that satisfies the strong form of the hypothesis; similarly for the weak form. The terms have since evolved: strong AI now refers to AI that performs as well as humans (who have minds), while weak AI refers to AI that doesn't.
The problem with these definitions is that they're fuzzy. For example, AlphaGo is weak AI, yet it is "strong" by Go-playing standards. Conversely, a hypothetical AI replicating a human baby would be a strong AI, while being "weak" at most tasks.
Other terms exist: Artificial General Intelligence (AGI) describes a system with cross-domain capability (like humans) that can learn from a wide range of experiences (like humans), among other features. Artificial Narrow Intelligence (ANI) refers to systems bound to a limited range of tasks (where they may nevertheless have superhuman ability) and lacking the capacity to significantly improve themselves.
Beyond AGI, we find Artificial Superintelligence (ASI), based on the idea that a system with the capabilities of an AGI, freed from the physical limitations of humans, would learn and improve far beyond human level.