The current view suggests AI excels at divergent thinking (generating numerous possibilities), while humans are better suited to convergent thinking (refining ideas into clear solutions). Now that AI is gaining reasoning abilities, I think this view needs to be challenged.
AI often has access to huge amounts of data; it can apply logic to that data and weigh the outcomes, selecting the most viable solution for the desired goal. For instance, when tasked with recommending a marketing strategy, AI doesn't just look at past campaign success: it now factors in current market trends, consumer sentiment, and even projected future shifts. It's not just throwing out ideas; it's reasoning through them.
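To make that "weighing the outcomes" step concrete, here is a minimal sketch of the underlying idea: candidate options are scored against weighted factors, and the highest-scoring one is selected. All factor names, weights, and scores below are invented for illustration; no real marketing system is implied.

```python
# Toy sketch of "generate candidates, weigh outcomes, pick the most viable".
# Every factor name, weight, and score here is hypothetical.

def score(candidate, weights):
    """Weighted sum of a candidate's factor scores (each in 0.0 to 1.0)."""
    return sum(weights[factor] * candidate[factor] for factor in weights)

def pick_strategy(candidates, weights):
    """Convergent step: reduce many options to the single best-scoring one."""
    return max(candidates, key=lambda name: score(candidates[name], weights))

# Hypothetical factors an AI might weigh for a marketing strategy.
weights = {"past_success": 0.3, "market_trend": 0.4, "consumer_sentiment": 0.3}

candidates = {
    "social_push":  {"past_success": 0.9, "market_trend": 0.4, "consumer_sentiment": 0.6},
    "email_series": {"past_success": 0.5, "market_trend": 0.8, "consumer_sentiment": 0.7},
}

print(pick_strategy(candidates, weights))  # prints "email_series"
```

The selection step at the end is convergent thinking in miniature: many options go in, one reasoned choice comes out.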
Humans approach problems on a smaller scale and throw intuition and experience into the mix. Our thought process is shaped by personal biases, emotions, and unique experiences.
This separation is the baseline for the argument of divergent versus convergent thinking. The common assumption is that human thinking differs from AI because we bring emotions and morals to bear on our decisions. But what if our feelings and ethics are actually learned guidelines? We often make choices based on past experiences and societal teachings. Is intuition not simply the extraction of patterns from the data lakes in our brains? Our minds use these patterns to navigate decisions, much like AI relies on data and algorithms.
Following that thought, convergent thinking could very well be possible for AIs as well: perhaps not today, but perhaps not that far off either.
AI thrives on objectivity and scale: it evaluates data systematically, without bias or fatigue. Humans, however, excel at subjective reasoning, drawing on intuition, empathy, and creative thinking. And this leads back to my argument: don't look at AI as just another tool in our toolset, but as a partner in our endeavours.