Artificial intelligence may be transforming the financial industry, but some firms are beginning to push back against a growing trend: graduates who rely too heavily on AI tools without demonstrating deeper analytical thinking. This pushback is reshaping hiring practices at major banks, hedge funds, and investment firms, as senior executives grow wary of candidates whose polished outputs often mask a lack of original thought.
According to a report by the Financial Times, the issue recently surfaced through experiences shared by senior finance professionals, including one New York financier who described his company's 2025 interns as the first group of “true AI natives.” These students had grown up using both digital platforms and generative AI systems, and initially appeared highly capable during recruitment. However, problems emerged once senior executives began testing their ideas more closely. While presentations and outputs looked polished, many responses reportedly lacked depth, originality, and independent reasoning. The result was a reduction in return offers and a shift in hiring priorities toward candidates with stronger critical thinking skills, including those from humanities backgrounds.
Finance Firms Want More Than AI Fluency
The broader finance industry continues to invest aggressively in AI. Major firms such as JPMorgan and Visa increasingly describe themselves as technology-driven businesses, while Nvidia recently reported that most finance executives believe AI is becoming critical to future growth. Yet despite this enthusiasm, real-world results remain mixed. A recent survey by Cambridge Judge Business School found that although more than 80 percent of financial firms now use AI, most deployments remain focused on back-office tasks rather than core strategic functions. The same survey also showed that many companies are struggling to measure AI's actual business impact. Only a minority reported meaningful profit gains, while a large percentage said AI had produced little noticeable financial change so far.
This disconnect is beginning to influence hiring and workplace expectations. Instead of simply looking for candidates who can use AI tools effectively, employers increasingly want people who can challenge AI-generated outputs, identify weaknesses, and apply independent judgment. For instance, at one leading investment bank, interviewers now specifically ask candidates to critique an AI-generated analysis of a financial model. Those who simply accept the output without questioning underlying assumptions are quickly disqualified. Another firm has started requiring all new analysts to complete a “critical thinking bootcamp” before they are allowed to use any AI tools on the job.
The Humanities Advantage
The shift toward valuing depth over AI fluency has led some finance firms to actively recruit from non-traditional backgrounds. Philosophy, history, and English literature graduates are increasingly sought after for their ability to analyze arguments, identify logical flaws, and communicate complex ideas clearly. One prominent hedge fund manager told the Financial Times that some of his best-performing analysts came from the humanities, not because they knew finance, but because they knew how to think. “We can teach them accounting in a month,” he said. “We can't teach them how to think.” This sentiment is echoed by other senior figures who warn that over-reliance on AI is creating a generation of “intellectual copycats” who can generate impressive-looking work but lack the foundational reasoning to solve novel problems.
Why This Matters Beyond Finance
The trend reflects a broader shift happening across industries. AI skills are becoming common, but companies are starting to differentiate between people who rely on AI for answers and those who can think critically alongside it. For students and young professionals, this could reshape what employers value most. Technical knowledge and AI familiarity remain important, but they are no longer enough on their own. Communication skills, reasoning, adaptability, and deeper subject understanding are becoming equally important in an AI-driven workplace. In the consulting industry, for example, several top firms have redesigned their case interview processes to explicitly test how candidates integrate AI outputs into their analysis. A candidate who presents a ChatGPT-generated market sizing without explaining the assumptions or verifying the numbers is now likely to receive a rejection.
Meanwhile, regulators are also becoming more cautious about AI's role in finance. Concerns around AI hallucinations, cyber risks, and automated decision-making are pushing financial authorities to develop safer testing frameworks and oversight mechanisms. The European Central Bank has published guidelines requiring financial institutions to document all AI-assisted decisions and to ensure that human oversight remains a central component of any automated process. In the United States, the Securities and Exchange Commission has launched investigations into firms that may have used AI tools to generate misleading client reports. These regulatory pressures are further accelerating the demand for employees who can critically evaluate AI outputs and assume accountability for the final product.
The Bigger Challenge Ahead
The growing consensus within finance appears to be that AI is most effective as an enhancement tool rather than a replacement for human thinking. As adoption accelerates, the firms likely to benefit most may not be those using the most AI, but those combining automation with employees capable of strong judgment and original analysis. This is already visible in the way some top firms are structuring their teams. At JPMorgan, for instance, new AI tools are deployed only after a two-week review period during which a committee of experienced traders and analysts thoroughly tests the outputs against real market data. Any discrepancy, no matter how small, requires the tool to be redesigned before it can be used in any live trading environment.
The trend also has implications for university curricula. Business schools are beginning to incorporate “critical AI literacy” into their core courses, teaching students not just how to use generative models but how to spot errors, evaluate sources, and construct original arguments. At Harvard Business School, a new mandatory module requires all first-year students to complete an “AI skepticism” project in which they must deliberately find and document at least five significant errors in a set of AI-generated financial reports. The module is already popular, with many students reporting that it transformed their understanding of AI's limitations.
As the finance industry matures in its use of AI, the definition of “AI-native” may also need to evolve. Being born into a world of AI tools is no longer a competitive advantage if everyone else has the same tools. The differentiator will be the ability to use those tools as a springboard for deeper insight, not as a shortcut for thinking. Firms are already reporting that return offers for 2026 internships have doubled compared to the previous year for candidates who demonstrated strong critical thinking in their interviews, even if their technical AI skills were only average. In contrast, interns who were highly proficient with AI but could not explain the reasoning behind their analyses are seeing their offers rescinded at an unprecedented rate.
That shift could redefine hiring trends over the next few years – and may explain why some finance firms are no longer fully sold on the “AI-pilled” graduate. The industry is beginning to realize that the most valuable hire in an age of information abundance is not the person who can summon information fastest, but the one who knows what information is worth summoning, and why. As one senior executive put it: “We don't need people who can ask the right questions of a machine. We need people who know what questions are worth asking in the first place.” This nuanced understanding, grounded in years of human experience and cross-disciplinary training, is exactly what the current crop of AI-native graduates too often lacks. And until they can pair their technical fluency with genuine intellectual depth, finance firms will continue to look elsewhere for the leaders of tomorrow.
Source: Digital Trends News