Artificial Intelligence vs. the Human Brain

Apr. 14, 2025
Illustrated animation by Sam Island of robots at a conference

Sure, it would have been cute to run the hundred-page transcript of Columbia’s recent AI Summit through ChatGPT and receive, in seconds, a concise summary of the daylong event and then publish that as a short article to illustrate the utility and growing influence of artificial intelligence. But such an exercise would miss the point. In fact, the summit, which showcased Columbia’s leadership in developing, applying, and understanding this transformative technology, demonstrated that as adept as AI has become, human beings are still firmly in the driver’s seat (driverless AI-guided cars notwithstanding).

Organized by Columbia’s Data Science Institute and held on the Morningside, Manhattanville, and medical campuses, the AI Summit featured seven panels of Columbia faculty from a range of disciplines, including science, sociology, public health, business, law, architecture, economics, journalism, and the arts. The panelists explained how AI can benefit everything from medical research to sustainable design to workplace safety. GSAPP architecture professor David Benjamin ’05GSAPP described using AI-powered generative design to develop energy-efficient structures; energy-policy expert David Sandalow, the inaugural fellow at SIPA’s Center on Global Energy Policy, explained how AI is being used to predict solar and wind patterns to improve green-energy production; and systems biologist Raúl Rabadán discussed his use of AI to reveal hidden molecular interactions within human cells.

The panelists theorized about AI’s potential, limitations, and social impacts and asserted that while AI surpasses the human brain in computational power, it falls short in emotional, social, creative, and tactile intelligence. AI can do “a lot of things really shallowly,” said computer scientist Lydia Chilton, while humans are “better at going deep and being experts.” Keynote speaker Sami Haddadin, a roboticist at the Mohamed bin Zayed University of Artificial Intelligence, talked about the progress and difficulty in getting AI robots to move adroitly and interact safely and reasonably with their surroundings. There was general agreement that, as computer scientist Christos Papadimitriou said, “AI is still far behind human brains.”

But the biggest questions centered on ethics and the need to regulate AI, which is susceptible to misuse. Garud Iyengar, director of the Data Science Institute and the organizer of the summit, emphasized that Columbia, with its world-renowned faculty across disciplines, was “uniquely positioned” to “ensure AI reflects human values and serves the public good.” (To learn more, visit ai.columbia.edu.)

Some panelists saw an uphill battle in taming AI. Joseph Stiglitz, the Nobel-winning economist, said the moneymaking incentives of AI firms are “not aligned with society”; computer scientist Rachel Cummings referred to a “Wild West” where people submit personal information to AI tools without knowing what’s happening to the data; law professor Clare Huntington ’96LAW pointed out that AI services like talk therapy aren’t held to any licensing standards; and visual-arts professor Naeem Mohaiemen ’19GSAS said he expected “massive dislocations in the valued, compensated work of creatives” due to AI’s ability to generate text, art, and music. Sociologist Gil Eyal, meanwhile, spoke of a societal crisis of trust in experts and of a widespread notion that because human judgment is biased, it should be replaced with the “mechanical objectivity” of AI. That notion is troubling, as computer scientist Elias Bareinboim noted, because AI systems are trained on data from the real world, which is itself biased, leaving them “prone to unfair and unethical decision-making.”

For all the faculty firepower on display, the biggest star of the AI Summit was embodied AI — better known as robots — which in 2025 are assisting in surgery, laboring in industry and agriculture, and acting as home companions. It’s a bright new world, to be sure, though the cultural baggage of movies and books depicting intelligent machines gone rogue still weighs heavily. Law professor Tim Wu, a scholar of technology policy, invoked the First Law of Robotics, as set down by Isaac Asimov ’39GS, ’48GSAS, ’83HON in his 1950 sci-fi collection I, Robot: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Wu warned that we were poised to “blow through that quickly and end up with unmanned, artificially intelligent killing.”

The lesson of the day was that AI’s future is still up to us. As the panelists made clear, AI promises to affect our health, our work, our environment, and our knowledge and will reshape life as we know it. “We need to start preparing for this new world now,” said law professor Huntington. “The bots, they’re not just knocking on the door. They are already in the room.”
