On September 23, 2025, the academic world took notice when Dong Wang, professor at the University of Illinois School of Information Sciences, released a landmark book that could redefine how we build artificial intelligence: not just smarter, but more human. Titled ‘Social Intelligence: The New Frontier of Integrating Human Intelligence and Artificial Intelligence in Social Space’ and published by Springer Nature, the book isn’t just another tech manifesto. It’s a practical blueprint for designing AI that listens, adapts, and collaborates with people rather than replacing them. Co-authored by Lanyu Shang (PhD ’24, Loyola Marymount University) and Yang Zhang (Miami University), the work introduces the emerging field of social intelligence: the idea that AI’s greatest potential lies not in outsmarting humans, but in working alongside them, especially in messy, real-world spaces like social media, disaster zones, and smart cities.
The Human-Centered AI Revolution Is Here
Wang and his team don’t just theorize. They deliver tools. The book lays out multimodal AI frameworks that combine text, images, and voice to detect misinformation on platforms like X (formerly Twitter) and TikTok, and it proposes explainable AI models that can tell you why a system flagged a post as harmful, not just that it did (a minimal sketch of this fusion pattern appears at the end of this section). One case study shows how AI helped assess flood damage in rural Bangladesh by analyzing photos uploaded by locals, reducing response time by 60%. Another tracks how crowdsourced data from schoolteachers improved personalized learning algorithms in under-resourced districts. "The future of intelligence is not human versus AI," Wang said, "but human with AI, working together to solve complex social challenges."
This thinking isn’t isolated. On April 4, 2025, the World Economic Forum published a sobering analysis: "The AI revolution will separate winners and losers." The winners, it argued, won’t be the companies with the most powerful GPUs, but those that master bottom-up adoption. The Forum’s four-point plan: find the super users already tinkering with AI; form peer coaching circles; reimagine workflows rather than simply automating them; and build collective resilience. One international manufacturer, for example, didn’t just add chatbots to customer service; it redesigned the entire support ecosystem, cutting resolution time in half while boosting satisfaction scores.
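As promised above, here is a minimal sketch of that late-fusion pattern. It is an illustration in Python, not the book’s actual models: the encoder dimensions and the random stand-in embeddings are assumptions, and a real pipeline would substitute pretrained text and image encoders.

```python
# Minimal sketch of late-fusion multimodal misinformation detection.
# Pretrained encoders are stood in by random tensors; in practice you
# would feed in e.g. BERT text embeddings and ViT image embeddings.
import torch
import torch.nn as nn

TEXT_DIM, IMAGE_DIM, HIDDEN = 768, 512, 256  # assumed encoder sizes

class FusionClassifier(nn.Module):
    """Concatenate text and image embeddings, then score benign vs. harmful."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(TEXT_DIM + IMAGE_DIM, HIDDEN),
            nn.ReLU(),
            nn.Linear(HIDDEN, 2),  # logits: [benign, harmful]
        )

    def forward(self, text_emb, image_emb):
        return self.fuse(torch.cat([text_emb, image_emb], dim=-1))

model = FusionClassifier()
text_emb = torch.randn(4, TEXT_DIM)    # stand-in for a text encoder's output
image_emb = torch.randn(4, IMAGE_DIM)  # stand-in for an image encoder's output
probs = model(text_emb, image_emb).softmax(dim=-1)
print(probs[:, 1])  # per-post probability that the content gets flagged
```

A feature-attribution method such as integrated gradients, run over a model like this, is one common way to produce the "why was this flagged?" explanations the book calls for.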
Education Is the Missing Link
But here’s the problem: most engineers aren’t being trained to think this way. Enter the National Science Foundation. In early 2025, it awarded a $200,000, two-year grant to the Boston College Engineering Department to develop a new curriculum called Human-Centered Algorithm Design (HCAD). Led by researchers including Hira and Ranger, the project aims to embed ethics, social impact, and user empathy into every coding assignment. "AI systems are shaping how people make decisions in healthcare, transportation, education," Ranger noted. "Engineers need to understand the consequences before they press ‘deploy.’" By 2026, the team plans to release open-source syllabi, student project portfolios, and assessment metrics: tools any university, from a small liberal arts college to a large tech school, can adapt. Early pilot data shows students who learned through HCAD were 40% more likely to consider bias mitigation in their models than peers taught traditional machine learning.
Reality Check: Most Companies Are Still Just Playing
Despite all this momentum, the numbers tell a different story. On November 17, 2025, EY cited a McKinsey & Company report revealing that roughly two-thirds of companies are still in the experimental phase, running pilot projects, testing chatbots, or staging AI demos, but haven’t integrated AI into core operations. Why? Lack of clear ethical guidelines. Uncertainty about ROI. And too many leaders still see AI as a cost-cutting tool, not a collaboration engine. That’s where UNESCO’s recent push comes in. In late 2025, Bangladesh launched its first national AI Readiness Assessment Report, a rare public framework prioritizing inclusivity, transparency, and human rights. Meanwhile, EY highlighted emerging safeguards like watermarking AI-generated content and embedding provenance metadata to combat deepfakes, a necessary step as misinformation spreads faster than ever.
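The provenance-metadata idea can be sketched in a few lines. The snippet below is a minimal illustration assuming a PNG workflow and a shared signing key; the field names, key handling, and HMAC scheme are my assumptions, not an EY recommendation or a standard like C2PA. It embeds a signed "AI-generated" label in the image’s metadata so downstream tools can check origin.

```python
# Illustrative provenance tagging: embed and verify a signed
# "AI-generated" label in a PNG text chunk. Field names and the
# signing scheme are illustrative assumptions, not a standard.
import hmac
import hashlib
from PIL import Image
from PIL.PngImagePlugin import PngInfo

SECRET_KEY = b"demo-signing-key"  # assumption: held by the content generator

def tag_ai_generated(in_path: str, out_path: str) -> None:
    """Save a copy of the image (as PNG) carrying a signed provenance label."""
    img = Image.open(in_path)
    label = "ai_generated=true"
    sig = hmac.new(SECRET_KEY, label.encode(), hashlib.sha256).hexdigest()
    meta = PngInfo()
    meta.add_text("provenance", label)
    meta.add_text("provenance_sig", sig)
    img.save(out_path, pnginfo=meta)  # out_path assumed to end in .png

def verify_tag(path: str) -> bool:
    """Check that the provenance label is present and its signature matches."""
    img = Image.open(path)  # assumed PNG, so the .text chunk dict is available
    label = img.text.get("provenance", "")
    sig = img.text.get("provenance_sig", "")
    expected = hmac.new(SECRET_KEY, label.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Metadata like this can be stripped by re-encoding the file, which is why the safeguards EY highlights pair it with watermarking that embeds the signal in the pixels themselves.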
Real-World Impact: CodeFest 2025
The shift is already visible in action. On October 10, 2025, Virginia Tech’s Pamplin College of Business hosted CodeFest 2025, a 48-hour hackathon themed "From Booking to Belonging." Teams from Marriott International partnered with students to build AI tools that don’t just book rooms; they foster connection. One team created a health-aware travel platform that adjusts itinerary recommendations based on a user’s chronic condition or mental wellness data. Another slashed hotel check-in times from seven minutes to under 30 seconds, not by adding more servers, but by redesigning the entire experience around human anxiety and fatigue.
What’s Next?
The next 18 months will be pivotal. The University of Illinois team plans to launch a public repository of their social intelligence models for researchers worldwide. Boston College’s HCAD materials will go live in early 2026. And if the World Economic Forum’s prediction holds true, we’ll start seeing entire industries (healthcare, logistics, education) reorganize around human-AI teams, not just algorithms. The question isn’t whether AI will change our lives. It already has. The real question is: who gets to decide how? The answer may lie not in Silicon Valley boardrooms, but in classrooms, hackathons, and community-driven innovation.
Frequently Asked Questions
What makes ‘social intelligence’ different from other AI frameworks?
Unlike traditional AI models focused on efficiency or accuracy alone, social intelligence prioritizes human-AI collaboration in complex social environments. It integrates context, emotion, and cultural nuance—like how misinformation spreads on social media or how disaster survivors use mobile photos to report damage. The framework doesn’t just detect patterns; it learns from human feedback loops, making systems more adaptive and trustworthy over time.
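As an illustration of such a feedback loop, consider the generic online-learning sketch below; it is not the framework’s published method, and the features, labels, and learning rate are assumptions. Each human verdict nudges a simple flagging model toward the moderators’ judgment.

```python
# Minimal sketch of a human-in-the-loop feedback update: a logistic
# "flag" score over post features, nudged toward each human verdict.
# The feature meanings and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros(3)  # assumed features, e.g. [urgency words, links, image mismatch]
LEARNING_RATE = 0.1

def flag_probability(features: np.ndarray) -> float:
    """Current model confidence that a post should be flagged."""
    return 1.0 / (1.0 + np.exp(-weights @ features))

def incorporate_feedback(features: np.ndarray, human_label: int) -> None:
    """Online logistic-regression step: move the score toward the human verdict."""
    global weights
    error = human_label - flag_probability(features)  # positive if model under-flagged
    weights += LEARNING_RATE * error * features

# Simulated loop: a moderator reviews each decision and supplies the verdict.
for _ in range(100):
    x = rng.normal(size=3)
    human_says_harmful = int(x[0] + x[1] > 0)  # stand-in for a human judgment
    incorporate_feedback(x, human_says_harmful)

print(weights)  # the model has adapted to the moderators' decisions
```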
How is Boston College’s HCAD program changing engineering education?
HCAD requires students to map the societal impact of every algorithm they build—asking who benefits, who gets left out, and what unintended consequences might arise. Instead of just optimizing for speed or precision, students evaluate fairness, accessibility, and psychological safety. Early results show a 40% increase in ethical design decisions compared to traditional curricula, suggesting this model could become the new standard for tech education.
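To picture what such an assignment might ask for (my illustrative sketch, not Boston College’s published material), a student could be required to report a group-fairness metric such as demographic parity difference before deploying a model:

```python
# Illustrative HCAD-style check: demographic parity difference between
# two groups' positive-prediction rates. A gap near 0 is more equitable;
# the example data and any pass/fail threshold are assumptions.
import numpy as np

def demographic_parity_gap(preds: np.ndarray, group: np.ndarray) -> float:
    """|P(pred=1 | group A) - P(pred=1 | group B)| for binary predictions."""
    rate_a = preds[group == 0].mean()
    rate_b = preds[group == 1].mean()
    return abs(rate_a - rate_b)

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model decisions
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # membership in two groups
print(f"parity gap: {demographic_parity_gap(preds, group):.2f}")
# flag the model for redesign if the gap exceeds a chosen threshold
```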
Why are so many companies still stuck in AI experimentation?
Many organizations lack clear ethical guardrails, leadership alignment, or employee buy-in. A 2025 McKinsey report found that only 34% of companies have formal AI governance policies. Without cross-functional teams and measurable goals tied to human outcomes, not just revenue, projects stall. The shift from pilot to scale requires cultural change, not just code.
What role do peer networks play in successful AI adoption?
The World Economic Forum found that top-down AI mandates fail. Instead, organizations succeed when they identify ‘super users’—employees already experimenting with AI tools—and create peer coaching circles. These informal networks spread best practices organically, reduce fear, and uncover hidden use cases. One manufacturer saw a 70% faster rollout of AI tools by empowering five frontline workers to train their colleagues, rather than hiring external consultants.
How is Bangladesh’s AI Readiness Report influencing global policy?
As a lower-income country with rapid digital growth, Bangladesh’s report is a rare model of inclusive AI governance. It prioritizes multilingual interfaces, offline functionality, and community input over proprietary tech. UNESCO is now using it as a template for other developing nations, showing that human-centered AI isn’t a luxury for wealthy nations—it’s a necessity for equitable progress.
Can AI really help travelers feel a sense of belonging?
Yes—through context-aware design. At Virginia Tech’s CodeFest, teams built tools that didn’t just suggest hotels, but recommended local cultural events based on a traveler’s interests, offered mental wellness check-ins during long flights, and connected solo travelers with verified local hosts. One prototype even adjusted room lighting and temperature based on stress signals from wearable devices. The goal: tech that doesn’t just serve, but connects.
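As a sketch of how that last prototype might work, a simple rule layer can map a wearable’s stress score to calmer room settings. The thresholds, settings, and stress scale below are assumptions for illustration, not the CodeFest team’s code.

```python
# Illustrative context-aware room adjustment: map a wearable stress score
# (assumed 0-100 scale) to calmer lighting and temperature.
from dataclasses import dataclass

@dataclass
class RoomSettings:
    brightness_pct: int   # 0-100
    color_temp_k: int     # warmer light = lower Kelvin
    thermostat_c: float

def settings_for_stress(stress_score: float) -> RoomSettings:
    if stress_score >= 70:   # elevated stress: dim, warm, slightly cooler
        return RoomSettings(brightness_pct=30, color_temp_k=2700, thermostat_c=20.5)
    if stress_score >= 40:   # moderate stress: soften the defaults
        return RoomSettings(brightness_pct=50, color_temp_k=3000, thermostat_c=21.0)
    return RoomSettings(brightness_pct=80, color_temp_k=4000, thermostat_c=21.5)

print(settings_for_stress(82))  # e.g. a reading pulled from a wearable's API
```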