Introduction
A growing number of professionals now rely on artificial intelligence (AI) tools in the workplace, and AI literacy has become an in-demand competency across functions in global organizations (Masterson 2023), used to automate operations for greater efficiencies and to drive innovation processes (Duke University Fuqua School of Business 2024). This has increased the need for academic institutions to equip students with AI tool knowledge (Masterson 2023), especially for product development (Piller et al. 2024). One key approach for product development is design thinking.

The integration of experiential learning and project-based learning has gained prominence in higher education for its ability to enhance student engagement and foster real-world, practical skills. These approaches are especially beneficial in preparing students for the complexities of modern work environments, where adaptability, critical thinking, and creativity are highly valued. This paper uses experiential learning through project-based learning as a theoretical framework to describe design thinking activities in the classroom while integrating the use of generative AI. We begin by providing an overview of the extant literature on experiential learning, project-based learning, design thinking, and generative AI in education. We then propose a framework for using AI tools while teaching a user-centric approach to creating ideas for new products and businesses and share feedback from piloting the framework with students. We conclude by sharing challenges, limitations, and future directions for teaching with generative AI.
Literature Review
In this section, we will briefly examine the extant literature on experiential learning, project-based learning, design thinking, and generative AI in education. These interconnected pedagogical approaches have gained prominence for fostering active engagement, collaboration, and creative thinking in both physical and digital classrooms. The review will highlight how these methods are being implemented to support deeper learning and align education with the skills needed in today’s innovation-driven environment.
Experiential learning
Kolb (1984) conceptualized experiential learning as a process in which knowledge is created through the transformation of experience. Kolb's model outlines four key stages: concrete experience, reflective observation, abstract conceptualization, and active experimentation. These stages emphasize the importance of reflection and hands-on learning in fostering deep comprehension and practical skill development (Kozlinska et al. 2023). A significant advantage of experiential learning is that it engages students with real-world tasks, including examining problems and devising solutions, while facilitating an understanding of theoretical concepts and development of essential soft skills, such as communication, leadership, problem-solving, teamwork, and time management (Garvin and Ramsier 2003; Sachdeva and Latesh 2023). Experiential learning also encourages students to think critically by reflecting on their experiences, assessing outcomes, and adapting their strategies accordingly (Sachdeva and Latesh 2023).
Experiential learning can improve student retention and enhance comprehension by allowing students to apply knowledge in practical, real-world contexts (Practera 2022). This aligns with Bloom’s taxonomy, which emphasizes the importance of higher-order cognitive skills, such as analysis, synthesis, and evaluation (Krathwohl 2002). By integrating real-world applications into the learning process, experiential learning helps students develop these higher-order skills, which are crucial for their future professional success.
Project-based learning
Both experiential learning and project-based learning focus on active, student-centered learning, where students take control of their educational journey by solving real-world problems (Bhattacharyya et al. 2018). This sense of autonomy encourages students to engage more deeply with the subject matter as they see the immediate relevance of what they are learning. Dewey's educational philosophy, which posits that learning should be a process of living rather than mere preparation for future tasks, is reflected in both experiential learning and project-based learning (Dewey 1997).
Project-based learning distinguishes itself from other types of experiential learning by emphasizing long-term, inquiry-based projects that require students to work collaboratively, solve complex problems, and produce tangible outcomes (Blumenfeld et al. 1991), which aligns it well with a design thinking approach. In project-based learning, students work through structured problems, a process that enhances their ability to evaluate information, make informed decisions, and adapt their solutions as conditions change (Lavado-Anguera et al. 2024), further underscoring that alignment. This iterative process, similar to the design thinking approach, in which students plan, act, reflect, and adjust, not only helps them grasp theoretical concepts but also prepares them to apply these concepts in practical settings (Blumenfeld et al. 1991). In a project-based learning environment, students are active participants who take ownership of their learning by engaging in collaborative problem-solving and creative thinking (Sachdeva and Latesh 2023).
Design thinking
Design thinking has been described as an approach or style of thinking, and alternately as the examination of the cognitive processes that are expressed in design action (Cross 2007). David Kelley, a former Stanford professor who in 1991 co-founded IDEO, a global design innovation consulting firm, is credited with coining the term “design thinking” (Camacho 2016). Design thinking is the ability to combine empathy, rationality, and creativity to analyze and develop solutions for a given context (Wrigley and Straker 2017).
The literature debates the origins of design thinking, but most agree it is an amalgam of theories, concepts, and practices from diverse disciplines (Chasanidou et al. 2015; Curedale 2013; Dam and Siang 2022; Szczepanska 2017). Early evidence of design thinking methods can be gleaned starting in the late 1940s within the architecture and engineering fields, which were grappling with the rapidly changing environment of the post-war era (Brown and Katz 2011; Dam and Siang 2022). World War II furthered strategic design thinking and deepened its influence on global applications of management, production, and industrial design (Brown and Katz 2011; Dam and Siang 2022). This era necessitated the development of innovative approaches to solve increasingly complex global problems (Brown and Katz 2011; Dam and Siang 2022).
Various design thinking programs have emerged from industry and academia, including versions developed by Google, IBM, SAP, Babson College, Massachusetts Institute of Technology, and the University of Virginia Darden School (Chasanidou et al. 2015; Curedale 2013; Dam and Siang 2022; Szczepanska 2017). Two seminal versions emerged from Stanford University and IDEO (Camacho 2016). Each design thinking method’s process steps vary, but all agree on the primacy of developing empathy towards and understanding of the customer or end-user (Chasanidou et al. 2015; Curedale 2013; Dam and Siang 2022; Szczepanska 2017).
Generative artificial intelligence in education
While academia is still divided about the use of generative AI tools in the classroom (Dwivedi et al. 2023), initial research indicates that generative AI tools can improve and personalize learning and teaching processes (Chan and Hu 2023; Ciampa et al. 2023; Dai et al. 2023). To prepare students for the workforce, it is important to pair the use of generative AI tools with critical thinking in the classroom (Ogurlu and Mossholder 2023).
Most students report using more than two AI tools to support their studies, especially for researching, exploring ideas, and checking grammar (Digital Education Council 2024). Currently, ChatGPT is the most widely used tool (66%), followed by Grammarly and Microsoft Copilot (25%); additionally, students mentioned Claude AI, Blackbox, DeepL, and Canva image generator as frequently used tools (Digital Education Council 2024).
A Framework for Applying AI When Teaching Design Thinking
In this section, we propose a framework that uses the context of project-based learning to combine the strengths of human-centered design thinking with the capabilities of generative AI tools, creating a learning environment that fosters creativity, critical thinking, and technical proficiency. We propose a model that applies generative AI tools and teaches students to develop and practice critical thinking as part of a class incorporating design thinking. By merging design thinking with generative AI, students engage in dynamic problem-solving, experience iterative learning cycles, and develop both the cognitive and technological competencies essential in today’s professional landscape (IDEO U, n.d.). Our framework combines the Stanford d.school design thinking method, with its stages of empathize, define, ideate, prototype, and test, with generative AI tools at each stage, allowing students to accelerate the development of a product or service and complete a project within the semester time frame. Applying AI and other digital tools lets students experience both their possibilities and their limitations.
Enhancing the empathy stage with AI
In traditional design thinking, students identify and interview stakeholders to gather empathy and insights into user needs and behaviors. While this process fosters research skills and analysis, it is often time-consuming and constrained by access to real-world stakeholders. Generative AI tools, such as language models and chatbots, can simulate stakeholder interactions and generate diverse stakeholder personas. These tools allow students to efficiently create empathy maps to gain insights into user experiences (Brown and Katz 2011). Our hybrid model augments traditional methods by enabling students to broaden their scope of empathy-building activities while maintaining critical reflection on real-world user needs (Cross 2023). We propose that students use generative AI tools to develop interview questions and to simulate user feedback when access to stakeholders for interviews is limited.
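To make this concrete, the sketch below shows one way such a persona simulation might be scripted. It assumes access to the OpenAI Python SDK and an API key, and the commuter persona, product context, and questions are invented for illustration rather than taken from any course; the same prompts could just as easily be pasted into a chat interface.

# Sketch: simulating a stakeholder interview with a language model.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the persona and questions are hypothetical examples.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are 'Maya', a 34-year-old urban commuter who cycles to work, "
    "values sustainability, and is skeptical of subscription services. "
    "Answer interview questions in the first person and stay in character."
)

questions = [
    "Walk me through your typical morning commute.",
    "What frustrates you most about existing bike-sharing apps?",
]

for q in questions:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": q},
        ],
    )
    print(f"Q: {q}\nA: {response.choices[0].message.content}\n")

Simulated answers of this kind are treated as starting hypotheses to be checked against real interviews, not as substitutes for them.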
Streamlining data synthesis and analysis
The define phase of design thinking involves gathering qualitative data, transcribing interviews, and categorizing insights into themes. This essential but labor-intensive work often adds a layer of time-management complexity for students. AI tools, particularly natural language processing applications, assist in automating the transcription and initial categorization of qualitative data. These tools can generate thematic matrices that students refine and analyze, allowing them to focus on interpreting insights rather than performing repetitive tasks (Magistretti et al. 2023). Our hybrid model enhances critical thinking while benefiting from AI's ability to streamline data processing.
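As an illustration, the sketch below asks a language model to propose an initial thematic grouping of interview notes. It again assumes the OpenAI Python SDK; the excerpts are fabricated placeholders, and the resulting themes are only a first pass for students to audit against their own transcripts.

# Sketch: requesting an initial thematic matrix from interview notes.
# Assumes the OpenAI Python SDK; the excerpts below are invented placeholders.
from openai import OpenAI

client = OpenAI()

notes = [
    "I never know if the charging station will actually be free when I arrive.",
    "The app keeps logging me out, so I gave up and just use the kiosk.",
    "Pricing feels unpredictable; I can't budget for it month to month.",
]

prompt = (
    "Group the following interview excerpts into two to four themes. For each "
    "theme, give a short label, the supporting excerpts, and one open question "
    "the team should investigate further.\n\n"
    + "\n".join(f"- {n}" for n in notes)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

Keeping the interpretive work, and the accountability for it, with the students is the point of the subsequent refinement step.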
AI-assisted ideation process
In the ideate phase, design thinking encourages the development of "How Might We" (HMW) questions to frame possible solutions. AI tools can rapidly generate numerous HMW questions and provide students with a broad range of ideation possibilities. This accelerates the brainstorming process, allowing for frequent iterations and the exploration of innovative solutions (Gerken et al. 2022). AI-enhanced brainstorming can lead to more diverse and creative outcomes than traditional methods, as students are prompted to consider ideas they might not have generated independently.
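A minimal sketch of this kind of AI-assisted HMW generation, assuming the OpenAI Python SDK and using a made-up problem statement, might look like the following.

# Sketch: generating "How Might We" questions from a problem statement.
# Assumes the OpenAI Python SDK; the problem statement is a made-up example.
from openai import OpenAI

client = OpenAI()

problem = (
    "Busy graduate students skip breakfast because healthy options near "
    "campus are slow, expensive, or sold out by 9 a.m."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Rewrite this problem statement as ten distinct 'How Might We' "
            "questions, varying the scope from narrow to broad: " + problem
        ),
    }],
)
print(response.choices[0].message.content)

In our model, such generated questions supplement rather than replace the students’ own framing of the problem.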
Prototyping with AI support
In the prototype phase, students create physical or digital models to test their ideas. In our hybrid model, AI tools assist with rapid creation of digital prototypes. AI-driven design software can generate multiple iterations of a prototype based on student inputs, enabling quicker feedback and adjustments. This rapid prototyping capability aligns with agile methodologies, offering students practical experience in iterating solutions efficiently (Razzouk and Shute 2012). Further, generative AI tools for text, images, and video, such as ChatGPT, Gemini, Claude, and Midjourney, can help develop and iterate product prototypes.
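For instance, an early visual concept can be requested from an image model as sketched below; this assumes the OpenAI Python SDK and its image generation endpoint, and the product described in the prompt is hypothetical.

# Sketch: generating an early-stage visual concept for a digital prototype.
# Assumes the OpenAI Python SDK and its image endpoint; the product concept
# in the prompt is hypothetical.
from openai import OpenAI

client = OpenAI()

concept = (
    "A clean mockup of a mobile app home screen for a campus breakfast "
    "pre-ordering service: pickup countdown, menu carousel, and a "
    "'reserve my meal' button. Flat design, neutral palette."
)

result = client.images.generate(
    model="dall-e-3",
    prompt=concept,
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # link to the generated mockup image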
Testing and iteration using AI feedback
In traditional design thinking, the test phase relies on user feedback to refine and improve prototypes. AI can augment this process by simulating user interactions and providing feedback based on real-world data or simulated scenarios. This allows students to iterate quickly and test various iterations of their prototypes without needing extensive stakeholder engagement each time. AI's ability to accelerate feedback loops fosters deeper engagement with the iterative nature of design thinking.
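One possible sketch of such simulated feedback, again assuming the OpenAI Python SDK and using a hypothetical persona and prototype description, is shown below; as the pilot later demonstrated, simulated reactions are best treated as a rehearsal for, not a replacement of, feedback from real users.

# Sketch: simulating early user feedback on a prototype description.
# Assumes the OpenAI Python SDK; the persona and prototype are hypothetical.
from openai import OpenAI

client = OpenAI()

prototype = (
    "A mobile app that lets students reserve a healthy breakfast box the "
    "night before and pick it up from a locker near the library by 8 a.m."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a price-sensitive first-year student who commutes "
                "by bus. Give blunt, specific feedback."
            ),
        },
        {
            "role": "user",
            "content": (
                "React to this prototype: what would stop you from using it, "
                "and what one change would matter most?\n\n" + prototype
            ),
        },
    ],
)
print(response.choices[0].message.content)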
Piloting the Framework
The second author piloted our model in a graduate capstone course where students worked individually on the development of a new consumer or Business-to-Business (B2B) product using the Stanford d.school design thinking approach. Each phase of design thinking, from empathy to prototyping, was structured with corresponding deliverables that built upon one another, reinforcing iterative learning and reflective practice. The course design emphasized the real-world nonlinearity of innovation, allowing students to circle back, refine assumptions, and shift direction based on emerging insights from data or peer input.
The cohort consisted of eight graduate students from disciplines including marketing, operations management, fashion, software engineering, and entrepreneurship. Many brought international perspectives, which enriched the feedback loops. Students noted in their reflections that the diversity of peer input enhanced their awareness of cultural context, especially for globally relevant products. “I hadn’t realized how different privacy expectations are in Europe versus the U.S. until a classmate pointed it out,” one student wrote.1
To replicate the kind of continuous feedback loop common in professional product development environments, the course embedded multiple forms of support. Weekly peer critiques were integrated into class time and tied directly to design thinking stage milestones. In these sessions, students used structured critique frameworks, such as “I like / I wish / What if,” to provide balanced and constructive feedback. Assignments were scaffolded so that each deliverable served as a building block for the next, fostering a rhythm of learning, reflection, and revision. In addition, instructor feedback was provided both in written form and through real-time consultation sessions modeled after industry-style design reviews. This recursive model ensured that learning was not linear or isolated but rather dynamic and iterative, mirroring innovation cycles in practice.
Students were explicitly encouraged to explore a variety of generative AI tools in each phase of the product development process. Tools were selected based on the specific cognitive tasks at each stage; for example, ChatGPT and Claude were often used for problem framing and market research summaries, Gemini and Perplexity were used for generating trend insights, and tools like Midjourney and Adobe Firefly were used for early-stage prototyping and visual concept development. The integration of AI was not merely supplemental; it was central to the pedagogical design. The use of these tools was governed by a robust AI policy outlined in the course syllabus. These guidelines, which were discussed in depth during the first week, focused on fostering responsible and transparent use of AI. Key guideline elements included not inputting personally identifiable or proprietary information into open models, disclosing all AI-generated content, verifying any factual claims produced by AI tools, and maintaining transparency around sources. Students were also coached on the differences between open and closed LLMs and the tradeoffs between functionality, privacy, and cost. This ethical framing provided students with a foundation for both academic integrity and workplace readiness.
During the ideate stage of the course, students participated in a multi-step comparative brainstorming exercise designed to evaluate the value and limitations of both human- and AI-generated ideas. Prior to this activity, students had conducted user research interviews and built empathy maps to articulate user pain points. They had also used the Business Model Canvas (Osterwalder and Pigneur 2013) to identify market gaps and define their value proposition hypotheses.
In the first step of the exercise, students crafted AI prompts tailored to their projects, describing their users, challenges, and goals. These prompts were tested using AI tools of their choosing. Students documented the five most promising ideas generated by the AI and kept them confidential during class to avoid influencing peer brainstorming.
In the second step, students presented their prompts to classmates, who then generated human ideas in real-time during a live sticky-note session. These sticky notes were collected, and students selected the top five ideas from peer contributions for further evaluation.
The third step was a structured analysis comparing the AI-generated and peer-generated ideas, from which the top three ranked ideas were selected. Students used a worksheet that included rating scales for feasibility (can we build it?), desirability (do users want it?), and viability (is it sustainable?). They also indicated the source of each idea and justified their rankings. Many noted that AI ideas were often more abstract or feature-rich but less attuned to user context. One student reflected, “The comparison shows me that while AI can produce innovative ideas, the insights from class discussions are often more in tune with real customer needs, resulting in stronger, more relevant solutions. Also worth mentioning that 1 of the 3 top ideas come from AI.”
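For instructors who wish to adapt the exercise, the worksheet logic is easy to tabulate; the sketch below uses invented ideas and scores, and the unweighted average is our illustrative aggregation rather than necessarily the scheme students applied when justifying their rankings.

# Sketch: tabulating the feasibility/desirability/viability worksheet and
# ranking ideas by average score (idea labels, sources, and scores are
# invented placeholders; the unweighted average is an illustrative choice).
ideas = [
    {"idea": "Locker pickup network", "source": "peer", "feasibility": 4, "desirability": 5, "viability": 3},
    {"idea": "AI meal-plan chatbot", "source": "AI", "feasibility": 5, "desirability": 3, "viability": 4},
    {"idea": "Pay-what-you-can cart", "source": "peer", "feasibility": 3, "desirability": 4, "viability": 2},
]

for entry in ideas:
    entry["score"] = round(
        (entry["feasibility"] + entry["desirability"] + entry["viability"]) / 3, 2
    )

# Print the top three ideas, highest average score first, with their source.
for entry in sorted(ideas, key=lambda e: e["score"], reverse=True)[:3]:
    print(f"{entry['idea']} ({entry['source']}): {entry['score']}")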
Finally, in the fourth step, students completed a reflection survey where they articulated which tools they used, how their prompts evolved, and how their strategy changed after comparing AI and human ideas. Students were also encouraged to reflect on the limitations of AI, the quality of peer feedback, and any surprises from the process. Student commentary ranged from pro-AI: “Interesting exercise. In the near future, people will use generative AI more and more often, so it was exciting to explore its usefulness” to pro-hybrid: “AI is helpful for the feasibility and viability options, but the desirability score comes from the actual people” to con-AI: “The ideas were not THAT original! It is such a reminder that we should use human ideas first and use AI to supplement.”
Students frequently used multiple AI tools throughout the course and adapted their tool choices depending on the needs of the phase. For example, while ChatGPT was used broadly, students found that Gemini was more useful for organizing research synthesis, and Midjourney was preferred for fast visualization. Some used GrammarlyGO or Notion AI for editing product pitch decks. This adaptive experimentation was encouraged, and students who tested multiple tools were often the ones who expressed the deepest insights about tool limitations.
Throughout the course, peer-generated insights were consistently cited as more emotionally resonant and context-specific. One student reflected, “The process of talking things out really helps spur new ideas and also the enthusiasm of everyone keeps me motivated…when I ideate with/for classmates projects, it not only helps them but helps me think in different ways about my own project.” Another student wrote, “Unsure if gen AI is capable of coming up with TRULY innovative and brand new ideas; trained on datasets of information that already exists.”
To assess student learning, the instructor employed a mixed-methods evaluation approach. Students submitted written reflections, completed the AI-human ideation worksheet, and delivered final project presentations. Evaluation rubrics prioritized process over product, with particular emphasis on iteration, clarity of thought, and ethical tool usage. This assessment approach was rooted in constructivist learning theory (Kolb and Kolb 2022), encouraging students to reflect on their own learning journeys while building critical thinking, ethical reasoning, and creative confidence (Brookfield 2012). Students were graded not only on what they built, but also on how they got there, what they learned in the process, and how responsibly they used emerging technologies to enhance human-centered innovation.
Limitations and Challenges
During the pilot, students enjoyed experimenting with different AI tools and gained proficiency in utilizing them, mimicking how these tools are applied in the workplace. Nonetheless, students also noted the current limitations of open-access AI tools. While the AI tools provided generic, and often creative, ideas, the answers lacked user-specific examples and industry expertise. Students voiced concerns about intellectual property protection, considering that uploaded data might be used to train large language models (LLMs) and further develop generative AI tools (Lawton 2024). Closed AI tools offer more specific results and ensure better data protection. However, access to proprietary AI systems remains limited due to costs (Lawton 2024). For some stages of our proposed model, the tendency of generative AI tools to hallucinate (IBM, n.d.) and provide incorrect information, including non-existent references, poses additional challenges for students and their product designs. Finally, results are better when students have at least a basic level of AI literacy and practice in prompting the AI. This allows them to define prompts clearly and support queries with customer data or other relevant context. Finding the right balance of speed and efficacy for using AI tools may require more training (and time) for instructors and students.
In addition to technical constraints, the integration of generative AI tools in the classroom raises several ethical, pedagogical, and environmental concerns that must be addressed to ensure responsible implementation. One of the most pressing issues is academic integrity. The ease with which students can generate polished text using tools like ChatGPT has led to concerns about authorship, originality, and the erosion of critical thinking skills (Dwivedi et al. 2023). This blurring of intellectual ownership can undermine the learning process if not carefully guided through explicit instruction and transparent usage policies. Further, these tools are trained on vast datasets that often contain biased or culturally narrow representations, which can result in outputs that reinforce existing societal inequities or exclude marginalized perspectives (Gleason 2022). Students in the pilot noted that some AI-generated ideas lacked cultural sensitivity or nuanced understanding of user contexts, underscoring the need for critical interrogation of content.
Another challenge lies in the reliability and validity of generative AI outputs, especially when attempting to use AI to simulate real-world testing and iteration cycles. One of the course’s pedagogical goals was to leverage AI not only as an ideation tool, but as a surrogate for early-stage user feedback. The intent was for students to refine prompts and iterate on product concepts using AI-generated critique, simulating the types of feedback one might receive from consumers, experts, or test markets. However, this goal was complicated by the phenomenon of AI hallucinations, which are confident yet inaccurate or entirely fabricated outputs, including false claims and citations (IBM, n.d.). Students reported instances where tools such as ChatGPT or Gemini confidently provided market statistics, user insights, or legal information that were unverifiable or demonstrably false.
Concerns about the unreliability of AI-generated content led to class discussions around epistemic vigilance: the necessity of cross-referencing, validating sources, and cultivating skepticism toward polished but unverified AI content. While these challenges created valuable teachable moments, they also impeded our ability to fully simulate testing-and-feedback loops through AI alone. The inability of generative AI tools to reliably replicate user empathy or provide nuanced critical feedback limited their effectiveness as proxies for human testers. Several students ultimately turned to peers for more reliable feedback, underscoring the current limitations of AI as an iterative development tool. As one student reflected, “[AI] lacked depth and practical understanding of specific customer needs.”
In addition to epistemic challenges, environmental sustainability emerged as an unexpected area of concern among students. During a reflective discussion, multiple students raised the issue of AI’s carbon footprint, particularly in relation to large-scale model training and deployment. Research shows that training a single large language model can emit as much carbon dioxide as five cars over their lifetimes (Strubell et al. 2019). Students expressed ambivalence about using energy-intensive AI tools in a course where sustainable development, including discussions around the UN Sustainable Development Goals, was embedded as a consideration. Privacy and data ownership concerns also surfaced repeatedly. Students were uneasy about inputting proprietary product ideas or user personas into open-access platforms, many of which reserve the right to collect and use user inputs to further train commercial models.
These overlapping concerns (hallucinations, environmental impact, and data privacy) reinforce the necessity of embedding digital ethics, sustainability literacy, and privacy awareness into AI-enhanced pedagogical frameworks. Students should be taught not only how to use AI tools effectively, but also how to critically assess their limitations, ask difficult questions about their societal implications, and make informed choices about when, why, and whether to use them at all.
Benefits of the hybrid approach
Integrating AI tools in the design thinking process can enhance both the efficiency and effectiveness of learning experiences. AI technologies streamline time-intensive tasks such as stakeholder identification, data transcription, and prototype generation. By automating these processes, students can dedicate more time to higher-level analytical thinking and creative problem-solving, which fosters deeper engagement with the material (Cross 2023). This efficiency allows educators to facilitate more dynamic learning environments where students can focus on refining their critical thinking skills rather than being bogged down by logistical challenges.
In addition, AI integration makes the hybrid model highly scalable across various disciplines and class sizes, offering solutions for handling larger datasets and simulating complex user interactions. This scalability ensures that our model can be adapted for a wide range of educational settings, providing a consistent yet flexible learning framework. The use of AI also supports skill development, enabling students to cultivate both creative problem-solving abilities and technical competencies in AI technologies, which are increasingly essential in the modern workforce (Razzouk and Shute 2012). Further, the ability of AI to generate ideas, prototypes, and feedback rapidly fosters an innovative learning environment. This dynamic interaction allows students to experiment with diverse solutions, iterate on their designs fluidly, and engage in deeper innovation processes.
Our hybrid model that integrates design thinking with AI potentially presents a transformative approach to classroom education. By leveraging AI technologies alongside traditional design thinking methodologies, educators can offer students a more efficient, scalable, and engaging learning experience. This model not only retains the essential human-centered elements of design thinking but also enhances the overall process by incorporating AI’s ability to streamline tasks and provide dynamic learning opportunities.
Future Research
Building on the insights from our initial pilot, we are planning a second implementation for Fall 2025 with a globally diverse cohort of twenty to twenty-five first-year MBA students studying Luxury Brand Management. During the first pilot, students enthusiastically experimented with a range of generative AI tools and developed confidence in using them to mimic real-world applications. However, several limitations emerged that will directly inform the redesign of the next pilot. Students reported that while open-access AI tools generated creative outputs, the content often lacked user-specific detail, cultural nuance, and deep industry insight, which are critical elements in luxury brand strategy. Concerns around data privacy, intellectual property protection, and the reliability of AI-generated content were raised, especially when tools confidently produced fabricated statistics, legal claims, or user feedback without verification (Lawton 2024; IBM, n.d.). These hallucinations not only disrupted learning but also challenged our pedagogical goal of simulating real-time feedback cycles using AI as a proxy for consumers or test markets.
Based on this feedback, the Fall 2025 pilot will include targeted training modules in AI literacy for both students and faculty, covering prompt engineering, data validation, and critical assessment of AI outputs. We are also planning to integrate structured peer-to-peer feedback checkpoints to supplement AI-generated critique and better simulate the iterative nature of product development in the luxury sector. Unfortunately, closed-system AI platforms with stronger data protection and greater contextual specificity will not be available for the second pilot. To foster ethical awareness, the curriculum will more deeply and explicitly embed discussions of AI’s environmental impact, academic integrity, and algorithmic bias. The second pilot iteration will also include a learning objective that requires students to evaluate not only their products but the AI-augmented design thinking process through the UN Sustainable Development Goals. Students will be encouraged to more critically evaluate when, why, and how to use AI, recognizing that effective innovation also requires selectivity, reflection, and human judgment.
These enhancements will not only respond to technical and ethical concerns but also aim to elevate the value of the learning experience by encouraging deeper engagement with both the capabilities and limitations of AI in a global, luxury-focused context. The outcomes of this second pilot will provide a more nuanced understanding of how generative AI can be responsibly and effectively integrated into entrepreneurship and innovation courses across diverse cultural and industry-specific environments.