Introduction
When the Stockton-Darlington Railway opened in 1825, there was growing fear of what many regarded as an “unnatural” means of transportation (Tees Valley Museums n.d.). Such technology, allowing humans to travel at speeds of up to 20 miles per hour, was thought to pose a serious health risk to the human body, one that would cause death from asphyxiation or, in later accounts, “railway sickness” (Welle 2023). Some farmers feared that the noisy trains would stop hens from laying eggs or even turn milk sour (Tees Valley Museums n.d.). Such fear, however, was not completely unfounded, as early petitions against the opening of the world’s first public railway noted several concerns regarding the impacts of this new technology on existing industries. The stagecoach and canal industries, for example, stood to lose their business in transporting passengers, coal, flour and other goods across the English countryside (Tees Valley Museums n.d.). There were also class-based concerns over how track lines were going to be built over privately owned estates, or how the development of a more accessible means of transportation would allow members of the lower classes to travel more freely and so disrupt the social order and threaten moral standards (Tees Valley Museums n.d.).
As in the heyday of the industrial age in Western Europe during the 18th and 19th centuries, the social world across the globe today is undergoing a technological transformation in what has come to be described as the digital revolution or the digital age. In today’s digital society, particularly in a “post-pandemic” world, everyday life is increasingly mediated through digital and communication technologies, and ongoing developments in mobile technologies ensure that individuals can access the internet and the World Wide Web at any time and location. Everyday activities are becoming more integrated with digital technologies, and everything from banking to dating either begins or exists entirely within the digital realm. It is estimated that individuals between the ages of 16 and 64 now spend an average of 6–7 hours a day, almost 50 percent of their waking life, in front of a screen (Binns 2023).
Understandably, as with fears of new technologies in the past, these new developments, and the increasing migration of the social world into the realm of the digital, are being met with apprehension. How much is too much screen time? What corporeal dimensions of human activity and social life are being lost through digital interactions? Do digital technologies threaten to deskill large sections of the labor force, if not outright obliterate whole industries? And is there such a thing as “digital sickness”? Variations of these fears express themselves within the education sector, as digital technologies such as generative AI pose what institutions of higher learning describe as a moment of “crisis.”
So far, much of the concern around the use of generative AI has tended to focus on issues of ethics, with particular attention to student evaluation and plagiarism. According to a recent report published by Wiley (2024) that surveyed both instructors and students at institutions of higher education, most instructors (68 percent) believe that generative AI will have a negative or significantly negative impact on academic integrity in the coming years (Coffey 2024). Tellingly, of the over 2,000 students surveyed, almost half said that it is now easier to cheat due to the increased use of generative AI (Coffey 2024). If software such as ChatGPT can be used to generate novel, untraceable, and intelligent-reading text on any given topic, what then is the future of the term paper or the art of writing?
However, for educators committed to critical and engaged learning practices, a crisis in institutions of higher education has been a consistent feature of the modern, marketized university. Ingrained pedagogical practices premised on hierarchical relations between instructors and students, as well as practical impediments to more engaged learning such as heavy course loads for instructors, large classes, and the resultant generalized forms of course evaluation, have all contributed to perpetuating what Paulo Freire (2000) described as the “banking system of education.” In a banking system of education, students are treated as passive consumers of information, with the teacher adopting the sole mantle of expertise (Freire 2000). The expert in this form of the teacher-student relation is tasked with filling the students with the contents of their narration, where knowledge is treated as a “gift” which is simply memorized and passively retained. In its political dimensions, Freire (2000, 73) argues, this model of education has as its intention not the cultivation of full and critical human beings, but of beings that are manageable, passive, and less likely to intervene in the world of which they are a part.
Our current moment thus comprises a dual crisis, in which digital technologies such as generative AI are making headway into a field of higher education that was already struggling to critically engage and inspire students in ways that break out of what Freire (2000, 80) calls the “mythos of learning.” This current mythos and general absence of engaged learning practice cannot be ignored when attempting to account for both the extent and the form of the impact that generative AI is having on institutions of higher education.
Yet it is precisely at times of crisis that there can be room for growth and progress. This paper argues that our current moment of crisis in the digital age offers a valuable opportunity for institutions of higher education to fundamentally reimagine pedagogical practices in the direction of more critical and engaged learning. The question, however, is: what constructive role can digital technologies, and in particular generative AI, play in this potential moment of transformation?
In what follows, this paper first outlines some practical strategies, based on the author’s own teaching experiences, for how generative AI can be leveraged to counter the predominating banking model of education and to offer students a more engaged learning experience. The following subsection then outlines some of the existing limitations in digital technologies such as generative AI that create barriers to engaged learning practices and the fostering of critical thinking in the classroom. These limitations include issues of safety and surveillance when utilizing such platforms, the underlying gendered and racial biases informing the currently available generative AI models, and fundamental limitations in generative AI’s capacity to meet the embodied, affective, communal, and lateral (as opposed to vertical) dialogical dimensions that accompany the practice of engaged learning in the service of raising critical consciousness (hooks 2014). This paper concludes that generative AI, like the broad variety of other digital technologies already integrated into institutions of higher education, can serve as an effective tool when wielded by an effective instructor in the service of engaged learning. For it to be utilized to this end, however, fundamental barriers with respect to the form and functions of the tool itself, in addition to predominant pedagogical practices in institutions of higher education, will need to be addressed.
Leveraging digital resources for engaged learning
In an introductory social science course I taught, we prefaced our overview of the final writing assignment with a discussion of generative AI. To my surprise, some of the students openly admitted to seeing no problem with utilizing a generative AI tool such as ChatGPT to aid them in writing their paper. When I questioned them further, their truer opinions of writing in our current age became clear, captured in one student’s argument that the emergence of generative AI should be treated no differently than the emergence of the calculator: why should we work out mathematical equations if a calculator can do it instantly? Why should we bother writing anymore if generative AI can write for us? Given the technological developments at hand, there is a sense of the impracticality of certain skill-building exercises such as writing. This sentiment is evident not only with respect to assignments such as essays or term papers but also in the widespread use of tools such as ChatGPT in writing emails and even text messages (Rausch 2024).
There are broader systemic issues that are also influencing why students may turn toward generative AI tools to assist them with writing or completing an assignment. In the same report cited above (Wiley 2024), a majority of students noted the role of emerging technologies and of generative AI tools such as ChatGPT in making it much easier for students to cheat than before. When asked why more students may be turning toward “cheating” in the age of generative AI, almost half of the students responded that because education is so expensive, there is an added pressure to pass or attain certain grades (Wiley 2024, 16). Further, 36 percent felt that students are more willing to cheat because it is hard to balance going to school with work or family commitments (Wiley 2024, 16).
In a recent class, I invited a formerly incarcerated individual as a guest speaker to share her experiences in the criminal justice system and of her time being incarcerated. She left a lasting message with the students when she said, “don’t judge a person’s choices if you don’t know their options.” The realities of the pressures facing students today, be it the cost of education or the difficulties many students face in managing school-work-life balance, are all contributing factors in the growing reliance on generative AI tools in higher education. Students are often caught between a rock and a hard place, and if they are unable to feel inspired, or to sense the significance of the course material to either their own lives or the real world, they are more likely to resort to what we would describe as cheating (Wiley 2024, 15).
Short of eliminating tuition fees and providing financial support to students with respect to their cost of living, there are ways that instructors in institutions of higher education can help students develop a greater sense of investment in what, and how, they are learning. Within this context, encroaching digital resources, such as generative AI, can be leveraged in the service of cultivating more engaged learning practices that help counter some of the pedagogical and systemic barriers hindering students from feeling fully engaged.
In my own teaching, I strive to meet students halfway and to engage them in ways that are practical and fun, and that move beyond the predominating institutional pedagogical practices that replicate the banking model described above. In the prototypical lecture, students are treated passively, talked at or lectured to, and expected to behave only as retainers of the information being passed down to them from the expert in the room. There is minimal collaborative engagement, and dialogue is almost never given space. This condition is only being magnified in our current context as the size of the average class, particularly classes directed at first- or second-year students, continues to increase, with larger batches of students from across disciplines grouped together. Class sizes and the spatial organization of lecture halls hinder dialogue and mutual exchange, key dimensions to breaking out of the authorizing, contradictory, and binary teacher-versus-student relation (Freire 2000; hooks 2014).
There are digital tools at our disposal, such as online polls, game-based platforms (Boury 2022), PowerPoint presentations, and audio, video and image resources, all of which have proved invaluable in decentering the gaze from the instructor. For example, one of the practical exercises that has sparked a degree of dialogue in my courses is the use of online polling software. At the beginning of class, students are posed a question which may or may not be relevant to the course material. As they submit their answers using their mobile devices, the results are broadcast live on the screen. By engaging in this simple exercise, students become more aware of each other’s presence. Based on the results of the poll, they get a sense of each other’s interests, potential contributions, and possible shared positions. A practical exercise such as this helps students develop a sense of belonging to a “classroom community” (hooks 2010) and allows them to feel more invested as co-creators of the content produced in the discussion of the results that follows. In this way, such an exercise speaks to how decentering the teacher-student dynamic is not an immediate or simply intellectual affair but an aspired goal that must be reached through practical means and exercises. Creating an engaged and democratic learning experience in the service of critical thinking is a process. At a very basic level, students need to first become aware of each other in the room.
Generative AI too can be leveraged and adapted to contribute to pedagogical practices geared toward engaged learning. For example, in the larger introductory courses I taught the previous semester, I helped develop a group assignment in partnership with a community-based campaign against prison expansion in Ontario, Canada. Students were tasked with working in small groups to develop a visual or audio display that critically engaged with issues related to the practice of incarceration as a means of contributing to community safety. Since not all students have skills in graphic design or audio-digital production, generative AI was left open to them as an option they could rely on to generate their content.
The essence of the assignment—which included critical thinking, engagement with a contemporary “real world” social topic, group and collaborative work, and project management—was not sacrificed but supported through both the organization of the assignment itself and the utilization of a generative AI resource. Such adaptations, no matter how meager, are necessary at a moment when a new generation of students is increasingly immersed in digital technologies and when outdated forms of instruction and evaluation are becoming harder to maintain. This year, drawing inspiration from visual artists who utilize generative AI in their work, students in my course on Violence in Society will work collaboratively in small groups to develop digital collage displays. The images incorporated in their collages will be generated through specific prompts submitted to the generative AI tool and will draw on readings and class discussion from the course.
Limited as we are by the resources and current conditions in which we find ourselves teaching, we can adapt our modes of instruction and evaluation to cultivate spaces that tend toward engaged dialogue. Even if a large lecture hall does not fully allow for interactive discussion between instructor and students, or among the students themselves, the examples above demonstrate how the seeds of engagement can be planted to help students feel invested, heard and engaged in ways that help counter the banking model.
Some of the limitations noted above, such as the batching of students in large classes and the relatively limited degree of engagement between teacher and student, have figured into the marketing of large language models (LLMs) like ChatGPT as “an intelligent question-answering system” (Feuerriegel et al. 2024) that can function as an AI-powered personal tutor or digital teaching assistant. For example, Khanmigo, a GPT-4-powered chatbot, is currently marketed as able to adopt a Socratic approach to learning, one said to be uniquely interactive, able to prompt users with leading questions and to guide students toward the correct answer instead of simply providing it outright (Khan 2024).
The AI-for-education industry argues that existing institutional limitations in the education sector can be remedied through the use of generative AI tools, which will help close the learning gap for students, particularly those from marginalized backgrounds, by offering a more personalized education experience (Gates 2024; Khan 2024). Thus, “AI-powered teaching” (Khanmigo n.d.) is increasingly framed as a redemptive intervention into some of the structural limitations of the education sector. Yet it is precisely in this frontloading of the redemptive role of generative AI, not merely as an ancillary tool but as a stand-in teacher, teaching assistant or tutor, that some of its limitations must be addressed.
The limits of digital resources: The case of generative AI
One of my most memorable and fulfilling moments as an educator came near the end of term, when one of my undergraduate students walked with me after class and, with complete sincerity in his eyes, professed, “I now realize that what I think is good is not necessarily what someone else thinks is good. Thank you.” There is a simplicity to such a realization, but it expresses a degree of openness, courage, trust, and will. To let go of one’s preconceived notions, or to open oneself to the perspectives of others, is a liberating act that transcends the terrain of the conceptual and demands of teachers the cultivation of a space that allows for such exploration. I like to believe that, as an educator, I cultivated a space which allowed my students to express themselves the way that they did.
hooks (2014) draws our attention to the holistic dimensions of learning and unlearning when she expands on Freire’s “dialogic model of education.” For Freire (2000), the basic act of dialogue challenges the vertical teacher-student dichotomy by opening space for discussion and integrating the concrete and lived experiences of the interlocutors involved. hooks (2014), building on feminist theory, adds that the teacher is not only tasked with engaging in the act of dialogue but, if they are to effectively go beyond hierarchical modes of instruction, is equally tasked with creating an atmosphere that is attentive to the mind, body and spirit of the students.
Only a “holistic approach” (hooks 2014, 14), which seeks to create an atmosphere of safety, mutual respect, the demonstration of care and concern, and a willingness to listen and hold space for one another, can help teachers engage with students in critical dialogue. Accordingly, the art of dialogue requires the cultivation of trust, love, mutual respect, care and empathy if the participants involved, both teacher and student, are to move past their pre-existing fears, forms of socialization, and the other walls and barriers that keep them from listening with ears that are clear or seeing with eyes that are open. In this regard, critical thinking is as much a feeling or emotive exercise as it is a thinking exercise.
With these considerations in mind, it is important to temper the marketed capabilities of generative AI as a Socratic or dialogic tool that students can use to compensate for the limitations of existing educational institutions. This tempering is more than warranted, since a chatbot can only ever engage with a student on a conceptual level. In this strictly conceptual or mind-focused sense, the generative AI tool may be able to engage in debate or conversation, or spur questions for the student, but it is unable to address their heart, soul or body.
This limitation need not concern only broader philosophical questions about the potential affective qualities of artificial intelligence; it can also be gleaned from a very basic form of interaction between a human and a generative AI model. In a recent video posted by OpenAI to YouTube introducing the GPT-4o model, one of the marketed selling points was its ability to generate a bedtime story instantaneously. The interlocutors specified both the tone in which the story was delivered and its content. Throughout the demonstration, they interrupted the model whenever they wanted, which was heralded as a significant improvement in this latest model. At every interruption, the AI model’s feminized voice was programmed to demonstrate no disapproval and instead to respond in a welcoming tone.1
This example is illustrative for many reasons, one being that those of us who have ever narrated a bedtime story to children know that the interplay of interruption and storytelling is half of the experience. When I recite a bedtime story to my daughters, I am consistently interrupted, and yet to maintain both the consistency and direction of the story, I have to push back. Through holding my own space and integrity in the storytelling process, I am interested not only in sharing the content of my narration but in imparting to my daughters a valuable lesson: the importance of respect, of cultivating patience, and of learning how to hold space for others while also managing oneself in ways that are less impulsive. Through the exercise of patience and respect, my daughters have a different relation to the content of the story, whether in how they interpret that content or in their basic ability to pay attention to it. In this experience, my daughters and I can appreciate the embodied and affective dimensions of learning and dialogical exchange.
To what extent can a generative AI model such as ChatGPT, a statistical engine for pattern matching, do the same? Can it know how to appropriately halt a discussion, read a student’s emotion, or recognize when it may be warranted to steer the discussion in ways that are momentarily impractical, illogical and uncertain? As noted above, the act of learning, growth and dialogue is premised on more than simply the exchange of words or data, and encompasses, whether acknowledged or not, the affective dimensions of those involved. If we draw from the wisdom of bell hooks, we can appreciate how, in its current form, the valuation of generative AI as a potential teaching assistant or tutor, or as an intelligent and potentially dialogic question-answering system, is a valuation premised on a very masculinist and disembodied model of learning and exchange.
For other embodied reasons, as a racialized educator, I am also aware of the degree to which my basic physical presence in the classroom can help open doors to the cultivation of feelings of safety, confidence and trust, by the very fact that students from diverse backgrounds can see themselves, their families, and their histories reflected in the classroom dynamic as well as in the course material. Representation in education matters, and students from multiple backgrounds and sites of marginalization, whether based on gender, sexuality, race/ethnicity, or ability, may develop different degrees of safety and a healthy sense of challenge depending on both the diversity of the course content and who the teacher-guide in the room is. To be clear, representation alone is not enough to foster a sense of mutuality and safety in the service of critical consciousness raising, but it can play a significant role in helping foster conditions of understanding, dialogue and openness between students and teachers in the service of an engaged learning experience for all.
What does this mean, or look like, for students engaging with an authoritative, presumptively aracial, entity? Though it is safe to describe generative AI models such as GPT-4o as being without race or sex—they have no visible characteristics that would allow society to categorize them according to socially constructed definitions of race, nor sex organs that would lend to their being ascribed characteristics such as masculine or feminine—they are nevertheless raced and gendered. This underlying racial and gendered dynamic exists with respect to the biases found in the sources of data from which AI models generate their human-like content, biases that reflect the existing social, cultural, and political hierarchies framing the organization of the social world.
To grasp why this is so, it is important to remind ourselves that generative AI models, despite the moniker of intelligence, are machine learning systems—“statistical engines”—that differ significantly from how human beings actually reason or use language (Chomsky, Roberts, and Watumull 2023). Generative AI may quantitatively surpass human abilities in terms of processing speed or memory size, but these models are limited to identifying probabilities, making predictions, or inferring “brute correlations” based on the available training data: “Whereas humans are limited in the kinds of explanation we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round” (Chomsky, Roberts, and Watumull 2023). These limitations are concerning: in 2016, for example, Microsoft’s Tay chatbot—a precursor to ChatGPT—flooded the internet with racist and misogynistic content after online trolls fed it offensive training data (Chomsky, Roberts, and Watumull 2023). Absent the capacity to reason from moral or ethical principles, or to function with a sense of discretion, generative AI systems are restricted in what they generate simply by the information that is supplied to them.2
The racial and gendered limitations of generative AI are due not only to the inequities reflected in the data supplied to these models but also to how that data is collected. Data requires technological infrastructure, for example, and tech infrastructure is unevenly distributed around the globe (Yu, Rosenfeld, and Gupta 2023), with the Global North possessing most of the capacity as well as the capabilities to supply the data necessary for training AI models. This digital divide results in skewed input and, consequently, in generated data that does not reflect the cultures, languages, knowledges, and experiences of communities that do not have large data sets. Nor is this divide only regional; it exists within the Global North itself, depending on what experiences, present and historical, have been valued enough to record and digitize, and which therefore frame the mainstream knowledge on any given subject. Take as one example how ChatGPT was unaware of, or unable to account for, the significance of blues singer Bessie Smith in shaping generations of blues, jazz and rock musicians (Nkonde 2023). Trained on existing data that minimizes the artistic and intellectual contributions of Black women, the generative AI model replicated these biases, generating content that was racist and sexist.
Such underlying biases are important to consider when imagining the potential role to be played by generative AI resources in institutions of higher education in the service of critical and engaged learning. Treated as authoritative sources of information—itself another limitation with respect to their capacity to contribute to lateral forms of dialogue and engagement with students—generative AI models risk perpetuating existing racist, sexist, and imperialist discourses that hinder the cultivation of critical insight among students about conditions in the real world.
Such biases, however, are not an exclusive limitation of generative AI, and can be encountered across institutions of higher learning. Yet, this is also why in the service of fostering engaged learning practices, hooks (2014) emphasizes the collective as opposed to individualized dimensions of critical thinking. Disrupting the authority of the teacher-as-expert model and the implicit biases that instructors may perpetuate requires a willingness to acknowledge the contributions to be made by a “classroom community.” A community of learning has at its disposal the wealth of diverse experience and expertise that can help decenter epistemic authority and foster a richer learning experience.
This corrective, afforded by education as a collective, embodied practice, is difficult to imagine with respect to generative AI, which is instead actively marketed and lauded by developers for its personalized dimensions. Generative AI’s personalized form of instruction and engagement is presented as a more desirable or beneficial form of student engagement, yet this framing ignores the role of community building—a complicated, multidirectional and sometimes scary affair—and of collective effort in helping to guide and enrich learning experiences. The reduction of education to an individualized effort or personal exercise is not entirely new and fits within the broader neoliberalization of education, and of social life in general, in which both the critical and communal aspects of learning are consistently underemphasized. Within the context of the neoliberalization of education (Mintz 2021), the limitations of generative AI are not limits at all but simply more efficient means of cultivating passivity instead of critical thought, individuation instead of community, and conformity instead of freedom.
A final limitation which this paper will address concerns issues of safety and surveillance and the potential risks involved for both teachers and students when utilizing generative AI platforms. Though generative AI models are trained on publicly available data, their design and development take place within the sphere of private industry. What degree of democratic decision making is taking place within privately owned industries, and what level of accountability or social responsibility do they have to the broader public? Some of these concerns were recently raised following the inaugural AI Expo for National Competitiveness, where tech giants like Google and Microsoft discussed AI, the military, and the future of warfare (Haskins 2024). The conference’s lead sponsor, Palantir Technologies, was made infamous at the height of the Trump administration’s family separation policy after it was revealed to be providing digital tools to U.S. Immigration and Customs Enforcement (ICE) to apprehend and deport undocumented migrants (MacMillan and Dwoskin 2019).
As educators concerned with our own safety, as well as that of our students, we must consider what risks future relationships between AI technologies, warfare, and surveillance practices might pose. These concerns are heightened given the capabilities being built into existing generative AI models that allow for forms of surveillance in the name of safety. In a recent conversation on ‘How AI Will Revolutionize Education’, Sal Khan (2024), founder of Khan Academy, discussed some of the safety features in place in his organization’s newly developed AI-for-education tool ‘Khanmigo’. For example, if a student asks the AI model a question such as ‘how do you build a bomb?’, the interaction is flagged (Khan 2024). In one interaction, Khan recalled, his daughter, while using the system, asked the AI model how to build a project that would “beat” the other team, and her teacher was consequently informed (Khan 2024). Though it is possible to imagine future generative AI models that better avoid such misinterpretations, it is the very mechanism of surveillance itself that remains a point of concern.
These concerns are magnified in our current context, where right-wing xenophobia, white supremacy and an ever-growing misogynist and transphobic politics continue to disseminate themselves within our social institutions and digital realities. These developments have marked negative impacts on the practice of education. The banning of “controversial” books such as Alice Walker’s (2013) The Color Purple in U.S. schools, and the added pressure on university educators who incorporate critical race theory into their classes, raise concerns about student development and the continued cultivation of critical thinking (Moody 2022). In line with the general rollback of the humanities and social sciences at institutions of education at all levels, there is also an attack on critical learning, thinking, and being. Now more than ever, it is imperative for educators to maintain the integrity of their practice and to reimagine what critically engaged teaching and learning can look like in a rapidly changing environment.
Conclusion
The public availability of novel technologies, be it the world’s first public railway, the printing press, or the personal computer, has always initiated moments of change in the social world. Generative AI too will continue to have an impact on how we as humans engage, work, and communicate with each other. These impacts have a marked effect on the practice of education in institutions of higher learning, where forms of evaluation and skill development such as the art of writing, the practice of research, and dialogic exchange are increasingly passing through the authoritative and efficient channels of generative AI.
Like any other tool, generative AI has the ability to make certain tasks redundant, particularly those that are harder to justify among current cohorts of students who are already struggling to feel engaged, invested, and inspired by their higher education experience. The marketization of higher education, and the treatment of students as sources of revenue taught by underpaid and overworked contract lecturers, is having its inevitable impact. In such a context, demanding that an undergraduate student, who may be working full- or part-time to financially support themselves, write a 12-page term paper on material that feels far removed from their life, for a course in which they do not feel engaged, is a significant demand. It is even harder to convince them not to turn to an untraceable, presumably intelligent, and instantaneous generator of text that will simply do it for them.
Short of reforming institutions of higher education along the lines of an ethic that prioritizes support and well-being for both students and instructors, generative AI can be, and has been, utilized to help address some of the barriers to more critical, communal, and engaged learning practices. As a tool, it can be leveraged to fulfill pedagogical needs rather than have pedagogical needs adapt to it. Some examples of this leveraging, and of the integration of AI models into collaborative and experiential learning assignments, have been noted above. However, like any other tool, generative AI is not without significant limitations, particularly with respect to its potential for fostering critical and engaged learning. On the one hand, these limitations can be considered inherent to the form and functions of the tool itself, whether in its data collection processes, its pre-programmed biases, or its disembodied and depersonalized dialogical abilities. On the other hand, significant limitations spring from the conduct and character of the privatized industry within which generative AI continues to develop, and from the risks this poses for those who engage with it. Despite the uncertainties of the moment in which we as educators find ourselves, it is nevertheless a moment of crisis that can be utilized to reimagine critical education practices and push back against predominating pedagogical practices that are no longer tenable in our current age.