Translingualism
Approaching Language from a Global Perspective
Kamal Belmihoub and Lucas Corcoran
Language is a tough one. Most of us feel, and rightly so, that we already know what it is. We expertly use language every day at school, at home, on the subway. Yet if someone were to approach us at a party and ask for a concrete definition of language, many folks, the authors of this entry included, might stumble for an answer.
On the other hand, we might respond off the cuff with something like: “Well, language is what we speak—English, Mandarin, Urdu…” But if this pesky questioner continued with a follow-up, such as: “Well, what is English?” or something even more confounding like: “How do the sounds of words relate to the things they are supposed to represent?” most likely, we’d ask this person to leave the party.
The discipline of linguistics—the study of language—however, begins in such questioning. Debates over what language is have been going on for millennia, and different historical periods and different cultures all have offered their own answers to the question. Linguistics, though, as a field of inquiry independent from philosophy, began in the early 20th century. In this sense, the term “linguistics” most frequently denotes a specialized scientific way of thinking about language.
Unlike other subjects that deal with language, such as literature or rhetoric, traditional approaches to linguistics are concerned not with the cultural uses of language but only with formal definitions of it. Why does this matter? The way we study, teach, and learn language has frequently looked to this type of linguistics for its model. Language turns into a body of knowledge instead of a social activity. Because of this way of thinking, educators, too, have traditionally assessed students on what they know about language instead of what they can do with it.
As a major study of formal conventions, what is now known as structuralist linguistics began in the early 20th century with the French linguist Ferdinand de Saussure, who illustrated a new way of thinking about language that still exerts a powerful force today. Structuralist linguistics considers languages as essentially systems. We might compare this way of thinking about language to how computer coding works today. A language system has variables and rules for organizing those variables in meaningful ways. Each language is accordingly defined by the variables and rules unique to it. In many ways, this is a very compelling picture.
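To make the coding comparison concrete, here is a toy sketch in Python of a “language” defined entirely as a system: a set of variables (word classes) and a single rule for ordering them. All of the words and the rule itself are invented for illustration; real grammars are, of course, vastly richer.

```python
import itertools

# A toy "language as a system": variables (word classes) plus
# a rule (an ordering) that combines them into well-formed sentences.
# Everything here is invented purely for illustration.
lexicon = {
    "Det":  ["the", "a"],
    "Noun": ["linguist", "student"],
    "Verb": ["studies", "describes"],
}

# One syntactic rule: Sentence -> Det Noun Verb Det Noun
rule = ["Det", "Noun", "Verb", "Det", "Noun"]

def generate_sentences():
    """Enumerate every sentence the rule licenses."""
    slots = [lexicon[category] for category in rule]
    for words in itertools.product(*slots):
        yield " ".join(words)

sentences = list(generate_sentences())
print(len(sentences))   # 2 choices in each of 5 slots: 32 sentences
print(sentences[0])     # "the linguist studies the linguist"
```

On the structuralist picture, knowing a language amounts to knowing something like this: the inventory of items and the rules that license their combinations.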
For example, when adult language learners pick up a textbook in order to learn Arabic, they are very much learning a language as a system. For adults with no experience in Arabic, the only way to study is through learning the meaning of words, what linguists call semantics, and the rules for organizing those words, what linguists call syntax. From the perspective of adult language learners, the process of learning Arabic might often feel as if they were “uploading” code into their minds. The better they can model their learning processes to act as if they were a computer, so it would seem, the quicker they could learn Arabic.
However, as many parents or people with younger siblings already know, how children learn a language is quite different. Kids—in the living room, in the classroom, or in the schoolyard—don’t seem to need any explicit rules for learning a language. They just pick it up. And, often to the chagrin of high school writing teachers, they often are perfectly capable language users who can give no clear explanation for why they used one linguistic feature instead of another. They just do it, in the same way that most able-bodied people don’t have to think about walking up a flight of stairs or chasing after their hat if the wind catches it.
Notable linguist Noam Chomsky tried to explain this phenomenon in the mid-20th century by saying that all “normal” speakers must have a universal competence for speaking and using a language. In other words, human beings have an innate mental capacity to learn and create new language from a seemingly very limited exposure to language growing up. In this sense, Chomsky’s work continued in the structuralist tradition set out by Saussure, since Chomsky concerned himself primarily with understanding language as if each of us had a language computer in our heads. Adult language learners have to make explicit use of this computer in learning a new language, but children up to a certain age do it naturally without thinking consciously about it.
Recently, however, this picture of language has come under fire, as linguists think more and more about the cultural and political roots of language. Although the models of structuralist linguistics explain a lot, they seem to treat the messy human phenomenon of language too rigidly. In day-to-day life, we don’t seem to produce language from our internal language computers, processing and decoding information; we seem to act it out without even a second thought as we respond to complex and culturally distinct communicative scenarios. In this sense, linguistics began to pose the question: What would a model of language look like if speakers weren’t essentially language computers? How could we describe language from an embodied cultural perspective—in other words, from a perspective that thinks about individuals using language, or multiple languages, within specific cultural contexts?
Let’s look at English as an example to answer these questions. English is a highly diverse language, and there are more so-called non-native speakers of the language in the world than there are native speakers. The unprecedented spread of English across the globe through colonialism and economic, military, and political strength has brought about the emergence of new varieties of English known as World Englishes. It has also spearheaded a new understanding of language and communication. Concepts such as communicative competence (what does it mean to be able to use a language?), native speaker fallacy (how do we define who’s a “native speaker”?), and intelligibility (what’s the difference between “correct” usage and understood usage?) are essential to this new understanding of language.
All of us have internalized some notion of what it means to speak a language “correctly.” Often our parents spend lots of money to ensure that we grow up speaking English, Spanish, Mandarin, or any other named language in an appropriate fashion. However, many linguists today reject the idea that there are “correct” and “incorrect” versions of any language. Instead, they argue that there is no meaningful difference between a language and a dialect and that all languages and their varieties are equal. Certainly, some language varieties carry more weight in different situations. For example, on the job, if you don’t speak “proper” English, you may face the very real possibility of getting fired or missing out on a promotion.
However, from a linguistic perspective, all languages are equally suited for making meaning in the world. An adage that illustrates this is that a language is a dialect with an army and a navy. That is, languages that are seen as superior are not inherently superior; they came to be seen that way thanks to the economic and political strength of their speakers. The rise of the United States as a post-World War II superpower demonstrates how a language like English can spread rapidly. More recently, the availability of the internet has allowed speakers of many languages to draw upon their diverse linguistic repertoires to communicate for a variety of purposes with diverse speakers.
Sometimes two or three languages, even if the speaker is not entirely proficient in them, are used in the same conversation, and each of them serves a valid function. What allows speakers with diverse linguistic repertoires to communicate is referred to as communicative competence. Communicative competence includes linguistic competence (e.g. ability to choose appropriate grammar and vocabulary), strategic competence (e.g. ability to use body language and the environment to solve communication breakdowns), discourse competence (ability to produce a coherent message), and sociolinguistic competence (e.g. ability to use language in culturally appropriate ways).
Developing these competencies allows speakers to achieve intelligibility. Interaction among speakers with diverse linguistic backgrounds is possible when the focus is on understanding each other and being mutually intelligible rather than on being correct and speaking like a native speaker. The myth of the native speaker has by now long been dispelled. No language belongs to anyone. Anyone can appropriate language for their own purposes.
A famous Arab-Berber Algerian writer writing in French once characterized the French language as a spoil of war that France left after Algeria won the war for independence against the former colonial power. Language is a tool that can be appropriated by anyone and used for a variety of purposes.
In addition, native speakers today have no automatic advantage in a global information economy. Those who learned English later in life are more likely to achieve intelligibility among themselves than when communicating with a so-called native speaker. Thus, an American in a room with Indian, Chinese, and European businesspeople might fall short of closing a business deal. Users of English around the world develop their own norms, depending on their language backgrounds. As a result, communication is shaped by its context. While we used to think of a language being spoken either as a second language, in a context where the majority speak it, or as a foreign language, in a context where a minority use it as an additional language, we now think of a speaker’s multilingual repertoire as an essential asset when communicating in different contexts.
This new way of thinking of language has revolutionized the way we now think about bilingualism. Using structuralist models, linguists once thought being bilingual meant two monolinguals inhabiting a single body, like two roommates in a cramped New York City apartment. Bilinguals, in this sense, are speakers who speak two different languages like two different “native” speakers. And, if they can’t speak as two “native” speakers in one, they aren’t yet “balanced” bilinguals.
This belief is based on the idea that named languages—like French or German—are separate and distinct things. For instance, in this model, French and German would count as completely separate systems that exist regardless of whether any particular speaker uses them. However, as structuralist models begin to lose their overall force in the field, linguists have begun to think very differently about bilingualism. Instead of bilingual speakers “switching” between two separate language systems, they select features from one unified and idiosyncratic linguistic repertoire. Linguists call this a person’s idiolect.
According to the most recent scholarship on bilingualism, every speaker has a distinct idiolect composed of linguistic features that are most often traditionally associated with particular geographies: French is spoken in France, German in Germany. Based on the idea of the idiolect, forward-thinking scholars of bilingualism have introduced the term translanguaging into the conversation. This humdinger of a term hopes to represent how bilinguals use all of their language resources in order to communicate in a variety of different circumstances. From a structuralist perspective, bilinguals often “mix” their languages when speaking to other bilinguals. But from a translanguaging perspective, no such mixing occurs. Since all speakers, monolinguals included, strategically select features from their idiolects, it is not possible to mix languages. Seen from the perspective of one’s idiolect, separate languages simply do not exist.
Translanguaging suggests that language is always a negotiation between the linguistic features that speakers already know and the communicative situations in which they find themselves. Although speakers might very well speak only one “language” in a certain situation—English in the classroom, Arabic at home, for example—translanguaging argues that this is due much more to a social necessity than to a linguistic one.
In other words, from a strictly linguistic standpoint, all that really exists are the words that any given speaker knows. This view of language also changes the way that educators have come to think about the ways that they teach language in all sorts of different classroom situations. Instead of having students master one distinct language—what it might mean to learn English, for example—a translanguaging approach to teaching means that students are always adding new linguistic features to already established repertoires. That is, speakers never start from zero and learn a brand new language. Rather, they add new and different features to the holistic language that they already speak. We can also say that this education is always taking place. Outside of the classroom, speakers are always incorporating new linguistic features from the books they read, the music they listen to, and the people they talk to.
Translanguaging’s shift from self-contained languages to individual linguistic features allows us to think of language as an ongoing process—speakers are always learning new features and deploying old ones in constantly renewing, meaning-making contexts.