Category Archives: language and linguistics

Streamkeepers of the language

A few months ago, a fellow editor, Paul Cipywnyk, told me and other members of the Editors’ Association of Canada about something perfectly awful that had happened. Continue reading

indigenous

This word stays right at the tip of the tongue: nasal /n/, stop /d/, affricate /dʒ/, nasal /n/, fricative /s/. Ironic, really, for a word that signifies rootedness.

But it has assorted echoes and overtones that are quite suitable. For instance, igneous, which names a type of rock that has sprung new from the heart of the earth: molten, flowing like the river of life, and then settling and solidifying and becoming a durable part of the ground of its particular place.

And Indian, a term often used for the indigenous peoples of the Americas. They got it, of course, by misapplication from European invaders who at first thought they had arrived in India. The people of India in turn got the name from the river they were on the far side of (from the European perspective), the Indus – the name of which actually comes (altered) from Sanskrit for “river”. Indigenous, on the other hand, comes from Latin meaning “in-born” – as in born in the place. At base, indigenous means simply “native” – as in “born there” – though of course it tends to be used to mean “native” as in “belonging to an aboriginal people”.

Indigenous also has some taste of indignity. In fact, it has gained a considerable popular connotation of having suffered indignities and worse. The various excursions of colonialism and imperialism saw invaders from countries such as England, France, and Spain (just for example) subjugating the indigenous people of the lands they landed on and visiting all sorts of indignities on them as they stripped them of their lands, indignities that continue to have repercussions.

It’s no wonder that if one refers to the indigenous people of a given place, it tends to carry an assumption of disadvantage, since indigenous is used almost exclusively in reference to the victims of colonialism and imperialism. We see this attested in many of the most common collocations – and in the images they tend to bring to mind: indigenous peoples, indigenous culture, indigenous rights, indigenous knowledge, indigenous traditions… Oh, yes, knowledge and traditions: those rooted in the soil are often seen as having deep, true wisdom and authentic traditions. Which of course also carries implications about those not rooted in the soil.

And did you notice that indigenous also sounds a bit like and did you know? As in “And did you know that Japan also has an indigenous people?” And “Did you know that Scandinavia has an indigenous people?” And here is where we start to run into the problems with the assumptions that can be carried unstated with indigenous, and why it is much wiser to say disadvantaged when you mean “disadvantaged” and to leave indigenous to mean only “native to the location” without carrying assumptions about sociopolitical status or experience, value judgements (e.g., moral high or low ground), assumptions about wisdom or authenticity, or whatever else one may want to load unspoken on the back of this word. Allow me to look at some examples of indigenous peoples to sort out what I mean.

Let us start with Japan. Japan has an indigenous people, the Ainu, who were in north and central Japan (especially Hokkaido) before the Japanese arrived there, and who were, starting in the middle of the 19th century, subject to a policy of assimilation, which led to considerable loss of culture – and very substantial intermarriage. But, now, how about those Japanese people who assimilated the Ainus’ lands into their country? Well, current thought is that they are descended from the merger of two peoples, the Jomon, who were the first occupants of the southern part of the Japanese islands, and the Yayoi, who immigrated. (This would make them somewhat like the present-day English, who have both Germanic [immigrant] and Celtic [indigenous] ancestry, though the immigrant Germanic has prevailed linguistically.) So in fact the Japanese have some claim to indigenousness in the south of Japan. And they certainly have a strong cultural sense of belonging. But the Japanese are not disadvantaged and so have none of the pull of the underdog. If someone talks about “Japan’s indigenous people,” the odds are very high that they are meaning to say “Japan’s disadvantaged indigenous people of Hokkaido” – as though lack of disadvantage means lack of indigenousness. Yet Japan also has a disadvantaged group of people who are in fact ethnically Japanese: the burakumin. Are they less disadvantaged for not being ethnically different? No, they are not.

In northern Scandinavia, there is also a disadvantaged indigenous culture: the Saami (also spelled Sami or Sámi). You may have heard them called Laplanders, but this is not their own word and it’s not all that well liked by them. The Saami, traditionally, are reindeer herders, and the nature of their cultural circumstances led to some resemblance to Plains Indians (e.g., Sioux tribes – Dakota, Lakota, Nakoda): their dwellings looked like tipis, for instance (I use the past tense because generally they live in houses now, which should not be surprising), and some of their artistic output is similar, too (I recommend giving Mari Boine and some yoiks a listen to see what I mean).

But while there are cultural differences between them and other Scandinavians, they don’t actually look different. That, however, has not spared them bad treatment. And who has treated them badly, historically? Other Scandinavians – on the one hand, Finns, and on the other, Norwegians and Swedes (oh, and on the third hand, northern Russians). But here again we have the problem that the Finns have been in their part of Scandinavia quite possibly for as long as the Saami have been in theirs – since the last ice age. And the Norwegians are indigenous to parts of Scandinavia too, though they may only have been there for a couple of millennia. And they are not indigenous to the parts the Saami are indigenous to – and in their national dominance they haven’t always been very nice to the Saami, either.

You will see that when it comes to who’s indigenous, it’s more a question of who was there first than of how long they have been there. For instance, the Maori are the indigenous people of New Zealand. But the Maori have themselves been in New Zealand for less than a millennium. (Nor do they have a cultural tradition of having always been there; rather, there is a knowledge of having arrived there in canoes.) Before they were there, there were indigenous fauna and flora (some of which are now extinct due to human activity). But the Maori are the indigenous people; they were the first people there. And, yes, they certainly were overrun by the British Empire, with all that comes with that.

Now let me ask you: Who is indigenous in South Africa? Well, not Europeans, we know that (though it’s very important for some parts of the white population that they are descended from the first whites who got there, just as it’s important to some in my own family that they are descended from people who arrived on the first boats from Europe to America and from people who fought for American independence). But not all African people are the same, either.

The first people living where South Africa now is were the Khoi and San, the “Bushmen” and “Hottentots.” They’re still there, of course. About 1500 years ago, other African peoples from the north (often called Bantu as a group, from which came the Zulu, Xhosa, Sotho, and others) arrived and colonized and built their own empires. The Zulu empire was not an indigenous empire; the Zulu had come from elsewhere on the continent, though there was considerable intermarriage with the indigenous peoples (and some marked linguistic influence). But of course when the Europeans arrived, the Zulu were in turn defeated and subjugated. Does the fact that they weren’t indigenous make that more excusable? Were they less subjugated, or are they less authentic? Or have they become indigenous because from the European perspective they were “the natives” (you know, “the natives are restless”)? …Or should we be loading all this on the word indigenous? Would it not be better to examine issues directly and name them explicitly? Say that people A were conquered and subjugated by people B? Say that in the modern society of country X, people A are at a disadvantage simply by virtue of belonging to people A?

It’s not simply that the assumptions many people carry with indigenous imply that colonial peoples are rootless and without valid traditions or homelands, or that it is the way of the world that people who belong to a place will be subjugated by people who come from elsewhere. It’s also the implication that there is something inherently superior to being there first. Such an implication would make slaves stolen from other parts of the world less deserving than the people who stole them and took them to their homelands. Such implications have also fuelled aggression and hostility. There are certainly places in the world where people have fought long and viciously over who was there first.

And even the linking of indigenousness to subjugation has its dangers. Consider a country that found itself in dire economic straits. Some among its indigenous people chose to blame another people who also lived in the country – and had for centuries, but were not indigenous. They claimed that these “interlopers” were controlling the economy and subjugating them, and that the way to return to dignity was to purge the nation of them and return to the purity of the sons of the land. Well, we know what happened when that idea took hold of the country – I’m talking about the Nazis in Germany and their persecution of the Jews.

It’s very obvious that the case in Germany was vastly different from the case with colonized peoples such as Canada’s First Nations, or other disadvantaged indigenous peoples such as the Saami, or even disadvantaged non-indigenous people, even if they happen to “run the country”, as in Haiti. But the point I wish to make is that these different cases need to be looked at, and referred to, on the basis of what’s really going on: domination, subjugation, assimilation, slavery… The question of indigenousness is a dimension that should be named and addressed on its own and not be made to carry assumptions and implications. Remember, too: what is at one time an unstated assumption can easily become not just unstated but forgotten at another time. We have the words to speak clearly; let us use them.

This is, admittedly, more of a soapbox than I usually climb onto with my word tasting notes. And let me be clear: I grew up on an Indian Reserve, though I am of European descent (my parents worked for the tribe), and I have a very clear sense of what has often befallen indigenous peoples who have not managed to fight off invaders. (So, for that matter, do the Irish, for instance, who are also an indigenous people subjugated by the English – but those Irish who came to Canada became part of the colonizers.) I am not trying to erase anything. Just the opposite. I am saying we should name it. Be clear. And be careful about the implications and assumptions we make.

Indigenous also carries echoes in its sounds of ingenious, ingenuous, and ignorant. Let us try to be the first of these and not the second or third.

oot & aboot

Canadians who have ever encountered American perceptions of Canadian speech will be familiar with the idea that Canadians say, for instance, “oot” and “aboot” instead of “out” and “about”. What’s up with that, eh?

I mean, really. Canadians can hear each other perfectly well and have no problem telling whether someone is saying mouse or moose. If we walk into a shoe repair shop and say, “I’ve come about a boot,” it doesn’t sound like we’ve just said the same thing twice. Not to us, anyway. But it does to some Americans.

This is due to two things: categorical perception (I’ll get to that in a moment) and something linguists call Canadian raising. No, that doesn’t simply mean we were raised in Canada. What it is is that before voiceless consonants, many Canadians raise the first part of the diphthongs /aɪ/ and /aʊ/ so it is really like the vowel in up. Americans don’t really take great notice of our different pronunciation of the vowels in eyes and ice because there’s no other vowel the ice vowel sounds closer to, but the diphthong in out has moved up to where it falls within the range of sounds the Americans in question process as “oo”.
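If you like your rules made concrete, here is a minimal sketch of Canadian raising in Python. This is just a toy of my own – the phoneme set and the transcriptions are simplified illustrations, not a real phonological analysis:

    # A toy model of Canadian raising: before a voiceless consonant, the
    # first element of /aɪ/ and /aʊ/ raises toward the vowel of "up".
    # The voiceless-consonant set below is simplified for illustration.
    VOICELESS = {"p", "t", "k", "f", "s", "θ", "ʃ", "tʃ"}
    RAISED = {"aɪ": "ʌɪ", "aʊ": "ʌʊ"}

    def canadian_raising(word):
        """Apply raising to a word given as a list of phoneme strings."""
        out = list(word)
        for i in range(len(out) - 1):
            if out[i] in RAISED and out[i + 1] in VOICELESS:
                out[i] = RAISED[out[i]]
        return out

    print(canadian_raising(["l", "aʊ", "d"]))  # loud: ['l', 'aʊ', 'd'] – no change
    print(canadian_raising(["l", "aʊ", "t"]))  # lout: ['l', 'ʌʊ', 't'] – raised

Run it and loud comes through untouched while lout gets the raised onset – the same loud/lout test I suggest a little further down.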

And this is what linguists call categorical perception. All speakers of all languages do it: a given sound is not always made exactly the same by all people at all times, so we learn (at a very early age) to process whole sets of sounds as the same sound, and we generally take no notice of the differences between sounds in a set. The /p/s in pot and spot are different, for instance – the one in pot has a puff of air after it, whereas the one in spot does not. Hold your hand in front of your mouth as you say both and you’ll feel it. This difference is a phonemic difference in many languages – in Thai restaurants, for instance, you’ll probably notice that there are p’s and ph’s but they both sound like “p” to you. Well, the ph one is like the one in pot, and the p one is like the one in spot.
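Here is the same idea as a toy demonstration – the category tables are simplified sketches of my own, not complete analyses of either language:

    # Categorical perception in miniature: the same phones sort into
    # different phoneme categories depending on the listener's language.
    # [pʰ] = aspirated p (as in "pot"); [p] = unaspirated (as in "spot").
    english = {"pʰ": "/p/", "p": "/p/"}   # aspiration is allophonic
    thai = {"pʰ": "/pʰ/", "p": "/p/"}     # aspiration is contrastive

    for phone in ("pʰ", "p"):
        print(f"[{phone}] -> English hears {english[phone]}, Thai hears {thai[phone]}")

Same two phones in, different categories out – it all depends on whose mapping table your ears are using.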

Likewise, the /l/ in Calgary is quite a different sound from the one in loud, but we tend to take no notice. And of course we know how speakers of many other languages can’t make a good distinction between our beat and bit (Russian acting teacher Sonia Moore referred to sections of scenes as bits, but her accent led her students to think she was saying beats, and that has passed into standard acting vocabulary). And so on.

So while Americans and Canadians both have “oo” and “ow” sounds, the borders between them are different. And many Canadians raise the first part of the diphthong before a voiceless consonant, pushing it into where Americans hear it as a version of “oo”.

But I should say that not all Canadians do the same thing. The ice raising is more widespread – I grew up with that in Alberta. But raising before voiceless consonants is not common with /aʊ/ in Alberta and the rest of the west (especially not in the higher socioeconomic strata) – it’s more standard in Ontario and east. (Do you do it? Say loud and lout and see if you can hear a difference.)

Nonetheless, Albertan out can still sound like “oot” to many Americans, especially northern and northeastern Americans, who use a lower and more front vowel for “ow” (sometimes even more like /æo/, i.e., starting with the vowel in cat) so that all Canadian versions of “ow” sound kind of “oo”-ish to them.

But you know how it is. So many people think they’re the only ones without an accent, and whatever sounds so to them must be so. And this idea among Americans that Canadians say “oot” and “aboot” is so firmly rooted that some Americans won’t even listen carefully. “You’re from Canada? Say out.” “Out.” “He said ‘oot’! Oot! Oot! Oot! Canadian, eh? Eh? Eh? Oot! Oot! Oot!” Really, it gets to sound like apes and jungle birds. Makes me want to give them a boot…

How to explain grammar

Presented at the 31st annual Editors’ Association of Canada conference, Montréal, May 29, 2010

Handout (PDF, 440 KB)

So OK. You look at the manuscript you’re editing, and you see… this:

Adding the ingredients in this order ensures failed chiffon cakes made at home is not an option.

OK, what’s the first thing you do? After sending a “seen in the wild” email to the EAC email list, I mean.

Well, yeah, you correct it, or humbly suggest a correction to the exalted author, depending on the project. But, ah, right then, what is going on here? And what if you make a correction and the author says, “No, it was fine the way I had it. It makes perfect sense to me, and it’s grammatical”?

Well… Continue reading

A new way to be a complete loser

I have just read an article in the New York Times, “The Self-Appointed Twitter Scolds,” about a set of people who have taken it on themselves to correct sloppy grammar on Twitter whenever and wherever they find it. Some even have automated programs that will send criticisms to complete strangers.

This is, perhaps, not surprising, but it is nonetheless disappointing. To think that there are people whose lives are so pathetically devoid of any sense of control or significance that they feel the need to dispense wholesale rudeness personally to anyone who fails to match their idea of grammatical perfection! These people need to go out and buy some manners. Even the cheap kind of manners they can get at discount stores will prevent this. This sort of behaviour is like walking down the sidewalk looking for people who are, for instance, wearing stripes with plaid, or even blue with green, and saying rude things to them about it.

I’ve said it before, and I will keep saying it: The rules of language are made to serve communication, not the other way around. The rules of grammar that we have are a codification of common practices that arose through actual usage, and the point of them is to give people a clear and consistent means of communicating with each other – so one human mind can reach out and come into contact with another human mind. Grammar is the means. The moment it is taken as the end, we have what is now commonly known as a FAIL. To use a Buddhist analogy, what these people are doing is like focusing on the finger rather than on the moon that it is pointing at.

Or let me use an analogy familiar to concert-goers. How often have you been at a concert, or the opera or ballet, and heard someone across the theatre going “SSSHHHHH!” at someone? Tell me, now, how often have you heard the person they were shushing? The SSSHHHHH is louder and more disruptive than what it aims to correct. It is a form of rudeness pretending to be a form of enforcement of politeness.

Likewise, while it may be bad manners to tweet in all caps, it is much worse manners to send a tweet to someone out of the blue carping on their use of all caps. And while making a lot of typos may be a little distracting and may seem to show imperfect concern for the reader, that’s hardly at the level of rudeness shown by those who tweet back complaining about them.

The truth is that no one is a perfect user of English all the time. It’s not really possible, since there are points of dispute such that some people will think one thing correct and others will think a different thing correct. But, more than that, English is not one language with the same rules and structures all the time. It has a variety of levels of usage appropriate to different contexts. (See “An appreciation of English: A language in motion” for some background.) It is as wrong to use formal locutions in a casual context as vice versa, for instance. And certain grammatical “errors” can be a good way to signal a casual, friendly context – don’t say it ain’t so.

More to the point, one thing I have never failed to observe is that anyone who is inclined to be hostile about other people’s grammar inevitably makes mistakes and has false beliefs about grammar. Often the very thing they’re ranting about they’re mistaken about (see “When an ‘error’ isn’t”). But beyond that, you can feel sure that they will get other things wrong even by the prescriptive standards they adhere to, be they idioms, points of grammatical agreement, or what have you. And you can feel entirely certain that they are utterly uneducated in linguistics, having false beliefs about, for instance, what is and isn’t a word.

Am I advocating an “anything goes” approach to grammar, whereby we toss out all the rules? Of course not. I’m a professional editor, after all. If you want to deliver a polished message, you want to make sure that it doesn’t have deviations that will distract or annoy people. There is a reason for having standards – we want to make sure we all have a point of reference so we can communicate with each other. But, again, the point of those standards is to serve communication, not the other way around. They are tools. They are not indicators of a person’s quality. An infraction of them causes no one injury.

And breaking grammatical rules is simply nowhere near as bad as being unspeakably rude to people about their use of grammar. Let it go, people. The English language is not being destroyed by people who make typos. The most damage that has been done to English has been done by people who appointed themselves its correctors.

I’d say that if you want to, you can write it this way

A fellow editor was having a contretemps with a colleague who insisted on putting a comma after that in constructions such as I’d say that, if you want to, you can write it this way and You can see that, the more you know, the more you know you don’t know. The theory is that these are appositives – parenthetical insertions, effectively – and should be set off on both sides by commas.

The two cases cited are actually not identical. When the phrase is integral, one cannot treat it as parenthetical, and so in particular it’s actually incorrect to write You can see that, the more you know, the more you know you don’t know. This would imply that one could have You can see that the more you know you don’t know, which one cannot; the the…the is a correlative pair (and, for the curious, this the is not actually the normal the but is in fact descended from an instrumental case form of the demonstrative pronoun).

As to the sentence I’d say that, if you want to, you can write it this way, one can indeed remove the if you want to and still have a coherent sentence (if, in this case, a jerky one), and so it can be treated as a parenthetical, but one is not required to do so. That introduces a subordinate clause that can stand on its own syntactically (unless it’s subjunctive), and anything that can stand on its own as a sentence can follow the conjunctive that without a comma. Anything – try it. (Sometimes it’s a bit lumpy, of course, but it’s not wrong.) That includes if X, Y as well as similar constructions such as because X, you can Y.

So you have a choice: either that introduces you can write it this way with the parenthetical insertion of if you want to, or it introduces the whole clause if you want to, you can write it this way. In the latter, no comma is used.

For a thousand years it’s good English, then it’s a comma splice?

I was a bit surprised by a query from a freelance editor I’m working with. She was asking about how to treat sentences of the “First do this, then do that” type. “Adverbial conjunction? Run-on?” she asked. “Truth is, I’m fine with it in informal writing, especially when the two parts are very closely connected. But because so many people consider it a run-on, I usually change it.”

So many people what?

Well, it turns out she’s right. Many people do think that it’s wrong to write, for instance, “I picked up the groceries, then I stopped at the liquor store.” “Comma splice!” they admonish. “Should be ‘…and then.’”

Well, geez. They should have told that to all those educated, fluent people who have been doing it that way for the past millennium or so, so they wouldn’t have been wrong all this time! Continue reading

The onesies

There’s been a lot of discussion about what to call the decade just ending.* But never mind that. What about the decade just about to start, the set of ten years with 1 as the third digit? The one that starts with the last year of the first decade of the third millennium and ends with the second-last year of the second decade of the third millennium? I wish to make a formal proposal: let’s call it the onesies.

Does that sound like something a baby would wear? Yup. Good. After you’ve made your oh-ohs (or done your naughties), you can get into your onesies. Seems to suit the general trend of the world. The infancy of a new millennium… hopefully the best one yet.

I find that onesies is also another name for the game I know as jacks. That’s good: playing pickup while trying to catch the bouncing ball.

And if you’re saying “Why not the teens?” my answer is that the first three years (10, 11, 12) aren’t teens. The teens are a set of seven years – a septennium.


* No, I don’t mean the first decade of the 21st century, which ends a year from now; I mean the decade after the nineties, which overlaps 9 years with the first decade of the 21st century. Yes, we can do that (see “When does the new decade begin?”). I personally prefer the oh-ohs. But the naughties is also good. Some people like to use the spelling the noughties for distinction. I prefer the naughties precisely because of the pun! I don’t like the aughties not only because it’s not such a good pun (even if you spell it the oughties) but because aught originally, and still also, means “something” and came to mean “nothing” just by confusion (a naught → an aught).

To be a preposition or not to be a preposition

So… is the to before an infinitive a preposition? If you have a sentence, e.g., “He decided to write a blog post on the topic,” is the to a preposition, or is it just a part of the infinitive?

It’s a tricky question, is the short answer. The detailed answer starts with Continue reading

teh

lolz. this is teh l33t. im in ur langwidj eating ur wordz. lolspeak: ur doin it rite. all ur teh are belong to us!!111! i can has noms now?

If you don’t grok the above, I suggest you spend several hours on icanhascheezburger.com, after which you will be laughing too hard to care. And then Google leet.

But what is teh? Oh, come on. You’ve typed it a squillion times. Yes, it’s when your left hand follows the t of the with the e before your right hand can get the h in.

“Pshaw! It’s not a word! It’s just a typo!” you object. And this was true for a long time. People have been typing teh for as long as there have been qwerty keyboards, after all. Blame Christopher Sholes, who devised the qwerty layout (which was later modified by Remington) to prevent typewriters from jamming – it moved apart some pairs of letters frequently typed together.

But now, though not in formal English, teh has taken on a special valence and even, in some versions of typed slang, some extra functions. It all proceeds, certainly, from teh being a typo, one typically made when typing quickly or carelessly. From that it can be a mark of a self-consciously “sloppy” or “incorrect” way of typing.

In lolcat speak – the deliberately “incorrect” usages attributed to cats in those funny captioned pictures (as on icanhascheezburger.com) – it displays the imperfect English use that is yet another endearing feature of our furry oral-retentive friends (so focused on getting their noms – meaning food, because when you eat it you go “nom nom nom nom”).

In leet, an in-group type-based argot favoured by those who wish to claim an elite level of tech savvy, it is a winking in-group usage, like pwn (for own, which in this case is a verb meaning “defeat, dominate, perhaps humiliate”) and typing 1 in place of ! and 7 in place of &. (Leet also does other deliberate substitutions, for instance numbers in place of certain letters – leet can be written l33t or even 1337 – and novel morphology, such as xor, an agentive noun suffix that can also be verbed.)
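For the concretely minded, the substitution principle fits in a few lines of Python. The table here is a small illustrative sample of my own – there is no single complete leet standard:

    # A toy leetifier: number-for-letter substitutions as described above.
    # With only e -> 3, "leet" becomes "l33t"; with the fuller table, "1337".
    LIGHT = {"e": "3"}
    FULL = {"e": "3", "l": "1", "t": "7", "o": "0", "a": "4"}

    def leetify(text, table=LIGHT):
        """Swap each letter for its leet substitute, leaving the rest alone."""
        return "".join(table.get(ch, ch) for ch in text.lower())

    print(leetify("leet"))              # l33t
    print(leetify("leet", table=FULL))  # 1337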

But in leet, teh can also be used to make the following word a superlative adjective without further inflection, even if the word is a verb (ur teh lame would mean “you’re the lamest”; this is teh rock could mean “this really rocks,” although one might more likely see further modifications to make it, for instance, this is teh r0xx0rz). And because of its self-consciousness, it can add extra ostension and possibly irony to the noun it specifies (you’re teh boss).

So teh has become something on the order of ain’t in its effect as a register marker. And because written language always begs for a way to be said – since the spoken form is the primary form of language and the written form’s first purpose is to represent it – it needs a pronunciation. Teh is usually pronounced just as it looks, but generally with the h silent, and perhaps with the vowel reduced so it’s like the but with a voiceless stop rather than a voiced fricative.

This word also has other little overtones and notes that can be found beyond its rather layered usage implications. For one, since it is a rearrangement of the letters of a word, other rearrangements also play in, notably het, which is one of the Dutch equivalents of “the” (the other is de; het can also mean “it”), and eth, not just an archaic inflectional ending (he maketh; he shibboleth; he smiteth) but a long-disused character in English which could, if it were still in use, prevent the typo that gave rise to this word in the first place by allowing us to write the word with it in place of the th (ðe). (Actually, the was formerly written with a thorn (þ), not an eth, and when both of those characters were dropped because they weren’t in the type sets brought over from the continent, thorn was sometimes represented with a y, giving us ye – still pronounced as “the” and usually written with superscript e – for þe, i.e., the.)

And teh also happens to be the Yale transliteration of the Mandarin Chinese word now (in pinyin) written de, as in Dao De Jing, best known in its Wade-Giles version, Tao Te Ching, but in Yale rendered Tao Teh King. And what is this teh? Virtue, also described as strength, power, integrity, etc. If u has virtue, ur not just teh king, ur teh l33t!