This is the text of a presentation I made to the Toronto branch of the Editors’ Association of Canada, Sept. 24, 2007. Certain parts were sung; you can guess which.
It ain’t necessarily so, no,
it ain’t necessarily so,
the things Strunk and White
want to tell you are right,
it just ain’t necessarily so.
Getting pissed off about grammatical errors is a favourite activity of a surprisingly large portion of English speakers. Look at Eats, Shoots and Leaves – everybody just loves a good rant, and plenty of people upon reading it seemed ready to go to their local greengrocer, if not with tar and feathers, then at least with magic marker in hand to fix those awful apostrophes! Somehow, in English, a grammatical failing takes on the dimensions of a moral failing – or certainly in any case an intellectual one. And we all know what it feels like to see some egregious error and be outraged. Outraged! Of course, all this is bread and butter for us. But we also know that some things that most people think are right are not – not going by the standards we uphold, anyway. And we know that some things that many people think are wrong – and can get quite indignant about in some cases – are not wrong at all. (And some of us enjoy getting quite indignant about those fools who go around insisting things are wrong that aren’t wrong at all! Boy, people who think they know everything are very annoying to those of us who do!)
And we know that sometimes we have to make a judgement call. Do we use “they” for the third-person singular? What if “he” would be seen as sexist, “he or she” is distracting and inelegant, and rephrasing is not possible? Sometimes – more often than perhaps we think – we are the ones who decide what is and isn’t right.
And who decides it the rest of the time? Well, really, everybody. And all the time. Language is a constantly evolving thing. It never stops changing – or if it does, it’s dead. Dictionaries make judgement calls and certainly influence usage, but they’re not legislation. The presence or absence of a word in a dictionary does not determine whether the word exists or not. Dictionaries document existing usage. They’re like field guides. If a birdwatcher sees a bird that’s not in the field guide, does he decide that it doesn’t exist? “Look, that’s not in the book, so it’s not a bird. Don’t take a picture of it! Stop looking at it! We’re birdwatchers, and that’s not a bird! If it were a bird, it would be in here, and I can’t find it in here!”
The analogy does break down, of course, because people who use language are guided by dictionaries. Also, dictionaries don’t say anything about syntax. But you get the point. English is a messy thing. There’s no single authority, just a lot of more – or less – astute analysts, describers, and guides. And, broadly, they are often divided into two camps: descriptivist and prescriptivist.
There’s a reasonably well circulated web newsletter called The Vocabula Review. Its motto is “A society is generally as lax as its language.” This is a popular prescriptivist position. It’s also bullshit – there’s no documented correlation between laxity of language and laxity of society. Hell, anyone here ever tried to learn Latin? Would you call it a lax language? Anyone know about the ethos of the society that spoke it? Anyway, how the heck do you really measure laxity in language? You have to pick a set of norms that it shouldn’t deviate from – but it’s a fact of language that every language that anyone has ever spoken in all of recorded history has evolved from a previous version of that language and ultimately from something not even recognizable as the same language. And even slang has regular rules, albeit different ones. So where do you draw the line?
In fact, it gets better. One line that prescriptivists have danced on both sides of in determining what’s proper English is whether we should be adhering to the usages of our virtuous forebears, or correcting the faulty usage of people who have been speaking the language sloppily for centuries. Correcting faulty usage? Why, yes: nobody realized that double negatives and double superlatives were wrong until someone in the 18th century pointed it out to them. “Oh, most unkindest cut of all!” Adhering to the virtuous forebears? Anyone who tells you that a word has to mean what people used it to mean a hundred years ago is making such an appeal. (In reality, etymology is not a suitable guide to current meaning. “Awful” used to mean “worthy of respect,” for instance. “Dinner” in its origins meant “first meal of the day.” “Silly” originally meant “blessed.” “Throw” and “warp” have traded places, semantically.)
Now, let me ask you: what’s better, talkin’ or talking? Hearing, or hearin’? We would all say that “-ing” is more proper than “-in,” and might talk about how déclassé it is to drop the g… though there’s no g to drop. It’s just fronting of a voiced nasal. But in the 17th and 18th centuries, proper pronunciation was “talkin.” If a person had said “talking,” it would have been as wrong as saying “tall-kin.” The pronunciation reverted atavistically to a much older model because the spelling had never changed and because certain schoolmasters considered it more correct to say it the way we say it now. And they taught it, and it stuck.
And this gets us to the real meat of the matter. Is it actually better or more correct to say “talkin”? Not now. Now it’s more correct to say “talking.” We may or may not agree with how or why that changed, but in modern English, the formal standard is “talking.” Why? Because everyone knows it is. And that’s kind of how it works.
Everybody knows that’s the way you say it
Everybody knows i-n-g is “ing”
Everybody knows if you say it wrong
Then you don’t really know a thing
It’s so obvious it’s automatic
Like the silent p in pneumatic
That’s how it goes…
Everybody knows.
Of course, if you don’t like it, you can beg to differ. You can insist on saying “talkin” even in formal contexts, no matter that everyone thinks it’s slangy. Act like they’re the ones who don’t know. Maybe it’ll catch on. Not all language change is gradual and organic; some of it is quite deliberate. And all users of a language have the right to have a say in what usages and rules to keep, and what to discard.
Nonetheless, there is a set of generally agreed-on standards, and some users are generally accorded more authority than others. For us editors, there are undeniably standards that we are here to enforce, and have good reason to enforce, and there are usages that by these standards are undeniably right and others that are undeniably wrong. But it’s not hard-line and simple-minded. It’s organic, and for many users it’s not conscious. There are rules, but there is also flexibility. Some things are right in one context and wrong in another. And some popular prescriptions and proscriptions have no good basis, no truly valid reason to be treated as rules – not according to the standards we supposedly adhere to, nor according to good linguistic sense.
But how can I just stand up here and say that? What are these standards? Where do they come from? Time for a little sociolinguistics.
A standard dialect is a dialect that is widely accepted among speakers in a given country or similar grouping as the most “correct” and accorded the most prestige. It’s what’s taught in schools – if they teach anything. It’s not the only version of the language spoken. There are variations on it by context, and many social or geographic groups speak dialects that are not the standard. “Ain’t you neva hoid a dat?” These other dialects have every bit as much structure and regularity, though the specifics are different. But people who speak nonstandard dialects tend to be thought of as more friendly and more humorous but less intelligent, organized, and affluent, and their dialects are seen as less perfect, less formal, less pleasing, et cetera. Why is this so? Well, people who have learned to speak the standard properly are more likely to come from higher social classes with more money, more education, and so on, because that’s whose dialect is set as the standard.
In English, the standard – which has now split into different standards by country – came to some extent from the nobility in the region around London, but was really given fixity by bureaucrats in the Medieval and Renaissance periods. In French, by the way, the standard was set by royalty, and, as with English, emanated from the capital city; in Italian, however, the standard comes from the recognized great works of Italian literature, which were written in the dialect not of Rome but of Florence. Also, a standard need not have arisen in a purely organic fashion; it may have been deliberately constructed and imposed as such. But a standard always has a connection with education and power. That’s how it gets to be a standard.
A standard, of course, has to have clearly set out means of identifying and prescribing “correct” usage. If it’s a rule, it has to be followable – and teachable. So standard dialects tend to include items and structures that are less natural-seeming than you get in the vernacular, because they are all subject to conscious analysis. Now, that analysis has as its object the natural language (of the “right” set of speakers, of course), but sometimes the analysis is mistaken. And the rules for standards are sometimes deliberately and artificially elaborated in order to make the standard conform to certain fairly simplistic ideals. English, for instance, has had certain “rules” imposed on it on the basis of appeals to logic, classical models, and earlier versions of the language. I’ll be giving some examples anon. A result is that sometimes some people decide the standard has – or should have – features that in fact aren’t in keeping with the way even rich, educated people speak. Features that have a simple-minded appeal but don’t necessarily help communicate or feel, well, natural.
And, on the other hand, the language continues to change in more organic ways, just as fashions and manners change. Standard English may trace to medieval law clerks, but we don’t now speak like medieval law clerks. So what is this “more organic” evolution?
Well, in the main, language is known by the company it keeps. A change is more likely to be accepted generally if it’s been accepted by the people we look up to, want to emulate, want to be with, or feel intimidated by – in short, it’s a status game, and language that is associated with a transactionally higher status is more likely to be retained.
Let me clarify that a bit, though. People may assign higher status – may look up – to persons who are not of a higher socioeconomic class. In fact, some users will very specifically reject any usage that’s too “high-class.” That’s why not everyone speaks the same way even if they’re all exposed to the same educational materials throughout school. There’s a question of identification, of group solidarity. And sometimes people who are not part of a group will take on some of its mannerisms – not just speech but also clothing and so on – if they want to identify with it. Think about all those suburban white kids who try to dress and talk at least sort of like urban black kids. Why? Because hip hop and other urban music has a sort of reverse prestige (well, there is also the huge amount of money those gangstas make).
But “What up, homey” isn’t likely to make the schoolbooks any time soon. It’s part of a dialect that is very pointedly nonstandard – it doesn’t want to be standard; that would ruin it. Nonetheless, certain usages from that dialect may filter into standard speech. It may be that some people who like the music also have day jobs in offices. It may be that it is talked about and represented enough on TV that certain terms are adopted by other users who find them useful. It may be that the formerly counterculture idiom becomes mainstream – today’s annoying youth are tomorrow’s adults, and some of them will be in positions to set the standard. School teachers can often be heard using “cool” as a term of approbation, which used to be terribly slangy. It’s still considered colloquial, but give it time.
And that’s basically how a usage gets into the hallowed halls of power. It’s useful somehow to the right people.
I’ll expand on that a little. People change language for two main reasons: one, to make their lives easier, and two, to make themselves feel better.
A change can make your life easier by making something easier to say – that’s why pronunciation changes. It can save several words – that’s a big reason why we verb nouns: take the verb “cloud,” as in “This clouds the subject somewhat.” It can fill a gap – give us a needed new term. A change can make the language easier to use – English has lost a lot of inflections, and now we find singular “they” gaining a foothold for the same reason. A change can also make the language clearer – and sometimes this actually increases the level of effort.
A change can also make a user feel better. By being fun or clever, for instance. A lot of slang comes about from this, and occasionally that slang makes it into standard speech. But a lot of technical jargon comes about at least in part to make users feel better, too: users want to have a sense of belonging to some important group, which is aided by excluding others. It’s not just for reasons of precision that doctors say “sildenafil is contraindicated in hypertension.” Talking that way also makes a person feel more important and smarter. That’s also how many of the most grating bits of business-speak come about: someone starts to use a new buzzphrase because it sounds important and catchy, and then everyone else wants to use it too.
A change can also make a person feel better by organizing their world more to their liking. This is the origin of certain rules that were intended to “tidy up” the language. Language is a way of making sense of the world, and we like it to work in ways we can grasp. You know how, when you were a kid, if you said something another kid didn’t understand, the other kid would call you dumb? Some things in English have been called dumb by people who didn’t understand how they worked. This leads us to another force for change in a language: error. Misconstrual. Actually, a fair few things that are well accepted now came about through misanalysis. For instance, if you wear an apron to eat an orange or a pea, it is misanalysis that has resulted in your not wearing a napron and eating a norange or a pease. This also comes into play in some of the cases I will bring up shortly.
By the way, a lot of resistance to change also comes from not wanting one’s organized world to be disrupted. And also because the changes are coming from “those awful people” – you know, those idiots who don’t speak English right. Obviously. Because they keep changing it.
Sometimes multiple factors are involved in language change. “Impact” as a verb seems more vivid than “affect,” and it’s a term that important businesspeople are using, and it saves the user from having to remember whether it’s “affect” with an a or “effect” with an e they want.
This also means that sometimes changes happen because someone wants to sound smarter but doesn’t actually know how to use a given word or phrase. And then it catches on. And sometimes someone will make an error, and someone else will see it and think it looks odd and conclude that it must be correct or the person wouldn’t be using it. Insecurity is a great force for change dissemination! Remember how standard dialects tend to use less natural-seeming spellings, usages, et cetera? Boy, that keeps people confused and uncertain and willing to accept weird things if asserted with enough authority! And because English is rather capricious, and many people really aren’t 100% sure, it allows intimidation and arbitrary rules to have a lot of influence. It’s why po-faced hard-line prescriptivists, many of whom are people that just about no one would want to be like or particularly look up to as people, can manage to keep so many people afraid to say things in perfectly natural ways. They have an air of authority. They sound like those teachers that kids were terrorized by. (Actually, I’m not sure if kids are terrorized by English teachers much anymore. I think it’s come to the point where a lot of us are wishing they would terrorize them a bit more…)
But we, we also have authority.
We few, we happy few, we band of editors;
For those today that shed their ink with me
Shall be editors; be they ne’er so vile
This day shall gentle their condition:
And writers and correctors now a-bed
Shall think themselves accurs’d they were not here,
And hold their red pens cheap whiles any speaks
That fought with us upon Saint Crispin’s day.
Oh, sorry. Stratford is down the road? Right, well then. The point is, we’re in a position, as I’ve said already, to have some effect. There are many cases where we just can’t change what is, but there are also cases where we can. For instance, if there is something that is often believed to be incorrect but is nonetheless said and written by many, we’re in a position to endorse it. At least sometimes. And use it, and say, “Once more unto the breach, dear friends, once more, or close the language with our editing dead!” Well, or something like that.
And I think, given all that I’ve just said, that it would make more sense to evaluate change not as right or wrong but as useful or not – keepers and tossers. So what do we keep? I say a change is worth keeping if it lets you do more with the language – if you can express more meaning, express things more clearly or efficiently, have more fun. A change is not worth keeping if it serves mainly to limit what you can do with the language.
Now let’s get down to the meat and potatoes. Let’s talk about some specific “errors” that aren’t necessarily. You know,
It ain’t necessarily so,
no, it ain’t necessarily so.
The proscription of split infinitive
is by no means definitive…
[beat]…I just want you to really know.
Split infinitives! Everyone’s favourite. Now, in Old English, one couldn’t split an infinitive – infinitives were one word. The word “to” was used just before a specific “inflected infinitive” form that was used in a limited set of situations. But during Middle English, a lot of the inflectional endings were lost, including the suffix that denoted the infinitive, and the inflected infinitive form with “to” before it came to be standard. But the “to” doesn’t travel with the infinitive everywhere: we all should to know that. At any rate, certain self-styled English experts who just happened to be in positions of influence decided that if Latin didn’t split its infinitives (of course it doesn’t; they’re one word), then English shouldn’t either, and so the “to” and the root should never be separated. But does this improve the clarity of English? Does it allow for better communication? If we allow so-called “split infinitives,” we can allow different meanings for “to really do something” and “really to do something,” for instance. We can also avoid some annoyingly difficult phrasing on occasion. Here’s an example of a useful split I found in a document recently: “We will keep pushing you to constantly increase your limits.” Move the “constantly” and it can be read as modifying “keep pushing” rather than “increase.” The main value of the supposed rule is that it allows “those who know” to set themselves above “those who don’t.” But does that serve communication? Wouldn’t it be better to look at it case by case and decide where the insertion of the adverb after “to” is just lumpy and inelegant and where it’s elegant and nuanced?
Another very popular bugbear with a Latin basis is the idea that one should not end a sentence with a preposition. This one seems to have had as its most important early proponent the late 17th-century dramatist and author John Dryden. It was also strongly advocated by Robert Lowth, an Anglican bishop and professor of poetry at Oxford, in his 1762 Short Introduction to English Grammar, which has been a prime vector for several of these sorts of ideas; it was used in various adaptations in schools until the early 20th century. This is his pronouncement on the sentence-ending preposition: “This is an Idiom which our language is strongly inclined to; it prevails in common conversation, and suits very well with the familiar style in writing; but the placing of the Preposition before the Relative is more graceful, as well as more perspicuous; and agrees much better with the solemn and elevated Style.” He says this after quoting passages with final prepositions from Shakespeare and Pope, who really were better stylists than Lowth, I think. On the basis of Lowth’s observation the rather more simple-minded proscription became dogma, and although it has long been confuted, it has persisted to a fair extent even to the present day.
Now, can you start a sentence with “but” or “and”? But of course! And why not? But should you? Doesn’t it start a sentence halfway through? Pick it up in the middle of nowhere? Isn’t it simply awful English that proves the user an uneducated lout? Well, if it is, there certainly are a lot of uneducated louts among the best, most revered authors and most literate people in the history of the English language, including Shakespeare and the scribes of the King James Bible. And exactly what harm does starting a sentence with a conjunction do? And what do we gain from forcing people not to use this useful little narrative device?
Mind you, perhaps we shouldn’t hold up Shakespeare as an example. It turns out he made all sorts of horrid, illiterate errors! O most unkindest cut of all! Now how could he get away with a double superlative like that? Well, because there was nothing wrong with double superlatives at that time. It was a form of concord – in many languages, words match each other in form (for instance, masculine adjective for masculine noun), and English used to do this too, not just with genders but also with comparatives, superlatives, and negatives. But we’ve lost the idea of concord, and this use now seems redundant and unnecessary. This was not exactly an organic change – it was decided and taught – but it’s accomplished now. Do we have sufficient energy or motivation to go back to allowing it? You tell me what it would add, aside from occasionally facilitating iambic pentameter.
On the other hand, double negatives do get used. They get used in slang and other nonstandard dialect speech. In fact, they’re almost a marker of it. Sometimes negative concord is used for emphasis and sometimes it’s just the natural usage in the dialect. Here’s one heard on a court TV show: “Didn’t no kid ride no bicycle into no house!” Negative concord was used by Chaucer, and even in slightly less obvious places by Shakespeare: “The man that hath no music in himself, Nor is not mov’d with concord of sweet sounds, Is fit for treasons, stratagems, and spoils.” And negative concord is a grammatical requirement in many languages – in fact, Latin uses it. Latin? Say it ain’t so, Joe! Yes, the grammarians who proscribed some features of English because they weren’t enough like Latin also proscribed this feature, even though it is like Latin. Why? Because it’s illogical – two negatives make a positive, right? Well, remember that the English standard was set by law clerks, and legal defensibility is still an important factor. It’s clear enough that this feature is emblematically not standard English now. You tell me: is there reason for allowing it in standard English? (On the other hand, could we even if we wanted to?)
Oh, and ain’t it funny – whenever we bring in slang, nonstandard dialects, double negatives and so on, we always seem to run into “ain’t.” Now, what is it that some teachers and grammar pedants like to say? “‘Ain’t’ ain’t a word!” Do you know what that means? Yes! It means that those teachers and grammar pedants don’t know what is and isn’t a word! Here’s a tip: if something is used as an independent lexical unit, and the person who says it understands it, and the person who hears it understands it, for them, it’s a word. And if hundreds of millions of people use and understand it as part of their English speech, it is an English word. If they point to a dictionary and say “ain’t isn’t in there,” it means two things: a, they have a crappy dictionary, and b, they don’t know the nature and function of dictionaries.
But! We all know that it’s not proper English. In fact, it’s an important marker, one of the best, for slang – use “ain’t” and people know you’re being slangy, folksy, informal. In fact, it’s had that status since the later 18th century, not long after its use was first recorded. It seems to have come from “an’t,” which is contracted from “am not” or “are not” – but it’s gotten around a lot more, as we know, standing in not only for all negations of “be” but also for negations of “have.” How did that happen? I ain’t got no idea. (Listen! Negative concord! Could I say “I ain’t got any idea” and still sound right?)
Could it be used in standard English? Sure, for the first-person tag question: we say “Aren’t I” usually, or “Am I not,” but “Ain’t I” would be better in some people’s opinions. But would that be as useful as having a one-word marker of informality? Perhaps sometimes we need a word to be nonstandard – the standard needs nonstandard speech too. And it’s so easy to toss it in and set the tone! Isn’t it grand? I mean: Ain’t it grand?
Anyways. Moving on… OK, wait, how many of you cringed when I said “anyways”? And why? The usual response is that “any” takes a singular, so it must be “anyway,” not “anyways.” This, however, is a reanalysis of the word. The s isn’t a plural marker at all. It’s actually a genitive marker – the same s you see on “yours,” for instance. “Anyways” means “of – or by – any way.” We just don’t happen to freely form genitives for that sort of purpose anymore. They only survive in productive use as the possessive form, which has gained an apostrophe through misanalysis. So “anyways” isn’t wrong, actually, and it isn’t ill-formed. But, on the other hand, “anyway” is common and accepted now. Should we just let it supplant “anyways”? You tell me. Keep… or toss?
All right. Oh – is that “all right” or “alright”? How many of you cringe when you see “alright” as one word, a-l-r-i-g-h-t? Now, why is that, when you don’t cringe at “always,” “almost,” and “already”? Well, I can tell you. It was thought for a time – by me too – that “alright” came straight down from Old English “alriht,” just as the other “al-” words had come down. But it seems that in fact the one-word version we know is a backformation on the basis of the two-word version; the Old English “alriht” meant something not quite the same. So when it became common a century or so ago, it looked wrong and uneducated. And it is, therefore – and many of you will know I’m correcting myself on this – an innovation… of a century ago. But! Is there a use for a distinction between “all right,” two words, meaning “all correct,” and “alright,” one word, meaning “OK”? If a mother asks her son how he did on his test, isn’t “all right” different from “alright”? Seems useful to me. Of course, we do have an obstacle. Let me quote The New Fowler’s: “The use of all right, or inability to see that there is anything wrong with alright, reveals one’s background, upbringing, education, etc. perhaps as much as any word in the language.” Using the one-word version tends to have about the same effect as picking your nose in the presence of others. So what do we do? Is this distinction headed for acceptance at long last? Can we nudge it that way if we want? Or are we just shooting our hunting dog if we try?
And on the other hand, if we all get together against a usage, do you think we can decimate it? Wait, wait, wait – when I say “decimate,” what do you understand by it? Reduce to one tenth or reduce by one tenth? OK, what’s the original meaning? Reduce by one tenth. The Romans used it as a military punishment – they would line up the men of a mutinous unit and kill every tenth one. But the much more common meaning now is more in the line of reduce to one tenth. I’m guessing that there’s a greater need for that meaning, and the word perhaps seems to modern eyes to mean that. Certainly that was the first meaning I knew, and it was years – quite a few years – before I learned the original sense. So, now, what do we do here? Are we going to go around smacking everyone who uses it in the newer sense? But remember that there are many words that have changed meaning in English. On the other hand, there are people who know the original meaning and prefer the word for that, so we have to be aware of that when thinking of keeping the newer sense. Or we could get rid of it. But replace it with what? Is there a truly suitable equivalent waiting in the wings? This is where we earn our money, folks…
OK, I’ll go to an easier issue. What’s the rule in English for the indefinite article – where you use “a” and where you use “an”? It’s “a” before any consonant sound – so “a universe” – and “an” before any vowel sound – so “an hour.” The article is always determined by the pronunciation. This is one of the most consistent, reliable, and inflexible rules in standard English. In fact, we all know that someone who drops his h’s adds the n – “an ‘orse” versus “a horse.” Only in some nonstandard dialects do we hear “a” being used everywhere – a famous one is from Oliver Twist: “The law is a ass.”
Where am I going with this? Well, there’s a historic problem. I mean that in more than one way. There was a time in English when word-initial h’s were quite commonly dropped under the French influence. When the pronunciation was restored, many printers still went with the old usage and treated words beginning with h as beginning with vowels, and put “an” before them. This practice gradually fell by the wayside… except it has hung on with a small few words in some circles, and with just one – well, two – fairly widely: “historic” and “historical.” Some people insist ardently that “an historic” is correct, and may even justify it by saying that the “an” causes the h to disappear or that the h is really not fully there. Well, now, if we say “historical,” it is there, and more importantly, the word determines the article, not the other way around. This word is a good example of how the exceptional forms – “marked,” as linguists call them – are often assumed to be correct. It is an exception that has been learned, and if you haven’t learned the exception, then you’re not the right sort. Well. This is shibboleth thinking pure and simple and adds exactly nothing to the language. I stand on this issue where style guides and usage experts have stood solidly for more than a century, even if ignored by some self-appointed experts: “a historic.” But it’s an interesting illustration of how long it can take for some things to process through language – a majority of some 200 speakers I polled still think “an historic” is more correct, even if they use “a historic” usually.
Say, this is a fun subject, isn’t it? …Does anyone object to the phrase “a fun subject”? I’m told that in formal use, “fun” can only be a noun. Frankly, any kind of formal use like that doesn’t sound very fun at all to me anyway… Can someone tell me what English gains by disallowing “fun” as an adjective? I think the language wouldn’t be quite as fun without it. It would also be stuck with “fun-filled,” “enjoyable,” or “entertaining,” the replacements suggested by the Oxford Guide to Canadian English Usage. But “fun-filled” sounds like adspeak, “enjoyable” is more moderate, and “entertaining” is more from the observer side than the participant one. Who here feels that readers would react badly to “fun” as an adjective?
Conversion is actually one of the great glories of the English language. We have a great ability to convert nouns to adjectives, adjectives to nouns, nouns to verbs, verbs to nouns, adverbs to adjectives, adjectives to adverbs… And most of the time we don’t need to add an ending at all, though we often do when making an adverb. For instance, take the word “slow.” In Old English, adverbs were made by adding -e to the end, so the adverbial form was “slowe.” In Middle English, short, unstressed, vocalic inflectional endings were lost; to form new adverbs, we had to use the -ly that came from “lic” meaning “like.” So “slowe” became “slow,” as in “go slow.” Yes, that’s right; the -e was lost and “slowe” became “slow.” Got a problem with that? Shakespeare didn’t – “how slow this old moon wanes”: A Midsummer Night’s Dream. Oh, “slowly” existed already as well. So it was available for use. And still is. And I think most of us would agree that it’s probably useful to maintain the slow/slowly distinction in formal writing. But we might want to be a bit less eager to pounce on it and dance all over it with army boots when meeting it in less formal contexts where it’s idiomatic.
When I started talking about conversion, I bet you thought I was going to talk about verbing, didn’t you? Well, I was. I just got off to a slow start. Verbing has been the subject of much vituperation. Many of our most-hated innovations are conversions of nouns to verbs. (“Verbing” is itself such a conversion, of course.) But if we got rid of every verb that was originally converted unmodified from a noun, we’d have to hack out a pretty big part of the English language. If you wish to silence verbing, or elbow it out of the way, or just to distance yourself from it, you could chair a meeting or telephone someone – but you’d have the problem that silence, elbow, distance, chair, and telephone were nouns first. Along with quite a lot of other common verbs, if we want to head in that direction…
But hopefully I won’t need to. Oh, which brings me to sentence adverbs. Frankly, I don’t see what the fuss is about. But, sadly, there is a fuss, so, clearly, I need to address it. This is another area where failure of syntactic analysis has led to rejection by some people. “Hopefully is an adverb meaning ‘with hope,’ so it must apply to the verb!” Well, no. When I say “hopefully I won’t need to,” obviously the needing is not what is hopeful, and when I say “frankly, I don’t see what the fuss is about,” it’s not the seeing that’s frank, and when I say “sadly, there is a fuss, so, clearly, I need to address it,” I’m not saying that the fuss occurs sadly or that the address is what will be clear – although I hope the address is reasonably clear. Seriously, sentence adverbs have been around at least since the 17th century – “seriously” was used as one in 1644. The animus towards them has only been around since the 20th century, and only really caught on in the 1960s, and has been focused mostly on “hopefully.” But when someone goes around saying some point of usage is wrong, isn’t it funny how so many people assume they must be right? But it just ain’t so. Sentence adverbs are a great way to set the tone of a sentence – to clearly and efficiently preset the attitude towards the action described.
But can we use them? Well, of course we can. May we? OK, how many people actually maintain the can/may distinction all the time? Tennyson didn’t. And he was a famous poet. The Oxford Guide to Canadian English Usage notes that “the may/can distinction is a traditional feature of elementary-school education, rather like the use of ‘Sir’ in the military.” (Johnny: “Can I go to the bathroom?” Teacher: “Can? Can?” Johnny: “OK, can I go to the can?”) But we’re not in elementary school now. And among adults, “can” is quite commonly accepted for matters of permission. Which doesn’t mean that “may” isn’t preferred in formal documents. But “may” also has a sense of possibility. Care and attention to the individual circumstance are the best way to ensure maximum clarity.
There are some other word pairs that are likewise bugbears of a certain class of sticklers. “12 items or less.” “No, no, no! That must be 12 items or fewer! Oh, how stupid these grocery store people are!” But the use of “less” with countable objects actually comes to us from Old English. We have citations of this usage from AD 888. Caxton used it in 1484. The noted scientific journal Nature was using it in the 19th century. I think it very likely that the scientist who wrote “The determination of position in the given manifoldness is reduced to a determination of quantity and to a determination of position in a manifoldness of less dimensions” in 1873 was more, not less, intelligent than most people who sniff at the signs in grocery stores. And if it was good enough for the American Journal of Philology in 1904 (“less flowers”), it really ought not to be treated as a sign of incredibly poor breeding. That doesn’t mean I think the distinction between “less” and “fewer” isn’t worth maintaining. But it’s not as clear-cut as we sometimes like to think it is – some things that can be thought of as countable can also be thought of as mass objects, for instance – and we can certainly tone down the indignation quite a bit.
The lie/lay distinction is another one I think is worth maintaining, by the way. But just so you know, “lay” has been used to mean “lie” since the 14th century and was quite commonly used in that sense in the 17th and 18th centuries. So if it’s an error, it’s at least not a new one!
And then there are sentences like this one. Wait – what’s wrong with that sentence? Some people will leap on it right away, and some people will smile nervously like they know what’s wrong, and… OK, who doesn’t like it? What’s not to like? Sentences like that are very common and are perfectly well understood. It’s not like I’m saying anything abnormal or especially convoluted.
OK, I’ll be fair. I just treated you to two different uses of a given word that are disliked by many. What’s the word? Like. Let me look at them separately. The first is in place of “such as”: many people will insist that “sentences like this one” means “sentences resembling, but not including, this one.” The problem is that nobody would construct a sentence like that to mean that. They would at least add an “other” or something like that to clarify it. “Sentences like this one,” wherever used, is used to mean “sentences of which this one defines a type.” The Oxford Guide to Canadian English Usage argues that there’s a difference between “a friend such as Paul,” wherein Paul merely illustrates the category, and “a friend like Paul,” wherein Paul defines the type of friend. I think that that’s a fair analysis – for most people, using “such as” rather than “like” in these sentences relegates the object to the status of one example among many possible, whereas “like” is used to give it preeminence. And, frankly, this usage is so widespread that resistance to it belongs almost entirely to the “It is I” set. (Oh, I wasn’t even going to get into “It is I.” Even English teachers don’t use that anymore. Don’t tell me that “It is” necessarily takes a subject complement. If it did, most educated people would naturally say “It is I.” You might as well tell a bird it’s not a bird.)
The other disliked use of “like” that I used is where “as though” is preferred. As in “Say it like you mean it.” Now, in this specific case, the use of “like” really is still limited to informal use, to the extent that it can be used as a sort of marker of informality. So our choice is a little clearer on that. But I would be interested in a persuasive argument as to why, other than tradition, “like” couldn’t be used there. In fact, “like” has been used as a conjunction for centuries. There are other uses of it as a conjunction that are, if anything, better than alternatives: “He treated her like he treated the others” can’t be misunderstood like “He treated her as he treated the others,” which could be read as saying the treatments were simultaneous rather than resemblant. In that specific case, you could of course say “He treated her in the way he treated the others” – or, even longer, “…in the way in which he treated the others” – but that is longer. And again: Chaucer, Shakespeare, Marlowe, Addison, Keats – they all used it. It’s true that some usages by earlier authors are not used now. But this one is. So why just decide it’s wrong when it’s so commonly used and understood?
Perhaps I should just focus on usages that are more unique. Well, but “more unique” isn’t – it’s very common, and has been, again, since the early 17th century. We know what the word originally meant, and what it still can mean, but this wouldn’t be the first case of semantic broadening. For instance, since I keep talking about birds, “bird” used to refer only to a small fowl. So some things we call birds now weren’t always birds. “Nice” went from “ignorant” to “foolish” to “overrefined” to “refined” to “good.” Now, it’s a valid question whether we actually want “unique” to lose its uniqueness and mean simply “unusual” or “uncommon.” But we do have to be aware, at the same time, that “unusual” and “uncommon” can differ in tone from “unique” – and that the horse may actually be much too far out of the barn on this one for us to usher it back in.
Well. I could talk till the cows come home… Oh. “Till.” How do you spell it? Apostrophe-t-i-l? How about t-i-l-l? Did you know that the word “until” comes from the Old Frisian “und” meaning “up to” and the preposition “till” meaning “to”? And that that preposition, “till,” t-i-l-l, also came into the language without the “un” and stuck around? “Till,” t-i-l-l, has been a preposition in English for as long as English has been English. The originally somewhat redundant “until” has also been around as long, of course. But when we say “till the cows come home,” that “till” didn’t begin life as a clipping of “until.” Here’s another case where misanalysis has changed the language. Now we have the apostrophe-t-i-l spelling. But we also still have the t-i-l-l spelling. It’s still used, though more often in England, and “until” is preferred at the start of a sentence. Which is not to say apostrophe-t-i-l is unacceptable; it’s become part of the language, since those who use it really do have it in mind as a shortening of “until.” But it’s less formal. Go figure.
I’ll just close with something that has recently gone up on a building across the street from where I live. The building is the Hummingbird Centre. Well, no – now it’s the Sony Centre For The Performing Arts. Did you hear the capital F and T on “For The”? On every bit of signage. We all sit around with our capitalization rules, sometimes debating but always eagerly enforcing, and meanwhile much of the world goes on ignoring us. Now, many of you will know that I don’t favour capitalizing words just because they seem important. I don’t think it adds a thing to the English language – in fact, it makes something fuzzy that could and should be clear. But this case isn’t that. This is title capitalization. All the words are capped not for emphasis but because it’s a title. Now, we know that you’re not supposed to do that. But a lot of people never quite got that sorted out in school. And Microsoft Word doesn’t know it at all! And where we once would leave out articles and prepositions in acronyms, the practice is increasingly to put them in. What’s happening this coming Sunday in Queen’s Park? Word on the Street. What’s the initialism used for it? WS? No, WOTS. Meanwhile, when people do lower-case some words in title case, one of the words they very often lower-case is “is.” Because it’s such an unstressed function word. It’s less prominent in many sentences than the prepositions are. Some people think it is a preposition! I kid you not! Now, that tells us about the poor level of linguistic understanding many people leave school with. But let’s get back to the question of what words in a title should be capitalized. Actually, I just want to end with this question – and perhaps use it to start off the discussion period. I don’t think this is a question we can ignore. I see this shift happening. In fact, it might happen no matter what we do. So what is the value of leaving those little function words uncapped in titles? What does it add to the language? What would the language lose if they were capped?
Oh, it ain’t necessarily so…
What is and isn’t necessarily so?
When you make a revision,
you can make a decision
on what can stay and what can go.
…So you might as well be in the know.
Twenty years ago, the year after which I retired, I taught a class on Machiavelli, and, during the entire semester, was unable to purge “alot” from class papers. It’s firm in the language, and now I don’t try to lead my grandchildren to what I still consider to be a better choice. Thanks a lot.
But at least “a lot” isn’t wrong. And actually I don’t see “alot” as much as I used to — maybe I’m just not looking. I do see “a lot” a lot. (I definitely prefer “a lot.” And one is allowed to exert such influence as one has on language for purely aesthetic reasons!)
A nice, punchy list of some non-errors, including several I didn’t address: http://public.wsu.edu/~brians/errors/nonerrors.html
In OE it was “til” and that spelling is still found. From The Century Dictionary and Cyclopedia:
til
A simplified spelling of till.
An old spelling of till.