To read or not to read, is that the question?

Every so often there’s a little bit of chatter on Twitter about reading, such as, can everyone do it? Before you can even consider this question, surely you have to define ‘reading’?

I think it’s a bit of a useless exercise to define ‘reading’ without considering what constitutes ‘communication’. Communication can be seen as the transmission of information, whether verbal or non-verbal. Beyond that broad definition, there is little general agreement on a whole host of related issues. Swathes of papers have been written on variations of this subject, many of them quite controversial. Are humans the only species to have symbol awareness? Do you need sound/symbol awareness to communicate? Can language be acquired without linguistic input? Is language inherited? Is it species-specific to humans?

My first ever piece of writing at Master’s level was a critical evaluation of Goldin-Meadow’s (2007) paper ‘The challenge: Some properties of language can be learned without linguistic input’. She argues that children have a cognitive bias towards language development, based on her research with hearing-impaired children. I’m not sure it’s a ‘done’ thing to publish university assignments on the internet, even when it’s one’s own work, so I shall just copy and paste some extracts in italics – simply because I can’t be bothered to rewrite parts I want to quote.

Language is part of a system of communication that utilises arbitrary symbols representing objects or events; the organisation of these symbols to convey meaning; and the systematic arrangement of elements within the language that gives a structural dependency. The study of human language is divided into different topic areas: the meaning in language (semantics); the system of sounds used (phonology); the arrangement of elements within sentences (grammar); and the appreciation of how to use language to get things done (pragmatics). Grammar covers both syntax, the way that words are put together to make coherent sentences, and morphology, which is the underlying structure of words and their units of meaning (morphemes). Morphemes can be combined so as to change the meaning of the base form, for example, to change tense or move from the singular to the plural.

Hulme and Snowling (2009) discuss two opposing views regarding language acquisition. They state that for many years the dominant view was that language learning was dependent on innate linguistic structures which allowed for specialised mechanisms that pave the way for the abstraction of grammatical ‘rules’. The alternative view is that linguistic input is critical for language learning, dependent on abstracting regularities gradually over time.

What happens, though, if a child is deaf? How does that fit in with the latter view? Many speech and language therapists and teachers believe that language is critical for reading. There is a line of thought that ‘slow readers’ are slow because they come from language-impoverished environments and/or are never read to. I’ve always had a bit of a problem with this reasoning, since such families simultaneously stand accused of using the TV as a full-time babysitter. I can only assume that critics are suggesting the volume control is broken so the sound is never on?

Iverson and Goldin-Meadow (2005) studied the role of children’s early gestures in language acquisition, concluding that gestures precede speech, are tightly related to language development, and provide a communication tool for children who cannot yet express themselves verbally. They consider that gesture may also be employed as a way of forming a visual representation of a task that a child is on the cusp of mastering, before the child has the ability to give verbal explanations. Children were observed creating a systematic patient-act-actor way of gesturing, rather than simply miming an action. This form of segmentation and combination was viewed as having the hallmarks of a linguistic system; moreover, these types of gestures were not used by hearing people. Goldin-Meadow takes this as evidence of a cognitive bias, since the children could not have drawn on linguistic input to create such language-like communication.

The interaction between the environment and an innate ability for communication is better explored in an article by Senghas and Coppola (2001), who consider that in an impoverished environment, innate characteristics become more evident. By looking at two successive cohorts at a deaf school in Nicaragua, Senghas and Coppola aimed to establish whether a functional language evolves through the innate abilities of children or through the more mature cognitive abilities of adults. As there was no previous sign language available, the learners converged on a rudimentary sign language that incorporated their individual gesture systems. The study showed that it took multiple cohorts to adapt and stabilise the sign language: while the first cohort systemised their resources, it was the second cohort of children who built the grammar by increasing spatial modulations, thus improving shared reference and specificity. This could be evidence of children ‘helping’ adults by ensuring that a sentence has fewer possible meanings, thereby working on aspects of semantics and pragmatics as well as morphology.

Essentially, deaf children who had no previous instruction in sign language quickly developed their own by pooling resources, and this was then rapidly adapted by incoming children to evolve such features as past, present and future tense. As these features are required to meet the definition of ‘language’, sign language should therefore count as a language in its own right (a controversial view in some arenas).

Much of the above can be applied to the written word and reading. What are words if not a collection of symbols whose order determines their meaning? Pushing the argument a bit further out, what makes the difference between, say, a pictorial language such as Chinese and the pictorial language of PECS (Picture Exchange Communication System)? If you can ‘read’ Chinese, can you not be considered a reader if you communicate through PECS? I cannot profess to be fluent in PECS as it has never formed part of my teaching; however, if children can spontaneously develop grammar in sign language, I can only assume that grammar would also have a role in PECS, even if it emerges at a later stage (its adaptation, though, may be linked to cognitive ability). So, when presented with the question ‘can everyone learn to read’, I would say, pretty much everyone. If I were asked ‘can everyone learn to read an alphabetic language such as English’, then the answer would have to be rather fewer.

But is reading simply the ability to decode text? I can read English fluently but, perhaps most importantly, I can understand English fluently. If asked ‘can everyone learn to read’, do you actually mean ‘read and understand’? The two do not necessarily go together. I can read French fluently, but I understand it far less than I understand English. I can also read Latin fluently, but I understand it even less than I do French. If you explain the sounds that accompany the dots and wiggles in Norwegian, I can probably read that fluently too, but understand none of it. I would be doing nothing more than ‘barking at print’. Likewise, individuals (usually those with ASD) who are hyperlexic demonstrate a precocious ability to read which is not matched by an ability to understand. They may be able to define individual words but not explain the meaning of the sentence. One working definition of hyperlexia is an above-average ability to read with a below-average ability to comprehend. I personally disagree with this definition; I prefer a discrepancy model of 1.5 to 2 standard deviations between the two scores, rather than from the norm. That makes more sense to me, since an early ability to read does not in itself indicate exceptional ability. I have to admit to bias here, as I’ve seen precocious reading ability first hand, including in the absence of speech. Similarly, I know many autistic people who sing songs word-perfectly but only have functional speech at the two- to three-word level. I would therefore argue that ‘to read’ must also include the ability to understand. This then has to include reading PECS, if the reader is demonstrating understanding.

So, in short, if you’re going to ask me if everyone has the ability to read, you may get a longer response than you’re hoping for!

EDIT: I’ve always quite liked this on the subject of communication – although it is VERY long


Goldin-Meadow, S. (2007) ‘The challenge: Some properties of language can be learned without linguistic input’, The Linguistic Review, Vol. 24, Issue 4, pp. 417–421 [online] (accessed 24 November 2011)

Hulme, C. and Snowling, M. J. (2009) ‘Specific Language Impairment’ in Developmental Disorders of Language Learning and Cognition, West Sussex: Wiley-Blackwell

Iverson, J. M. and Goldin-Meadow, S. (2005) ‘Gesture Paves the Way for Language Development’, Psychological Science, Vol. 16, Issue 5, pp. 367–371 [online] (accessed 24 November 2011)

Senghas, A. and Coppola, M. (2001) ‘Children Creating Language: How Nicaraguan Sign Language Acquired a Spatial Grammar’, Psychological Science, Vol. 12, Issue 4 [online] (accessed 24 November 2011)
