Conversations about literacy and illiteracy in South Africa that I followed on social media in November 2017 included a thread in which people argued over whether South Africa, Zimbabwe, Lesotho or Ethiopia was the most literate country, with the first three scoring in the 90s for their percentage of literate people, as recorded in the annual United Nations literacy tables.
Then, in December 2017 the focus of conversation shifted as disaster appeared to strike with the release of the latest results of PIRLS (the Progress in International Reading Literacy Study). PIRLS tested a selection of Grade 4 children across the country in 2016 and compared their results with those of 50 other countries where the test was also run. The one glaring claim that kept getting repeated and endorsed in the conversations online was, as PEN South Africa tweeted, “78% of Grade 4s in SA cannot read for meaning”. Journalists, business-people, politicians and academics joined in the (briefly voiced) widespread lament: “This is heartbreaking. A culture of literacy is essential for our children to succeed.”
The Mail & Guardian’s (December 2017) end-of-year Cabinet Report Card gave the national Minister of Basic Education a D rating, largely because of the PIRLS results, adding: “This is shocking…”. The Sunday Times (5 December 2017) and eNCA radio news (5 December 2017) ran the same story, and the Daily Maverick (6 December 2017), under the heading ‘Educational Shocker’, added: “the students in question failed to meet the lowest literacy benchmark of the study: retrieving basic information from texts to answer simple questions. To put this into global perspective, only 4% of students internationally were unable to reach this benchmark, as opposed to South Africa’s 78%”.
But there are problems with these tests and with what they can tell us. The tests work with greatly limited constructs of both literacy and language, and they are neither sensitive enough nor grounded enough in actual classroom literacy practices to be of any use beyond pointing to what we all already know in broad outline: schooling is a problem in South Africa, characterised by systemic inequalities, widespread inefficiencies and national curricula that are blunt instruments, responding to the differences and particularities that characterise schooling in South Africa as if there were no differences.
How literacy is construed and understood determines what is taught and measured. How these are done, including what definition of literacy is assumed and what is valued in literacy activity, shapes the answers to any questions about the literacy of South Africans, adults and children alike.
That is why we have such widely diverging claims about literacy levels: well above 90% for the general population in UN data, yet 58% of children unable to “read for meaning” in PIRLS 2011 and 78% in 2016. At the same time, research, particularly from educational economists at Stellenbosch University, has been showing us for at least a decade that around 20% of South African children do very well on a whole range of tests in literacy, language and mathematics while 80% do very badly indeed, and that this split coincides with larger social divisions: between households of different socio-economic status, between elite and sub-elite schooling, and between children who attend school as first-language English speakers and those who attend as second-language English speakers in a context where performance in Standard English is taken as normative.
It is thus not that we did not know about the serious problems within educational provision in South Africa, along with their links to the pervasive social inequalities in the country. So why the shock now about the PIRLS tests? And why does the idea of literacy continue to work as such an emotional red flag when waved in public and academic discourse?
The PIRLS data is based on a test devised in Boston in the US, where children read two passages and then answer questions on them. The tests are exercises designed to focus on the so-called comprehension skills of retrieval, inference, interpretation and evaluation. They are based on the assumption that these skills are context-free, individually based but uniform and universal mental processing activities which can be reliably tested and compared across widely diverging socio-economic, sociocultural and sociolinguistic contexts, and which can provide a reliable basis for drawing conclusions about students’ ability to “read for meaning”.
Implicit in its design is a construct of language as a neutral and transparent conduit in a mentalist coding, decoding or translational model of language-based communication, and of the reader as a simple social subject who is either competent or incompetent at coding and decoding skills and at meaning-taking and -making. Literacy is a simplified and compacted construct in such exercises, streamlined for administration and for measurement. But meaning is not simply contained and coded into graphic marks that can be decoded one by one to produce meaning.
Meaning is also coded into the genres of writing, the materials used, the various other representational resources that have particular social meaning, and the wider social context that shapes particular kinds of textual production. Meaning is always a process of co-construction between reader and text, where the meaning taken is at least partly shaped by the meanings that readers bring with them, influenced by their background, their prior experiences and their interests.
Where the meanings made and taken seem to be of an obvious and inevitable kind, that is because the rules of engagement are already thoroughly rehearsed and embedded in that particular setting. So, while purporting to test children’s individual literacy skills, the PIRLS and its like are more tests of whether the children’s experiences of schooling match the unexamined or unstated assumptions of the tester as to how schooling is done, or should be done.
In contrast, we need to understand the ways in which schools in specific social spaces organise themselves through particular ways of relating, where literacy teaching and learning happen as instances of the workings of these settings. For example, what is the nature of teacher-pupil dialogue in the classroom in relation to literacy? What gets tested as literacy in such tests is more a skill in a particular kind of verbal explanation, learnt mostly through classroom spoken interaction. It is not literacy that fosters this skill but rather other aspects of particular kinds of schooling, such as those where teachers ask questions like “What made you give that answer? How do you know?” and then get students to elaborate on their views.
Schools of certain kinds develop students’ abilities and habits in answering questions of a general sort, often in relation to a world outside of school that students have mostly not encountered except as subject matter in classroom discourse. These strategies are learnt within particular systems of activity. They are generally neither learnt nor practised in school settings where students (and sometimes teachers) do not already have fluent or emergent access to the standard language resources that are required for these kinds of exchanges.
In such cases, which include the mass of South African schools, classroom dialogue is focused more commonly on the surface features of language and literacy coding and decoding. What counts as literacy and “meaning-making”, then, is not a generalised competence (eg being able to speak English or “code and decode letters” or “make meaning”) but a situated, communicative competence embedded in acquired cultural knowledge and learnt models of using situated language in specific ways, drawing on varying histories and different rules for socially interacting, for sharing knowledge and opinions, and for reading and writing.
The language of the PIRLS tests is a problem. The PIRLS passages used in South African tests are drafted in Boston and adapted to regional English varieties elsewhere or translated into other standard national languages, on the mistaken assumption that every child speaks a standard language which reflects their ethnolinguistic identity, place of origin or current location.
The South African implementers translate the passages from US into UK Standard English and then into the remaining 10 South African standard languages. The assumption is that South African children will have most ease in reading and responding to these passages in the one standard South African language, among the 11 official languages, that is identified as each child’s “mother tongue”, and that the translated passages in the other standard South African language versions are equivalent to, or carry a commensurate comparability with, the English original.
Among other problems with this procedure is the notion that students are at ease reading in the standard language identified as their “mother tongue” and that such “mother tongues” are unified and homogeneous resources carried by individuals.
In Khayelitsha, in Cape Town, where isiXhosa has dominance as the denotational code recognisably closest to how most people are speaking, a teacher noted the variability of the actual “home language” of both students and teachers, as “mixed with ilanguage yamacoloured, amaXhosa and the white”. The dynamic local languaging of people in such settings spills over, bypassing the standard to absorb diversity and unpredictability, in a frame of language as socially practised rather than as a systemic resource with autonomous structures, consisting of a core and of lesser-status dialect offshoots. Languaging practices here are shaped by people and things that are carried in and out of these spaces and are assembled in situ to form languaging resources that are diverse, unpredictable and opaque to outsiders. The same dynamic applies at a more intensified level in other urban contexts, where the varieties of linguistic backgrounds and influences are even more diverse.
The administrators of the South African PIRLS tests will not let researchers examine the original or the translated test passages used across the designated 11 South African languages, on the grounds that the tests and the text pieces they used have to be kept confidential in case there is a reason to use them again for testing purposes. As a result, the widely publicised claim that 78% of South African students can’t “read for meaning” is a research claim that cannot be tested, despite strong reservations that the translated passages might be problematic as instruments for testing in South African multilingual contexts. (That alone should be a problem for the validity of the claims.)
The two examples of text passages that the PIRLS centre in Boston gives for the 2016 tests are a narrative passage about a father who bakes an “enemy pie” to teach his son how to make friends with another boy he regards as an enemy, and an “informative text” about the study of fossils that first led to the concept of dinosaurs and their presence on earth long ago. How do a story about a father baking a pie to teach his son about making friends and a discussion of dinosaurs and fossils get translated into South African contexts and languages? How many Grade 4 students would follow a discussion of fossils and dinosaurs in Standard isiXhosa, isiZulu or one of the other “official” languages identified as their mother tongue? What words would be used for these, and how many children would recognise them? Just how far from, or close to, children’s actual language use are these passages? What alternatives did the testers in Pretoria devise, and how were they applicable to multiple, diverse settings around the country?
The emphasis on coding and decoding of letters, words and sentences in a standard language which is thought to count as literacy by many educators, policy-makers and testers is a reification that perpetuates unequal outcomes. To the extent that the formal scheme of literacy instruction and testing makes no allowance for the complex local practices upon which it is parasitic, it fails both the intended beneficiaries and its designers. DM
A draft of the longer paper upon which this piece is based, along with references used, can be viewed here
Mastin Prinsloo, Professor, School of Education, University of Cape Town
Photo: Teacher Reginald Sikhwari poses for a picture with his class of grade 11 students at Sekano-Ntoane school in Soweto, South Africa, September 17, 2015. REUTERS/Siphiwe Sibeko