PAPUA NEW GUINEA - BUAI DIGITAL INFORMATION PROJECT
Community Teacher Education in Papua New Guinea
Chapter 7 - Improving Teacher Education: Political Intrigues
COMMENTARY ON CHAPTER 7
This chapter documents a sad and hurtful episode in PNG teacher education.  It needs to be documented so that those responsible for educational policies will be alert to the dangers of ignoring the needs of those they are supposed to serve.
Improving the quality of teachers appears to be pivotal in enhancing the quality of education in a developing country.  This was the rationale for a Basic Skills program and examination.
The evidence indicated that student teachers possessed a tenuous grasp of English and Mathematics, and a Basic Skills program was initiated to address the problem.  The associated examinations were neither valid nor reliable, and they caused severe disruptions in college life.  Though accumulated evidence revealed that the examinations promoted rote learning and teaching best described as "spoon-feeding", the Division of Teacher Education failed for a lengthy period to respond to the detrimental effects these examinations were having on the education of student teachers.  After some time it appeared that the program was, in reality, a vehicle aiming to emasculate the independence of the church agency teachers' colleges.  Research sadly concluded that after 150 hours of instruction, students scored significantly lower than they did on entry to the teachers' colleges.  The Basic Skills program had a deleterious effect on the students' learning.
Chapter 7 - Improving Teacher Education: Political Intrigues
ENGLISH: A HURDLE FOR LEARNING
As has been discussed in the previous chapter, teaching and learning in English is a major problem in educating students in Papua New Guinea.  It is official government policy that the language of instruction in all PNG schools and colleges is English (Department of Education, 1976, p.215), even though "one of the greatest frustrations to efficient learning and understanding is the presentation of complex material in a language which is not the mother tongue of the students" (Lewis, 1974, p.63).  Moreover, Kenehe, as chairman of a nationwide inquiry into educational standards and their perceived decline, concluded:
ADDRESSING THE PROBLEM
A direct way of addressing the problem is to develop the English language competency of future teachers.  Such a perspective was adopted by the Principals of the eight preservice teachers' colleges at their annual conference in 1979.  Concern was expressed that despite the Principals adhering to careful selection procedures, student teachers accepted into colleges had unsatisfactory competencies in both English and Mathematics.  Indeed these were so low that the "task of training them to be capable teachers for Community Schools appeared an almost impossible one" (Wingfield, 1987, p.3).  Consequently, the Principals proposed to develop minimum performance standards in these two subjects, and to test students on entry in order to assess their achievement of those standards.  For those below standard, a one-semester intensive basic skills course, consisting mainly of remedial work, was proposed.  At the end of this program students would be required to sit for an examination.  Students who failed to achieve the minimum performance standards would have their government scholarships and training terminated.  In order to maintain comparable standards over the eight colleges, the examinations were to be administered by the staff of the Teacher Education Division.
This was the rationale for the national Basic Skills program.  What developed was not congruent with the vision articulated by the Principals.  Indeed, the Basic Skills course and its examination promoted an unparalleled degree of animosity, conflict and resistance between the independent church colleges and the Ministry of Education's Teacher Education Division.
THE FIRST STUDY
The issue of Basic Skills, along with others, was researched as part of a major research project for the Ministry of Education: Teacher Education Research Project: 1987 - 1988 (McLaughlin, 1988).  Fifty-three lecturers and administrators were interviewed.  They were representatives of staff in all eight of the preservice teachers' colleges and officers of the Teacher Education Division.
The major techniques used in the research were document analysis and interviews.
Documents reviewed included college course outlines, handbooks, minutes from Academic Advisory Committees and Governing Councils, correspondence, basic skills test instruments, analysis of data presented by college staff and other research documents concerning teacher education in PNG.
The interviews were loosely structured and enabled participants to voice their concerns within the general inquiry questions.  As a means of ensuring validity, draft copies of the final report were sent to all colleges in order for respondents to comment upon the authenticity of interpretation.  Their replies were incorporated into the final draft.  [Full details concerning the research design are recorded in McLaughlin (1988).]  Principals at their combined conference in 1988 unanimously moved that the final report be published.  (Officers from the Teacher Education Division believed it contained bias and opposed its publication.)  It was published.
It should be noted that the Government has direct control over one college.  The rest are administered by church agencies.  This means that church agencies nominate their own principals and control the selection of staff and students.  Although there is a national curriculum, colleges supposedly have independence in its interpretation and implementation.  The government contributes almost the full running costs of the colleges.  The agencies have provided multi-million-kina college facilities.  Except at the government college, expatriate staff (missionaries and volunteers) are on local national wages.  Overseas contract officers are on wages three or more times those of their national counterparts.
CONCEPTUAL BASIS OF THE BASIC SKILLS PROGRAM
It was very appropriate to explore the Basic Skills program, as this issue generated considerable and indeed vehement comment among staff across all the teachers' colleges.  All staff interviewed acknowledged the need for a basic skills program:
However, a large majority of experienced lecturers and administrators had very strong doubts about the validity of the conceptual basis for the Basic Skills program.  Repeatedly, it was voiced that the emphasis on skills to the neglect of the language context was to ignore current theory in language development (Bell, 1981; Lynch, 1980; Stern, 1983).  Most of those involved over the years in the Basic Skills program expressed doubts about the value of the program's effectiveness.
A number of English studies lecturers expressed the view that for a Basic Skills program to be effective, it must be based in a meaningful language context, and not on isolated skills exercises; that it should be integrated, incorporating a PNG literature and language component and that it should be extended to 12 months or more.  It was a very common view, though not unanimous, that the students did not achieve the overall objective of the program, despite what test results indicated.  When asked what benefits the students derived from the program, an experienced Principal responded:
Most lecturers involved in the English Basic Skills expressed similar or more guarded opinions, although a few suggested that some improvement had been noted, though not as much as would have been hoped for, given the energy expended on the program.
From another perspective, officers from the Division suggested that it was an over-expectation to hope that students:
The research literature (e.g. Stern, 1983, pp.497-521) suggests that this retort is simplistic.  The ability to use language (including writing) occurs in a context of an array of variables (Stern, 1983, p.500).  Lynch (1980), in his explanation of the difficulties that young students have in learning in English at the University of Papua New Guinea (UPNG), noted that any effort to improve students' facility in English is superficial if it concentrates on the conventions of language (e.g. skills) while neglecting the conceptual basis of that language.  An analogy might assist in clarifying the problem.  A child may have numerous sores all over its body.  One way to treat the child is to apply an antibiotic cream to the affected areas (remediation of diagnosed weaknesses).  Some short-term improvement may occur.  However, if the child is suffering from malnutrition, which is the real cause of the sores, medication will not solve the problem.
Another point made by a Division representative for consideration, was that some lecturers were possibly against the Basic Skills examinations because they might interpret that failed students represented a slur on their own teaching ability.  This comment prompted an objection from a Principal of a teachers' college.  He argued that this observation implied that the Division had little faith in the professionalism of its officers, an implication he rejected (Simpson, 1988).
SOME NEGATIVE EFFECTS OF THE BASIC SKILLS PROGRAM
College personnel had identified a number of perceived negative influences on students, as a result of the program.
Disruption of College Life
Many complained that the entire college program for the first six months had been disrupted.  Students were inclined to be less diligent with other subjects, knowing that a failure in Maths or English would result in termination.  An emotive response was:
The sentiments in this opinion were commonly held by the staff.  Officers from the Division did not share the perception that the Basic Skills program must interrupt or disrupt college life: lecturers should only have to spend the allocated time (100 hours in English; 60 hours in Maths) on the course, not more.  However, such a response ignored the reality of the pressure that the examinations place on staff and students.  These sentiments have been reported elsewhere (Matane, 1986; Wingfield, 1987; Yeoman, 1988).
It was stated by a Division Administrator that the program had existed for some time and the only difference was that external examinations were used to gauge the students' standards of attainment.  Indeed, this was the very point of contention.  The majority of college personnel did not believe that the examinations did this, while the Division's officers did.
College staff agreed with the first statement, but strongly disagreed, that what the department had done was the means to achieve that standard.  In fact, data from Yeoman (1988) suggested that the definition of "standards" in terms of articulated objectives lacked clarity and that the instruments to monitor standards were so poorly constructed as to be both invalid and unreliable.
Three colleges had staff officially doing extra coaching to small groups in non-lecture times, in order to pass the examination.  Staff freely admitted that the coaching aimed at the students passing and this aim became paramount.  At times the primary objective of competency, it was said, tended to fade into the background:
Akin to this was the very common opinion that rote learning was occurring.  Despite all the educational rhetoric to the contrary, most staff admitted that reliance on short-term memory played a big part in passing the examination.  Despite official reproaches that such reliance was an indictment of lecturers' professionalism/competency (Memorandum, 1987), contextual circumstances demanded such processes if students were to pass the examination (Marton, Hounsell & Entwistle, 1984, pp.144-164).
Much comment was made concerning perceptions of validity and reliability.  Many lecturers expressed doubt about test validity; does the test actually measure basic skills attainment and the extent of transfer of those skills after the examination?:
A Principal in commenting on the issue of test validity noted, that he/she as well as staff members could see little evidence of improved written ability in other subject areas.  Moreover, two Principals revealed evidence showing that there was little correlation between a pass in English Basic Skills and credit and distinction ratings in the grade ten examination.  A variety of explanations may be offered but a senior English lecturer suggested this interpretation of why some bright students might fail:
Such descriptive data have been confirmed by quantitative evidence.  Yeoman (1988, pp.27-44) has thoroughly analysed the examinations and concluded that they displayed an appalling absence of principles relating to test construction.  They simply lacked validity.
Reliability is an indication of the consistency of a measurement; that there is no dramatic change in measurement over time.  One lecturer recalled an informal experiment.  He collected items from the English Basic Skills examination and gave them to his class in October 1986.  He then repeated the same process with the same class in 1987.  Twenty-four of the twenty-eight scored less than the 80% pass mark.  He expressed his disillusionment with the whole process:
Statistical data (Ross 1989, p.22) have confirmed that there was a lack of retention of the basic skills.  A sample of beginning teachers was asked to sit for the same version of the Basic Skills test that they had taken in college as part of the National Examination.  The results are recorded in Table 1.
Table 1: Changes in Basic Skills Performance of 1985 & 1986 Graduates
Basic Skills Retest        English    Mathematics
1985 Graduates (N = 35)    +3.5%      -22.7%
1986 Graduates (N = 32)    +4.12%     -27.2%
Slight changes in English performance were not significant, although one would perhaps expect a more marked improvement in skills which were supposed to be basic, and are presumably practised daily in the classroom.  The decline in mathematics was significant and more serious.
Yeoman (1988, p.44) reported that reading passages in the 1987 examination had never been trialled for readability.  Indeed, lecturers using the Fry Index of Readability compared pre- and post-test passages and found a nine-grade difference.  Lecturers from other colleges expressed similar concerns.  They criticised items on the examination and what they perceived to be a rigid interpretation of what was considered correct in the marking.  This same observation was made by Ross (1989, p.22):
There was considerable concern expressed about the perceived artificiality of examinations to assess English competency:
This was further explored, and lecturers and administrators were able to document cases where, in their opinion, competent students had their scholarships withdrawn because they scored between 79.0 and 79.9, while others whom they judged less able scored above 80%.  A standard error of measurement had never been applied.  Certainly there was a general belief that an injustice had been done to some terminated students.  Similar documentation was recorded by Yeoman (1988, p.44).
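The point about the missing standard error of measurement can be illustrated with a short sketch.  The standard deviation and reliability figures below are illustrative only (neither is reported for the Basic Skills examination); the sketch shows that, under any plausible values, a score of 79.5 is statistically indistinguishable from the 80% cutoff:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def within_cutoff_band(score: float, cutoff: float, sd: float,
                       reliability: float, z: float = 1.96) -> bool:
    """True if `score` lies within z * SEM of the cutoff, i.e. a pass/fail
    decision at that score cannot be made with 95% confidence."""
    band = z * sem(sd, reliability)
    return abs(score - cutoff) < band

# Hypothetical values: an SD of 8 marks and a reliability of 0.85.
print(round(sem(8.0, 0.85), 2))                    # about 3.1 marks
print(within_cutoff_band(79.5, 80.0, 8.0, 0.85))   # 79.5 vs the 80 cutoff
```

On these assumed figures the confidence band around the cutoff spans roughly six marks either way, so terminating a student at 79.0-79.9 while passing one at 80.0 cannot be defended measurement-theoretically.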
Division personnel believed it was "curious" that college staff responded so negatively about the Basic Skills, but failed to do so at Basic Skills workshops.  Indeed it was noted that lecturers were enthusiastic participants in the design of examinations at workshops.
But the fact was that the Division had received considerable feedback about the doubtful educational value of the examinations.  One college (Kaindi) had its entire staff sign a letter to the Division expressing "no confidence" in the process.  Indeed Yeoman (1988, p.13) commented:
Many of the criticisms made to McLaughlin (1988) were also recorded in Wingfield (1986), Matane (1986), Yeoman (1988) and Ross (1989).
The reasons why lecturers participated enthusiastically in workshops can only be speculated upon.  But a reasoned hypothesis is this.  The workshops were organised, conducted or monitored by Division personnel whose responsibility covered not only curriculum but also inspections of staff.  Could it be that staff feared that, if they pursued their opposition, any ill feeling generated with Division officers might negatively influence the inspection reports which determine eligibility for promotion?  The fact was that there was widespread opposition to the Basic Skills course and its examinations.  The Principals at their 1989 Combined Conference unanimously voted for its abolition.
Such criticism was unwarranted, according to a Division officer:
Such a comment is an example of what Yeoman (1988) had identified as a lack of appreciation of the rigour which needs to be adopted in test construction.  Few lecturers were qualified in measurement and test construction.  This had been formally acknowledged in an AIDAB report for staff development for lecturers (AIDAB, 1989).  It was ironic that while assistance was sought from the non-qualified, complaint was made about the quality of that assistance.  It seemed that much of the workshop activity in examination construction was pooled ignorance, with the following result:
There was a general and strong belief among lecturers involved in Basic Skills and all college administrators that this program was not achieving its goal.  Standards were neither being identified nor monitored: "It appears that the exam `tail' is wagging the course `dog'" (Yeoman, 1988, p.74).  From an educational perspective it appeared that the friction between the various parties had its genesis in the perceived way the program was conceptualised (a remedial program) and the way it had developed (an accountability mechanism).  The structure lacked any basis in educational theory.  Its implicit theoretical basis was challenged by most; since it had not been articulated, personalities clashed rather than ideas.
THE 'POLITICAL DIMENSION'
If this program had generated so much reasoned and widespread opposition from an educational perspective, why had there not been any major changes?  The answer to this question was found within a broader context.  Except for Madang Teachers' College, the seven other colleges are conducted by various Christian mission agencies.  They have considerable independence in the conduct of their colleges.  It was this independence that the Teacher Education Division wanted to emasculate.  This hypothesis was offered in a submission to a national task force exploring future directions of community teacher education in Papua New Guinea (McNamara, 1989).  In its final report the following extract was quoted as part of the rationale for the need for an autonomous Institute of Teacher Education:
McLaughlin (Submission 17) states that 'to put it simply, it is an issue of power.
The opinion in this submission was formed because it became very evident that the Teacher Education Division was prepared to negotiate a change in the Basic Skills examinations only if a series of national examinations were to replace it.  In the Report of the Task Force on the Philosophy of Education: Ministerial Committee Report (Markis, 1987) it was stated on three occasions that a national examination in Teachers' Colleges would be a solution to identified problems.  Each time, the stated problem included reference to the independence of the teachers' colleges as a source of the problem (cf. p.10; Recommendations 5 and 21).  It was stated that:
Yet the research evidence strongly contradicted this opinion.  Avalos' (1989) study of second year students from all colleges reported that there was a remarkable similarity in the teaching styles of all students irrespective of the college in which they trained.  Students emphasised a standardised pattern of teaching independent of pupils' learning needs.  There was a reliance on structure rather than on meaningful communication.
Again it was ironic that the beginning teachers placed "in Church agency schools had significantly better performance on TPI (Teaching Performance Indicator) than those in government agency schools ... This was the most significant effect in the multiple regression equation" (Ross, 1989, p.18).  The overwhelming majority of beginning teachers in Church Agency schools came from Church Agency teachers' colleges (Ross, 1988, p.62).  The assertion that the independence of teachers' colleges was a hindrance to the maintenance and improvement of education standards could not be sustained by the evidence.  Moreover, the Task Force Report on the Future of Community School Teacher Education (McNamara, 1989) carefully examined the whole Basic Skills issue.  It rejected the notion that uniformity was the direction to be taken in order to promote quality teacher education.  Rather, it opted for a National Institute of Teacher Education, which would decentralise decision making in order:
The Task Force listed recommendations for the collaborative development of staff, of curriculum, its moderation and the monitoring of standards.  It is only when these recommendations are accepted by all that the destructive tension about power can be dissipated and energies released towards a participative effort to enhance quality in teacher education.
The Teacher Education Division dismissed the notion that its insistence on the maintenance of external examinations was politically motivated.  It insisted that Basic Skills was essentially about the preservation of academic standards.  Consequently, a research project was mounted to explore how helpful the Basic Skills course was in promoting academic rigour (McLaughlin & McLaughlin, 1995).
THE SECOND STUDY
The research aimed to investigate the English reading development of a group of first year preservice Teacher Education students enrolled at Community School Teachers' Colleges who were participating in the 100-hour Basic Skills program.  The first step was to measure the reading abilities of a large sample of students on entry into the teachers' colleges.  Hence, the first research question was:
Therefore, the second research question was:
The research adopted a quasi-experimental design (Cook & Campbell, 1979), one in which it had not been possible to randomly assign subjects to experimental groups since they were already in discrete and intact groups: "Such research can make valuable contributions, but it is important that the researcher be especially cautious about interpreting and generalising results" (Wiersma, 1986, p.139).
The research employed the Pretest-Posttest, nonequivalent multiple-group design (Wiersma, 1986).  Although three different Teachers' Colleges were used in the research, it was not possible to include a control group since all beginning first year student teachers in CSTC were engaged in the 100 contact hours of the English Basic Skills program for the first semester of their teacher education program.
A total of two hundred and fifty (250) students from three different colleges became the sample.  (There are approximately six hundred first year students in all colleges).  The sample included students from all but one of Papua New Guinea's twenty provinces and comprised 134 males and 116 females.
A cloze test was used to investigate the reading ability of the year one student teachers in the sample.  The cloze procedure is defined as the use of a piece of writing in which certain words have been deleted and the pupil has to make maximum possible use of the context available in predicting the missing words (Bullock Report, 1975, p.93).  This assesses students' knowledge of many components of the target language in a context of meaningful discourse.  The students are guided by syntactic, morphological and semantic clues (Rivers, 1981).  Students are expected to read the text carefully, filling in all the omitted words according to their projections of the evolving meaning of the text.
Research has demonstrated that the cloze procedure correlates well with tests of global language proficiency and that it is as good a predictor of general language competency as standardised tests (Oller, 1976).  The literature (Rye, 1982) indicates that the cloze procedure is particularly helpful in assessing the learning of English as a second language: because all languages have grammatical and semantical constraints inherent in them, the usefulness of Cloze Procedure is not restricted to a particular language.  There is a growing body of literature which indicates the usefulness of cloze procedure in the Teaching of English as a Second Language (Rye, 1982, p.94).
[Table 2: Pretest cloze reading levels.  Mean score = 20.58]
Table 2 reveals that the majority of the first year students in CSTC could read at the instructional or frustrational levels.  Such data may be anticipated, since most students who choose community teaching were unsuccessful applicants to national high school.
The post test was sent to the colleges three weeks after the National Basic Skills examination.  The test was administered in all colleges in the same week.
[Table 3: Post-test cloze reading levels.  Mean score = 18.49]
Table 3 indicates that only 3 students (1.2%) read the passage at the independent level, a decline of seven students from the pretest.  In the same test 108 students (43.2%) read the passage at the instructional level, a decline of 32 students.  Furthermore, the post-test results indicated that 139 students (55.6%) read the passage at the frustrational level, an increase of 39 students compared with the students' performance in the pretest.  The difference between the frequencies was highly significant (p<0.01), as was the difference of 2.09 between the pre- and post-test means (p<0.001).  A comparison is shown in Table 4.
Mean decline from Pretest to Post-test = 2.09
It was alarming to note that after the 100 hours of the intensive EBS program, the students' performance not only did not improve but unexpectedly declined.  Such a significant regression in the students' ability to read the same passage after 100 contact hours of an English Basic Skills program invited questions about the purpose and effectiveness of the program.  Clearly, whatever the students were getting from the program was failing to promote even their global language proficiency (Oller, 1976, pp.340-374).
The research concluded that the 100 hours of instruction in the English Basic Skills not only failed to promote reading competency among students, but had a significant negative effect on the students' reading and language development.  The results confirmed prior research conducted by Matane (1986), McLaughlin (1991a), Yeoman (1988) and McNamara (1989) that the program had few benefits if any.
The program, which had degenerated into a political instrument aimed at curtailing the independence of the colleges, failed even to promote continued learning among students and was in fact deleterious to the students' language growth.  Unfortunately, many students had their studies discontinued because of their failure in this test.  One could well ask whether such an appalling state of affairs implies a degree of culpable incompetence on the part of those responsible for its continuance.  Fortunately, because of pressure from college Principals, staff in the colleges and academics from UPNG, the National Education Board abolished the examination from 1991.  Ironically, on 18 February 1991 the Teacher Education Division was renamed the Staff Development and Training Division, its functions narrowed to only one of the number which it had previously held.
ANNEX TO CHAPTER 7 - THE CLOZE PROCEDURE
Since the cloze procedure assesses linguistic, textual, and sometimes general knowledge, it can be termed a measurement of general reading comprehension (Cohen, 1980, p.79).  The cloze procedure can be used to test global reading comprehension skills by testing the learners' awareness of textual constraints such as phrase-level, sentence-level and paragraph-level dependencies.  The cloze procedure can also be used to check for awareness of grammatical relationships (Cohen, 1980, pp.95-97), since the deletion of words at regular intervals provokes thought and requires inferences about language patterns in context (Rye, 1982, p.31):
CONSTRUCTION OF THE CLOZE TEST
To have a valid cloze test, the following characteristics need to be addressed (Rye, 1982; Nuttall, 1982; Turner & Gillard, 1972; Cohen, 1980).
Turner and Gillard (1972) have documented the need for careful consideration of the text to be used in a cloze procedure.
The underlying logic of the cloze procedure is that if the reader and the writer have similar backgrounds of experience, interests and language skills, the reader will be able to predict accurately the words which have been deleted.  If the reader is not able to predict a word, it is the result of a lack of knowledge, understanding or interest.  Success therefore lies in completing and reorganising words, responding to grammatical (syntactic) structures, and relating the various ideas in the context.  Successful completion of the cloze passage depends on the reader's competence in English, on his socio-cultural understanding and on his comprehension ability (Turner and Gillard, 1972).
In constructing a cloze test, the content of the selected passage must be familiar to the readers: "The passage to be selected may be of particular relevance to the students being tested, or it may be a passage of general interest" (Cohen, 1980).  The particular passage, "Teacher Preparation", was written by a Papua New Guinean for PNG teachers (Tololo, 1976, p.215).
Nuttall (1982) states that the first requirement in choosing a text is that it should be interesting to the students.  The passage selected should be interesting to the student teachers, since it concerns the professional development of teachers in PNG.
Sufficient length of text
The literature suggests that at least two hundred and fifty (250) words and fifty (50) deletions are needed for the construction of a proper cloze test (Bormuth, 1968; Cohen, 1980).  The selected passage had a total of four hundred and five (405) words and fifty (50) deletions.  Deletions should range from every fifth to every eighth word (Cohen, 1980; Rye, 1982).  In preparing the cloze test, every seventh word was deleted from the text.  However, in some instances the sixth or eighth word was deleted, depending on semantical and syntactical appropriateness.  This was based on the rationale that supplying more words between deletions would assist the students in detecting the overall meaning of the sentence, the paragraph and the text as a whole unit of meaning.
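The deletion scheme just described can be sketched in a few lines.  This is a simplified version using a fixed every-seventh-word rule (the actual test, as noted above, sometimes shifted a deletion to the sixth or eighth word; the function and blank marker are illustrative):

```python
def make_cloze(text: str, interval: int = 7, blank: str = "____"):
    """Delete every `interval`-th word from `text`.
    Returns the gapped passage and the list of deleted words (answer key)."""
    words = text.split()
    answers, gapped = [], []
    for i, word in enumerate(words, start=1):
        if i % interval == 0:
            answers.append(word)   # keep the deleted word for scoring
            gapped.append(blank)
        else:
            gapped.append(word)
    return " ".join(gapped), answers

# The sentence is taken from the chapter's own example passage.
passage = ("It may also be necessary to encourage large numbers of educated "
           "people to teach the less well educated members of the community")
gapped, key = make_cloze(passage)
print(key)   # → ['encourage', 'teach', 'the']
```

Note that a mechanical rule can land on function words such as "the"; this is why the study allowed deletions to shift by one word for semantic and syntactic appropriateness.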
Readability of text
The cloze reading passage needs to be at the right level of difficulty for the students.  After consultation with three academics at the University of PNG (two from the Language and Literature Department and one from the Education Department), it seemed appropriate that the level of difficulty of the reading passage be at grade ten PNG standard.  Such a criterion seemed sensible since an upper pass or better in English in grade 10 was the minimum National Education Board requirement for entry into Community Pre-service Teachers' Colleges.  The SMOG Index was used as an appropriate strategy to measure the readability of the selected passage.  This index is considered a reliable formula for calculating the readability of a text (Rye, 1982; Nuttall, 1982).  The SMOG Index formula was applied to the passage.  The calculations revealed that the reading level of the passage was at a grade eight level for students in developed countries in which English is the first language.  It was concluded, again after consultation with other academics in the Language and Literature Department, that the passage was at the PNG grade 10 level of difficulty.  As an additional means of ensuring validity, the SMOG Index formula was applied to the front page story of the Post Courier, PNG's senior daily newspaper, and to a ministerial statement in the Ministry of Education's Education Gazette.  The Post Courier level of difficulty was rated at the grade 10 level, while the Education Gazette was rated at the grade 11 level.  It was concluded that the reading passage was of similar difficulty to text commonly read by teachers.
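A rough sketch of the SMOG calculation follows.  The syllable counter below is a naive vowel-group heuristic (the published procedure assumes true syllable counts), so results are approximate:

```python
import math
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count runs of vowels. An approximation only;
    the published SMOG procedure counts true syllables."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    """SMOG grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291,
    where polysyllables are words of three or more syllables."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    poly = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(poly * 30 / len(sentences)) + 3.1291
```

Under this heuristic, a short sample with five polysyllabic words across three sentences rates at roughly grade 10.5; for a real passage the formula would be applied to a 30-sentence sample as the index prescribes.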
METHOD OF ANALYSIS
The analysis of variance (ANOVA) was the method of statistical analysis.  This is an inferential statistical procedure by which a researcher can test the null hypothesis that two or more population means are equal.  The sample means, one corresponding to each population mean, are computed and tested simultaneously for any statistically significant differences between them (Wiersma, 1986, p.347).  A one-way ANOVA was used in this study because it was not possible to assign groups randomly and because there was only one independent variable, i.e. the EBS program.  Minium (1978) explains the helpfulness of ANOVA:
Analysis of variance is a powerful aid to the investigator.  It enables him to design studies more efficiently, to generalise more broadly, and to take account of the complexities of interacting factors (Minium, 1978, p.390).
The pretest score was used to ascertain the reading level of the student teachers on entry into the teachers' colleges (research question one).  In addition, it was also used for statistical control as well as for the generation of gain scores.  A statistically significant difference between the pretest and posttest means would suggest that the influence of the independent variable (the EBS program) on the reading development of the beginning first year student teachers was unlikely to be due to chance (research question two).
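The one-way ANOVA used here reduces to an F ratio of between-group to within-group variance.  A minimal hand-rolled sketch follows; the score lists are illustrative only, not the study's raw data:

```python
def one_way_anova_f(*groups):
    """One-way ANOVA F statistic:
    F = (between-group mean square) / (within-group mean square)."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group sum of squares, df = k - 1
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares, df = N - k
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative cloze marks for a pretest and posttest group.
pre = [22, 19, 24, 21, 18, 23, 20, 22]
post = [19, 17, 21, 18, 16, 20, 18, 19]
print(one_way_anova_f(pre, post))
```

With only two groups, the F statistic is the square of the equivalent t statistic; the resulting F is then compared against the F distribution with (1, N-2) degrees of freedom to obtain the p-value reported in the study.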
DATA COLLECTION PROCEDURE
The Principals of the three colleges authorised the administration of the test in their colleges.  This was organised by the respective Heads of the Language Departments.
Since English is the second language of these students, the advice recommended by Rye (1982, pp.19-20) that only the author's original words be counted as correct was considered inappropriate.  The acceptable word method, or the acceptable alternative method, sometimes known as the contextually appropriate method, was used (Richards, Platt & Webber, 1985, p.41).  In this method the scorer accepts any word or alternative which is appropriate and acceptable in the context (Oller, 1979).  For example, in the sentence "It may also be (33) to encourage large numbers of educated people to (34) the less well educated", the original missing word in (33) is "necessary" while the original missing word in (34) is "teach".  However, considering the fact that English is a second language, other alternatives (synonyms) were scored as correct.  These synonyms included "wise", "relevant" and "useful" as alternatives for "necessary"; and "educate", "assist", "help" and "train" as alternatives for "teach".  Incorrect spellings were accepted in the scoring of the answers (Heaton, 1975).  From these results, students were then grouped into three different categories according to the reading levels as defined below (Rye, 1982; Nuttall, 1982; Cohen, 1980; Herman, 1988).
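The acceptable-word scoring just described can be sketched as follows.  The alternative lists come from the chapter's own example for items 33 and 34; note that this sketch matches words exactly after lowercasing, whereas the study also accepted misspellings (Heaton, 1975):

```python
def score_cloze(responses, answer_key, alternatives=None):
    """Score cloze responses by the acceptable-alternative method:
    the original word or any listed synonym counts as correct."""
    alternatives = alternatives or {}
    correct = 0
    for given, original in zip(responses, answer_key):
        accepted = {original.lower()}
        accepted |= {a.lower() for a in alternatives.get(original, [])}
        if given.strip().lower() in accepted:
            correct += 1
    return correct

# Synonym lists taken from the chapter's example for items (33) and (34).
alts = {"necessary": ["wise", "relevant", "useful"],
        "teach": ["educate", "assist", "help", "train"]}
print(score_cloze(["useful", "train"], ["necessary", "teach"], alts))  # → 2
```

The raw score out of fifty deletions is then mapped onto the independent, instructional and frustrational reading levels defined below.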
DEFINITIONS OF READING LEVELS