ED20D | ED20F | ED20W | ED30F


Content Outline for Coursework Reports
(page numbers refer to starting content in the recommended textbook)
1. Theories of child development (Ch 1)
2. Inter-relatedness of different aspects of development (Ch 1)
3. Physical development: special reference to motor development (148-151, 175-207)
4. Social and personality development: special emphasis on self-concept formation and aggressive behaviour (433-447, 468, 559-620, 507-510)
5. Emotional development (389-413)
6. Cognitive development: emphasis on language (349-378, 382) and concept development (219-256, 265-277, 294)
7. Moral development (473-491, 506)
8. Gender role development (517-555)
9. The following influences on development: biological (heredity 120-122); environmental (home and family 563-566, nutrition 209-213, socio-economic status 569-570, school 632-643, culture 643-646, media 621-628).
Coursework 50%
Examination 50%

Berk, L. E. (1994). Child development. Needham Heights, MA: Allyn and Bacon.



Level 2: Credits 3: Prerequisites None

The overall aim of this course is for students to develop an awareness of the usefulness and limitations of testing to the teacher and to develop basic skills in test construction.

1. Why measure and evaluate? Kinds of tests used in education. Qualities that good tests should possess.
2. The specification and classification of educational objectives.
3. The planning of an educational test - setting up a Table of Specifications.
4. Writing achievement test items - various types of objective and essay items. Choosing appropriate item types for levels of response required. Putting the items together into the final test.
5. Understanding of basic statistical concepts such as the Mean, Standard Deviation and Correlation, and their application to the analysis of test scores.
6. Writing questionnaire items, designing scales for measuring affective outcomes, practical skills.
7. Norm/group-referenced and criterion-referenced testing. Analysis and evaluation of data from NRTs and CRTs.
8. Designing a simple classroom research study to show how the skills developed in this course can be applied.
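Item 5 above names the mean, standard deviation, and correlation as the basic statistics applied to test scores. A minimal sketch of all three in Python, using invented scores for five students on two tests (the numbers are illustrative only):

```python
import math

def mean(xs):
    """Arithmetic mean of a list of scores."""
    return sum(xs) / len(xs)

def std_dev(xs):
    """Population standard deviation: spread of scores around the mean."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def correlation(xs, ys):
    """Pearson correlation between two sets of paired scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * std_dev(xs) * std_dev(ys))

# Hypothetical scores for five students on two tests
test1 = [60, 70, 80, 90, 100]
test2 = [55, 65, 70, 95, 90]

print(mean(test1))                        # 80.0
print(round(std_dev(test1), 2))           # 14.14
print(round(correlation(test1, test2), 3))
```

A high correlation between the two tests would suggest they rank the same students in roughly the same order, which is one way scores from parallel assessments are compared.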





Rationale: Professionals in the field of Early Childhood Education need an appreciation of the historical developments in the field as well as the philosophical ideas that influenced them. From this appreciation they will develop their own philosophies of Early Childhood Education.

This course therefore seeks to:

1. Develop a knowledge and an understanding of the development, role, and importance of early childhood education to the education process as a whole.

2. Explore various models of ECE and the philosophies influencing them.


1. Early Childhood Education: what is it?
Individuals who have influenced ECE, e.g. Plato, Comenius, Pestalozzi, Froebel, Montessori, Rousseau, Dewey, Piaget, the McMillan sisters, etc.

2. History and Development of Early Childhood Education: The contribution of institutions (e.g. BVLF, Servol, UNICEF) and individuals (e.g. D.R.B. Grant, Henry Ward).

3. Models of ECE: Kindergarten, Infant, Nursery, Basic School, Bank Street, Cognitively Oriented Curriculum, Montessori Schools, Head Start etc.

4. Developmental Issues Relating to ECE
Special problems of contemporary ECE: purpose, content, control, cost, methodologies; Notions of Quality; Play vs. Direct Instruction etc.


Blenkin, G. & Kelly, A. (Eds.) (1987). Early childhood curriculum: A developmental curriculum. London: Paul Chapman.

Doxey, I. (Ed.) (1990). Child-care and education: Canadian dimensions. Canada: Nelson.

Feeney, S., Christensen, D., & Moravcik, E. (1987). Who am I in the lives of children? Columbus: Merrill.

Grant, D.R.B. (1982). Training teacher-trainers and para-professional teachers. Kingston: Jamaica Publishing House.

**Roopnarine, J. & Johnson, J. (Eds.) (latest ed.). Approaches to early childhood education. Columbus: Merrill.

Ministry of Education (1990). Early childhood focus. Souvenir Magazine. Early Childhood Unit.

Weber, E. (1984). Ideas influencing early childhood education: A theoretical analysis. New York: Teachers College Press.

Davies, R. (1996). Early childhood care and education in the Caribbean: An overview of issues and concerns. Caribbean Journal of Education, 17(2), 206-226.

McDonald, K. (1996). The evaluation and revitalization of the Jamaican early childhood education programme: Summary report. Caribbean Journal of Education, 7(2)

** Highly recommended.

© 2003


1. Aims and Introduction
1.1 Aims
1.2 Meaningful words - their definitions and examples
1.3 Revising some simple stats (can refer to ED20F course book)
1.3.1 From marked test to excel input - the data matrix
1.3.2 Totals and averages
1.3.3 Variation and correlation
1.3.4 Frequency distributions: normal curves
1.4 Introduction
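Items 1.3.1 and 1.3.2 above describe moving from a marked test to a data matrix (rows = students, columns = items), then computing row totals and column averages. A minimal sketch with invented dichotomous item data:

```python
# Hypothetical data matrix: rows = students, columns = item scores (1 = correct)
matrix = [
    [1, 0, 1, 1],   # student A
    [1, 1, 1, 0],   # student B
    [0, 0, 1, 1],   # student C
]

# Row totals: each student's test score
totals = [sum(row) for row in matrix]

# Column averages: each item's facility (proportion of students correct)
n_items = len(matrix[0])
item_means = [sum(row[j] for row in matrix) / len(matrix) for j in range(n_items)]

print(totals)      # [3, 3, 2]
print(item_means)  # the third item was answered correctly by every student
```

This is the same layout the course book describes for an Excel sheet: one row per student, one column per item, with totals down the right and averages along the bottom.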

2. Purposes for classroom assessment
2.1 Formative assessment - gives a detailed look at the learning processes
2.1.1 Feedback on teaching
2.1.2 Feedback on learning
2.1.3 Students’ ideal approach to formative assessment
2.2 Summative assessment - inferring ability by measuring products/indicators of learning.
2.2.1 Discrimination for resource allocation
2.2.2 Summative tests whose questions are just a sample of what could have been asked
2.2.3 Indicators and questions with low/no face validity
2.2.4 Students’ ideal approach to summative assessment
2.2.5 Rasch analysis and Item Response Theory (IRT)
2.3 Secure tests and teaching to the test
2.3.1 Misnamed ‘competency’ tests
2.3.2 Teaching test-wiseness
2.4 Mixing summative and formative assessments
2.5 Objectivity - the separation of measurement, assessment and evaluation
2.6 External evaluations

3. ‘Fair’ testing and ‘fair’ teaching
3.1 Grading to the curve - conflicting assessment priorities of teachers and administrators
3.1.1 Judging the minimum acceptable score
3.2 Differential group outcomes of Summative assessments - reducing disadvantage.
3.2.1 Group differences, overlap and variability
3.2.2 ‘Corrective’ social policies and academic attainments; streaming and setting
3.2.3 Differential Item Functioning (DIF) Analysis
3.2.4 Movement from standardized tests to ‘authentic’ assessment (AS) for resource allocation
3.3 Testing what is taught - content validity 1
3.3.1 Matching changes in content and teaching to changes in assessment: The curriculum stool.
3.3.2 De-professionalising teaching - machine marked tests
3.4 Teachers’ personal priorities and preferences
3.5 Students’ personal priorities and preferences
3.6 Standards for testing
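Section 3.1’s ‘grading to the curve’ can be illustrated by standardizing raw scores to z-scores and assigning letter grades at fixed cut points on the normal curve. The cut points and scores below are hypothetical, not a prescribed scheme:

```python
import math

def z_scores(scores):
    """Convert raw scores to z-scores (mean 0, SD 1)."""
    m = sum(scores) / len(scores)
    sd = math.sqrt(sum((x - m) ** 2 for x in scores) / len(scores))
    return [(x - m) / sd for x in scores]

def curve_grade(z):
    """Hypothetical cut points: top of the curve gets A, bottom gets F."""
    if z >= 1.0:
        return "A"
    if z >= 0.5:
        return "B"
    if z >= -0.5:
        return "C"
    if z >= -1.0:
        return "D"
    return "F"

raw = [45, 55, 60, 65, 75]
grades = [curve_grade(z) for z in z_scores(raw)]
print(list(zip(raw, grades)))   # [(45, 'F'), (55, 'C'), (60, 'C'), (65, 'B'), (75, 'A')]
```

Note the conflict the outline points to: under this scheme the grade distribution is fixed by the class, so a weak cohort still receives As and a strong cohort still receives Fs, regardless of any minimum acceptable score.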

Part 2. Reliability and validity - consistency and truth

4. Reliability and how it is measured - agreement between markers and consistency of measures
4.1 Stability over time of repeated measures
4.1.1 Variation
4.1.2 Test-retest:
4.1.3 Parallel forms: degraded test-retest reliability
4.2 Homogeneity within the test - do all the questions consistently measure the same ability?
4.2.1 Item Discrimination
4.2.2 Cronbach Alpha: C-alpha
4.2.3 Split half simulations of test-retest reliability - Kuder-Richardson and Spearman-Brown
4.3 Consistency of markers
4.4 Statistical assumptions of reliability that are contravened by applications to classroom assessment.
4.5 Threats to test reliability: everything that reduces consistency
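The internal-consistency measures in 4.2 can be sketched numerically. Below, a minimal Cronbach alpha on an invented student-by-item score matrix, plus the Spearman-Brown formula for stepping a split-half correlation up to full-test reliability (the data are illustrative only):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(matrix):
    """Cronbach's alpha: matrix rows = students, columns = item scores."""
    k = len(matrix[0])                                # number of items
    item_vars = [variance([row[j] for row in matrix]) for j in range(k)]
    total_var = variance([sum(row) for row in matrix])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def spearman_brown(r_half):
    """Step a split-half correlation up to full-length test reliability."""
    return 2 * r_half / (1 + r_half)

# Hypothetical dichotomous scores: 4 students x 4 items
matrix = [
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]

print(cronbach_alpha(matrix))   # 0.75
print(spearman_brown(0.6))      # 0.75: a half-test r of 0.6 implies ~0.75 for the full test
```

The Spearman-Brown step-up is needed because, as 4.2.3 notes, splitting a test in half gives the reliability of a test only half as long.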

5. Validity/truth and how it is agreed - your truth or mine.
5.1 Main types of validity
5.1.1 Face validity
5.1.2 Content validity: Tables of Specifications (TOS)
5.1.3 Concurrent validity
5.1.4 Construct validity
5.1.5 Predictive validity
5.1.6 Consequential validity
5.2 Relationships between Reliability and validity: Positivistic reliability and Constructivistic validity.

6. Taxonomies of testing
6.1 Types of tests, purposes and forms of reporting
6.1.1 Emphasising the focus of the criterion test to fit the purpose of testing
6.2 Test/assignment formats
6.2.1 Closed tests - objective tests, recognition tests
6.2.2 Open response - supply tests, recall tests
6.2.3 Rubrics - precisely vague marking schemes
6.3 Taxonomies of inferred process objectives
6.3.1 Bloom’s taxonomy of educational objectives
6.3.2 Gagné-Briggs’ objectives
6.3.3 Mager’s objectives
6.3.4 ABCD instructional objectives
6.3.5 Bastick’s Alignment of process objectives
6.3.6 Illusions of objectivity and precision
6.3.7 Instructional objectives - as a default

7 Using different assessment and response formats
7.1 Closed tests - objective tests, recognition tests
7.1.1 Multiple choice and Dichotomous choice tests
7.1.2 Interpretative choice tests
7.1.3 Matching
7.1.4 Fixed response questions
7.2 Supply questions - varying choice of structure and resources.
7.2.1 Cloze ‘fill in the blank’ tests
7.2.2 Short answer
7.2.3 Mind-webs
7.2.4 Essay
7.3 Alternative/authentic assessment
7.3.1 Brief in-class assignments CATs
7.3.2 Portfolios
7.3.3 Observations of performance - real-time measures; presentations
7.3.4 Extended research reports
7.3.5 Scoring Rubrics
7.4 Attitude scales
7.5 Group work, peer assessment and self-reflection
7.6 Constructing a test
7.7 Test software

8. Reporting Assessment
8.1 Weighting and combining results
8.2 Criterion reporting
8.3 Norm referenced reporting
8.4 Standardising results for reporting
8.5 Electronic grade books
8.6 Alternative options for reporting assessments
8.6.1 Value added: Gain scores
8.6.2 Situated Attainment
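Sections 8.1 and 8.4 above: before weighting and combining, components marked on different scales are usually standardized (for example to T-scores, mean 50 and SD 10) so that the component with the larger spread does not dominate by accident. A hedged sketch with invented coursework and exam marks:

```python
import math

def t_scores(scores):
    """Standardize raw scores to T-scores (mean 50, SD 10)."""
    m = sum(scores) / len(scores)
    sd = math.sqrt(sum((x - m) ** 2 for x in scores) / len(scores))
    return [50 + 10 * (x - m) / sd for x in scores]

# Hypothetical marks: coursework out of 20, exam out of 100, weighted 50/50
coursework = [12, 15, 18, 10]
exam = [55, 70, 60, 45]

cw_t, ex_t = t_scores(coursework), t_scores(exam)
final = [0.5 * c + 0.5 * e for c, e in zip(cw_t, ex_t)]
print([round(f, 1) for f in final])
```

Adding the raw marks directly would weight the exam far more heavily than 50%, simply because it is marked out of 100; standardizing first makes the 50/50 weights mean what they say.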

© The University of the West Indies. All rights reserved.