Scientific Program Evaluation, by Sharon Sarles, Mont. Dip., M.A. (Sociology)

I. Introductions

    A. Introduction of Speaker

    B. Introduction of Participants

    C. Introduction of Topic

         1. Current push for scientific basis for curricular and programmatic decisions

         2. Basic scientific understanding necessary for these and many other decisions

         3. Order of the seminar: from the philosophical to the specific, with some application

II. What is science?

    A. Definition: that which uses the scientific method

    B. Characteristics: empirical, systematic, reproducible, open

    C. Assumptions: the future is like the past; intersubjectivity

    D. Can we really know anything?  Group argues yes and no

         1. Conclusion of the group discussion: we operate on the basis of knowing in the everyday world, even though there are philosophical challenges to knowing anything at a deep level, and we do know we do not know everything.

BREAK

         2. Is everything opinion?  Group argues yes and no.  Conclusion: some opinions are better than others

    E. Locating the strengths and limitations of science

          1. Discussion of the field of all possible knowledge

          2. Conclusion: Science is one way of knowing, powerful in the empirical world

 III. How does science come to know anything?

        A. Scientific Method 

             1. Qualitative and advocacy research are sometimes allowed into the disciplines

             2. The classic experimental method is the basis of the scientific method.

             3. Seven steps of the Classic Scientific Method

                  Choose the topic/problem

                  Review the literature (the peer-reviewed body of SCIENTIFIC literature)

                  Formulate a hypothesis (operationalize, state the null hypothesis)

                  Design the study (specify the statistics and the level of confidence)

                  Collect data

                  Analyze data

                  Conclude and report

             4. How this works in program evaluation (see the sketch below)
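
             For concreteness, here is a minimal sketch in Python of steps 3 through 7 applied to a hypothetical curriculum evaluation. Everything in it is an assumption for illustration: the outcome (an end-of-year test score), the group sizes, the score distributions, and the alpha level; the data are simulated, not collected.

                  # Null hypothesis: the new curriculum produces no change in mean scores.
                  import numpy as np
                  from scipy import stats

                  rng = np.random.default_rng(0)

                  # Steps 3-4: operationalized outcome = end-of-year test score;
                  # design: two groups, alpha chosen before any data are collected.
                  alpha = 0.05

                  # Step 5: collect data (simulated stand-ins for real measurements).
                  control = rng.normal(loc=70, scale=10, size=40)   # old curriculum
                  treated = rng.normal(loc=75, scale=10, size=40)   # new curriculum

                  # Step 6: analyze -- Welch's two-sample t-test.
                  t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

                  # Step 7: conclude and report.
                  if p_value < alpha:
                      print(f"Reject the null hypothesis (p = {p_value:.3f}).")
                  else:
                      print(f"Fail to reject the null hypothesis (p = {p_value:.3f}).")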

BREAK          

IV. Things to look for in good studies

            A. Scientific Method – do they show you their method, according to this schema?

            B. No biased reporting – is the chart skewed?
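
            As a quick numeric illustration of a skewed chart, this minimal Python sketch (with invented scores and an invented axis cutoff) shows how truncating the y-axis makes a small difference look dramatic:

                 # Invented example: a 2-point gain drawn on a truncated axis.
                 scores = {"old": 71.0, "new": 73.0}
                 axis_min = 69.0  # the chart's y-axis starts here, not at zero

                 honest_ratio = scores["new"] / scores["old"]  # about 1.03x
                 bar_ratio = (scores["new"] - axis_min) / (scores["old"] - axis_min)  # 2.0x

                 print(f"Real ratio: {honest_ratio:.2f}x; bars drawn {bar_ratio:.1f}x taller.")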

            C. Reliability and Validity

                 1. Validity is measuring what you say you are measuring – e.g., a “math” test that really measures English reading ability is not a valid measure of math

                 2. Reliability is not using rubber rulers – e.g., a scale that reads differently every time you step on it is unreliable
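
            As one illustration, a minimal Python sketch of a test-retest reliability check; the thirty children, their scores, and the noise levels are simulated assumptions, not real data.

                 # Simulate the same 30 children measured twice with one instrument.
                 import numpy as np

                 rng = np.random.default_rng(1)

                 true_ability = rng.normal(loc=100, scale=15, size=30)
                 test_1 = true_ability + rng.normal(scale=5, size=30)  # first administration
                 test_2 = true_ability + rng.normal(scale=5, size=30)  # second administration

                 # A reliable instrument (no rubber ruler) yields highly correlated
                 # scores across administrations: r near 1.0 is good, near 0 is bad.
                 r = np.corrcoef(test_1, test_2)[0, 1]
                 print(f"Test-retest correlation: r = {r:.2f}")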

            D. Reliable survey questions – flaws to avoid

                  1. Loaded questions (e.g., “Don’t you agree that…?”)

                  2. Double-barreled questions (two questions packed into one item)

            E. Good sampling frame – “Dewey Defeats Truman!”

                  1. Most often overlooked

                  2. There are designs to overcome sampling dilemmas, but statistical controls can be overdone
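
            To see why the frame matters, here is a minimal Python simulation of a biased sampling frame, in the spirit of the 1948 polling failure; the population size, the true support rate, and the inclusion probabilities are all invented for illustration.

                 import numpy as np

                 rng = np.random.default_rng(2)

                 # Population of 100,000 voters; 45% actually support candidate A.
                 supports_a = rng.random(100_000) < 0.45

                 # Suppose A's supporters are twice as likely to appear in the frame
                 # (e.g., a telephone directory when phone ownership is skewed).
                 in_frame = rng.random(100_000) < np.where(supports_a, 0.60, 0.30)

                 # Poll 1,000 people drawn only from the flawed frame.
                 polled = rng.choice(np.flatnonzero(in_frame), size=1_000, replace=False)
                 print(f"True support:     {supports_a.mean():.1%}")
                 print(f"Frame-based poll: {supports_a[polled].mean():.1%}")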

V. Apply your knowledge to reading studies or promotional materials for curricula

            Break into groups of 5 or 6; each group is given a study or a promotional piece, evaluates it, and reports to the plenary.

            Groups typically discover that these materials often make claims they do not support.

VI.  Now you want to evaluate a program in your own center.

           A. Good news: a finite number of questions is normally asked

                1. Is the program reaching intended beneficiaries?

                2. Is the program being properly delivered?

                3. Are the funds being used appropriately?

                4. Can effectiveness be gauged?

                5. Did the program work?

                6. Is the program worth it?

           B. Focusing on question 5, the one most often asked

                 1. Challenge to operationalize the variables into valid, measurable outcomes

                 2. Challenge of sampling frames for comparison

                      a. Randomized “true” experiments

                      b. Regression discontinuity (assignment by a cutoff on one known variable)

                      c. Interrupted time series (compare against the means of many previous years; sketched below)

                      d. Cross section (comparing many classes to each other)

                      e. Pooled cross-section time series (combination of c and d)

                   3. Design together a program evaluation of a new curriculum in a 200-child school
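
           As flagged under design (c), here is a minimal Python sketch of an interrupted time-series comparison; the yearly means are invented numbers, and a real evaluation would use the school’s own records.

                import numpy as np
                from scipy import stats

                # Mean scores for six years before the new curriculum (invented).
                prior_year_means = np.array([71.2, 70.5, 72.0, 71.6, 70.9, 71.4])
                post_year_mean = 74.1  # first year under the new curriculum (invented)

                # One-sample t-test: is the post-intervention mean outside the
                # year-to-year variation seen across the prior years?
                t_stat, p_value = stats.ttest_1samp(prior_year_means, post_year_mean)
                print(f"t = {t_stat:.2f}, p = {p_value:.3f}")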

           C. Break out into new groups – work as a group to design other simple studies

VII.  Conclusion

         A. How will you apply?

         B.  What further questions?

         C. Evaluation form

Selected Bibliography

Bailey, K. D. (1994) Methods of Social Research, 4th ed. New York: Free Press.

Berk, R.A. & Rossi, P.H. (1990) Thinking about Program Evaluation. London: Sage.

Keeves, J. P. (1997) Educational Research, Methodology and Measurement, 2nd ed. (Resources in Education Series) Bingley, UK: Emerald Group Publishing.

Popham, W. J. (2001) The Truth about Testing: An Educator’s Call to Action. Alexandria, VA: Association for Supervision and Curriculum Development.

Render, B. & Stair, R. M. (2005) Quantitative Analysis for Management, 9th ed. Boston: Allyn and Bacon.

CONVERSATION GUIDE

Have a conversation

What are your interests?

Yes, it is important to evaluate our programs, but lots of folks advertise to us, so how can we sort out what is good? Can we do this ourselves?

Answer these questions:

What do you want to measure?

Does it measure that?  Validity

Does it measure it properly or with a rubber ruler?  Reliability

Can I make sense of the conclusion?

Is it cost effective?  Practicality

Is it twisting my educational goals?

How will I use the results?

How will my parents feel about this?

Have you read through the ad-speak and government-speak?

RESOURCES

https://www.rand.org/education-and-labor/research/early-childhood-education.html

https://tea.texas.gov/Reports_and_Data/Program_Evaluations/Program_Evaluations_Early_Childhood/Program_Evaluation__Early_Childhood_Education_Programs

https://www.naeyc.org/sites/default/files/globally-shared/downloads/PDFs/resources/position-statements/CAPEexpand.pdf