Last week, I had the privilege of giving a talk to a delegation from China at Green Templeton College, University of Oxford, about my research on Japanese students’ academic performance in science. Japan’s Ministry of Education, Culture, Sports, Science and Technology (MEXT) has been carrying out the National Assessment of Academic Ability since 2007 to examine and enhance students’ knowledge and skills.
In 2015, MEXT commissioned a think tank (where I had worked as a researcher for almost a decade) to conduct quantitative and qualitative analyses using the assessment data. The main focus of this research was to elucidate the determinants of academic performance in science in Japan (very specific!). However, thanks to the productive feedback from the Chinese delegation, I realised that the findings also apply to other subjects, including reading (literacy) and mathematics (numeracy), regardless of geographical location. Thus, in this article, I’d like to introduce the essence of this research project.
- 1. What is the National Assessment of Academic Ability?
- 2. Overview of the Research Project
- 3. Results – Keys to High Academic Performance (to be continued…)
1. What is the National Assessment of Academic Ability?
When the National Assessment of Academic Ability (the Assessment) started in 2007, MEXT explained its objectives as follows (hard to understand even for Japanese readers, though…):
- To verify the results of education and education measures, identify associated problems, and implement necessary improvements by ascertaining and analyzing the state of academic achievement of students in all regions with a view to equalizing opportunities for compulsory education nationwide and improving the level of this education
- To enable boards of education and schools to assess the results of the education they provide and of their education measures, and to identify associated problems relative to the nationwide situation with a view to improving the academic achievement and motivation of all schoolchildren
(FY 2007 White Paper on Education, Culture, Sports, Science and Technology, p.20)
(MEXT has slightly modified this description since then, but its basic concept has not changed.)
In this light, all students in the final years of primary school and lower secondary school (the 6th and 9th years of compulsory education, respectively) take part in the Assessment (in principle). Interestingly (not as a Japanese citizen but as a social scientist), while the Assessment was administered to all students in the targeted years from 2007 to 2009, it was implemented as a sample survey between 2010 and 2012 (no official assessment was conducted in 2011 because of the Great East Japan Earthquake), and it has been a census survey again since 2013. It was no accident that the Democratic Party of Japan (DPJ) ousted the Liberal Democratic Party (LDP) in the 2009 general election after the LDP’s long rule, and that the LDP then swept the DPJ from power in the 2012 election (and the DPJ no longer exists!). In fact, a national assessment was conducted as far back as the 1950s and 1960s. However, because its implementation provoked heated disputes, some of which reached the courts, nationwide surveys were suspended until recently. This history is fascinating in itself (again, scientifically), but I won’t go any further here.
The Assessment consists of two components: cognitive tests (Japanese, mathematics, and science) and questionnaires. The cognitive tests cover students’ subject knowledge (“Type A” questions) and their ability to apply knowledge and skills in practice (“Type B” questions, developed with reference to PISA, the international assessment conducted by the OECD). The questionnaires target students and schools respectively. The student survey covers learning attitudes, lifestyles, relationships with peers, and interest in society and science, whereas schools are asked about their teaching methods, learning environments, relationships with students’ families and the community, the share of disadvantaged students, and so forth. The Assessment is conducted every year, except for the science test, which is administered every three years. Roughly one million students participate in each targeted year, across some 20 thousand primary schools and around 10 thousand lower secondary schools. (The Chinese delegation wasn’t impressed by these numbers at all!)
2. Overview of the Research Project
So, what did I do with the assessment data? In response to MEXT’s request (i.e. to elucidate the determinants of students’ academic performance in science, leading to implications for education policy and practice in Japan), our research team carried out two tasks: a quantitative analysis and a qualitative analysis (yes, they should always go hand in hand!). The quantitative study used the students’ test scores in science as dependent variables and the responses to the student and school questionnaires as independent variables, within the analytical framework shown below. In the original research, I investigated the impact of the independent variables on “no answer” as well as on “correct answer” (i.e. test scores), given that there is a huge achievement gap between students who tried to write something (albeit wrong answers) and those who gave up answering altogether. In this article, however, I will focus only on the analysis of test scores (otherwise things get a bit too complicated…).
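To make this setup concrete, here is a minimal sketch of the kind of model behind such an analysis: an ordinary least squares regression of test scores on questionnaire-derived indicators. The variable names (study hours, interest in science) and all the numbers below are fabricated for illustration; they are not taken from the actual assessment data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1,000 students, two questionnaire-derived indicators
# (daily study hours and a 1-4 interest-in-science scale), plus noise.
n = 1000
study_hours = rng.uniform(0, 3, n)
interest = rng.uniform(1, 4, n)
score = 40 + 8 * study_hours + 5 * interest + rng.normal(0, 10, n)

# Ordinary least squares: design matrix with an intercept column,
# solved via least squares.
X = np.column_stack([np.ones(n), study_hours, interest])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# beta holds the estimated [intercept, effect of study hours, effect of interest]
print(beta)
```

In the real project the independent variables number in the dozens and the modelling is more elaborate, but the basic logic, relating score variation to questionnaire responses, is the same.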
Source: The MEXT Expert Meeting (modified by author)
The qualitative analysis was great fun (not just as a social scientist but as a parent of two naughty children!). Our team selected 10 “effective schools” (five primary and five lower secondary) based on each school’s mean science test score in 2012 and 2015. As highlighted in the figure below, “effective schools” were defined as either 1) schools that performed well in 2015 relative to their (low) performance in 2012 (i.e. the mean score in 2015 was much higher than the value predicted from the 2012 score), or 2) schools that achieved high performance in both 2012 and 2015 (i.e. those situated at the top right of the figure). Through interviews with teachers and principals and through classroom observations, combined with the quantitative analysis above, we tried to uncover the secrets of their success.
Note: Schools are plotted by mean test score (correct-answer rate) in 2012 (x-axis) and 2015 (y-axis). The trend line is fitted by regressing the 2015 score on the 2012 score. The 10 schools were selected considering variation in school size and geographical area in addition to the mean test scores.
Source: The MEXT Expert Meeting (modified by author)
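The selection logic described above, picking schools with large positive residuals from the year-on-year regression, plus schools that score highly in both years, can be sketched as follows. The school scores here are randomly generated for illustration only; the real study also weighed school size and geography, which this sketch ignores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: mean correct-answer rates for 200 schools in 2012 and 2015.
n_schools = 200
score_2012 = rng.normal(60, 8, n_schools)
score_2015 = 20 + 0.7 * score_2012 + rng.normal(0, 5, n_schools)

# Fit the trend line: regress the 2015 score on the 2012 score.
X = np.column_stack([np.ones(n_schools), score_2012])
beta, *_ = np.linalg.lstsq(X, score_2015, rcond=None)
predicted = X @ beta
residual = score_2015 - predicted

# Criterion 1: schools far above the trend line
# (2015 performance much higher than predicted from 2012).
improvers = np.argsort(residual)[-5:]

# Criterion 2: schools high in both years (top right of the scatter plot).
both_high = np.argsort(score_2012 + score_2015)[-5:]

print("Above-trend schools:", improvers)
print("Consistently high schools:", both_high)
```

Schools picked by criterion 1 are interesting precisely because their improvement cannot be explained by their starting point, which is what motivates the follow-up interviews and classroom observations.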
3. Results – Keys to High Academic Performance (to be continued…)
What did these analyses tell us about the keys to academic success? In the next (and perhaps subsequent) article(s), I will describe the quantitative analysis in more detail, including its method and results, followed by another piece focused on the qualitative research on “effective schools.” Stay tuned!