Evaluation and interpretation of student satisfaction with the quality of the university educational program in applied mathematics

Abstract

The objective of this article is to study undergraduate student satisfaction with the educational programme in Applied Mathematics with a view to identifying areas for improving the teaching process. Currently, there are no instruments for student evaluation of teaching accepted by the teaching community in Russia, and student feedback is collected only at certain universities. A questionnaire was developed for the purposes of course evaluation. It includes 10 parameters characterizing the format of the course, the adequacy of educational materials provided for students, the quality of teaching, specific features of course content and learning outcomes. Sixty-six final-year undergraduates evaluated the 34 courses that comprise the Applied Mathematics programme, drawing on their memories of past events and using a 100-point scale familiar to them. The results were studied using the following statistical methods: correlation, factor, regression and cluster analysis. As a result of the research, three factors were singled out: shortcomings in course arrangement and gaps in teaching skills, the moral environment in the classroom and the intrinsic difficulty of the subject. They explain 90% of the variance of the student dissatisfaction parameter Need for a change and 59% of the variance of the teaching quality parameter Students' level of knowledge. Cluster analysis made it possible to single out items requiring corrective actions ('severe problem', 'problem' and 'difficult' courses) and to suggest strategies for their improvement.

1. Introduction

Today, it is beyond argument that innovative education development is not possible without students' active participation in the learning process. That is why student evaluation of teaching remains a relevant theme. Indeed, the interconnection between student satisfaction and student motivation, student retention and recruiting efforts (Grace et al., 2012) shows that the role of student evaluations in the development of higher education as a whole, as well as of individual universities, has been increasing. That is why, despite the large number of existing publications (see, for example, the review articles by Benton & Cashin, 2014; Kulik, 2001), student satisfaction remains an important research area. Darwin (2016) claims that the emergence of more developed systems of student evaluation at the end of the 1960s was a response to the growth of student unrest and general dissatisfaction with the quality of education. Similarly, it may be assumed that the monitoring of student satisfaction and the student evaluations of teaching that appeared in Russia in the mid-1980s were a result of political and socio-economic reforms in the country. In the late 1980s, the Ministry of Higher and Secondary Education of Russia introduced a questionnaire called 'The Teacher through the Eyes of Students' (TeES) at universities. The questionnaire was widely criticized at the time (e.g., Gorbatenko, 1990; Levchenko, 1990) for the lack of evidence of its validity and utility and for the incorrect ways in which its results were interpreted and applied. Regular questionnaire surveys at universities gradually ceased. When Russia joined the Bologna process, the problem of evaluating the quality of education and student satisfaction became urgent again.
However, questionnaires are developed and used by only a few universities today (e.g., Zapesotsky, 2007; Zelenev & Tumanov, 2012). There are also many studies that support the validity and utility of student evaluations; reviews and analyses of such publications may be found in the articles by Benton & Cashin (2014) and Kulik (2001). Nonetheless, there are also many studies criticizing student surveys and contesting their results. The authors of such works claim that many factors bias the results of student evaluation of teaching. These arguably include, for example, the following:

- effects of grading leniency and low workload (Greenwald & Gillmore, 1997), when instructors who give students higher grades and a lower workload receive higher ratings;
- the halo effect (Becker & Cardy, 1986; Orsini, 1988), when certain qualities (attractiveness, high administrative or other status) raise student evaluation of the quality of teaching;
- the Dr. Fox effect or 'educational seduction' (Abrami et al., 1982; Ware & Williams, 1975), when a charismatic teacher receives higher ratings for the quality of teaching due to external factors;
- the impact of teachers' clothing style on student opinion about the quality of teaching (Butler & Roesel, 1989; Chowdhary, 1988).

It should be noted, however, that some of the results listed above were refuted in later studies (Benton & Cashin, 2014; Peer & Babad, 2014; Remedios & Lieberman, 2008; Spooren & Mortelmans, 2006; Theall & Franklin, 2001). For example, Peer & Babad (2014), in their study of the Dr. Fox effect, reproduced the 1973 experiment in full, using the original video of the lecture, but added one more question to the questionnaire: the students were asked whether they had learned anything from the lecture. It turned out that the students answered in the negative, despite liking the lecture. In other words, 'students indeed enjoyed the entertaining lecture, but they had not been educationally seduced into believing they had learned' (Peer & Babad, 2014). Analysis of the research both of supporters of student surveys (for example, Benton & Cashin, 2014; Kulik, 2001; Theall & Franklin, 2001) and of the approach's critics (for example, Crumbley et al., 2001) leads to the conclusion that a balanced approach is required when using survey results. It is no accident that both camps agree that student ratings may not be the only factor in evaluating a teacher. Nonetheless, it seems that Zapesotsky (2007) is correct in believing that if a negative rating persists and is repeated year after year, a meaningful analysis of the situation is required. The objective of this study is to evaluate the satisfaction of undergraduates studying Applied Mathematics with the quality of the educational programme for the purposes of its further improvement. To this end, it is necessary to select an instrument (a questionnaire), to conduct a survey among students, to study the results using statistical methods, to interpret them and to identify the direction of required corrective measures on their basis.

2. Methodology

The contested nature of student surveys as an instrument for evaluating teacher competence is due, to a certain degree, to the fact that survey results depend not only on the teachers but also on the students: on their intellectual development and their attitude towards studying and towards their future profession.
That is why we invited final-year students to participate in the survey. These students are in a position to evaluate all 34 courses in the programme and bring to that process not only learning experience but also field experience gained during internship. In a few months, they will finish studying and become our colleagues. We believe that they are capable of a more accurate evaluation of teacher competence, which agrees with the opinion of Theall & Franklin (2001) that possibly 'beginning students do not have sufficient depth of understanding to accurately rate the instructor's knowledge of subject matter'. Spencer & Schmelkin (2002) claim that students are usually willing to evaluate teaching and provide feedback. In our opinion, however, student attitude towards survey participation is another important factor that influences survey results. This assumption was confirmed by the literature on student ratings. Benton & Cashin (2014), for example, believe that to improve validity, 'the instructor should take time to encourage students to take the process seriously'. The results of a study by Chen & Hoshower (2003) are useful for understanding student motivation. They found that the desire to raise the level of teaching and to improve course content and format was among the most powerful motivators for students to participate in a survey, whereas the use of ratings for promotion decisions and increases in teachers' salaries was least important to students. That is why we suggest that it would be more accurate to shift the emphasis from 'the teacher in the eyes of students' to 'the learning process in the eyes of students'. Choosing an instrument for the survey was quite a challenging issue. As is well known, there is extensive experience outside of Russia (starting in the 1970s–1980s) in conducting student surveys to evaluate the quality of teaching. There are also many questionnaires whose characteristics have been examined in numerous studies (see Coffey & Gibbs, 2001; Curtis & Keeves, 2000; Griffin et al., 2003; Marsh, 1982; Marsh & Roche, 1997; Ramsden, 1991; Richardson, 1994; Wilson et al., 1997), such as the Course Experience Questionnaire (CEQ) and the Students' Evaluation of Educational Quality Questionnaire. To this day, the practice of student evaluation of education, and research in this area, is not widespread in Russia. As a result, there are no questionnaires accepted by the education community, which makes organizing surveys difficult. For example, a student survey was conducted at our university several years ago, using as its basis a version of the CEQ containing 25 questions. Analysis of the results showed a large number of omissions in student answers, which prevented any generalization or conclusion from being drawn. Unfortunately, the survey organizers did not take steps to discover the reasons for such results: whether they were due to defects in the questionnaire or to poor organization of the survey. We can only conclude that it is not enough simply to take a translation of a well-known questionnaire and offer it to the students; using instruments like the CEQ in Russia requires adaptation and research. The 1980s questionnaire TeES contains 18 items and presupposes answers on a 9-point Likert-type scale. Among critical comments, there was an indication that this questionnaire contains an excessive number of subjective questions (Zelentsov, 1999).
Gorbatenko (1990) noted, as a disadvantage of the TeES questionnaire, that students had to give answers using a 9-point scale they were not used to. Current revisions of this questionnaire, used for internal monitoring at some Russian universities, are usually abridged versions (Zapesotsky, 2007; Zelentsov, 1999). No studies of the psychometric properties of the proposed tests have been conducted. In other words, there are no grounds for concluding that this instrument has advantages over the CEQ for conducting surveys at Russian universities. Since our students have no experience of participating in such surveys, we proposed that discussing the purposes, procedures and instruments of the survey would allow them to feel like active contributors to the process and to take a more responsible approach to the evaluation, which would have a positive impact on the validity of the results. Levchenko (1990), Zelentsov (1999) and Zapesotsky (2007) also note in their articles that students do not always understand the aspects offered for evaluation in the TeES questionnaire and are not interested in them. Taking this observation into account, students were asked to discuss the items of the questionnaire to which they would have to respond. Originally, the students were given a choice of two questionnaires: the CEQ (25 items) and the TeES (18 items). In the course of discussion, it was found that students believed both of these questionnaires to be too large for evaluating, at the same time, the 34 courses that comprise the education programme. Evaluating 34 courses requires effort, attention and time; therefore, the questionnaire needed to contain the minimum number of items, even though student ratings had to reflect a wide range of factors characterizing teaching (d'Apollonia & Abrami, 1997; Marsh & Roche, 1997). Next, the evaluation scale had to be selected. There is a traditional scale of one to five for grading achievements in secondary and higher schools. A scale of 1 to 100 is used at our university, which is translated into the 1 to 5 scale for final grades in the state-approved diploma. In discussing the questionnaire, the students were offered a choice of evaluation scales: the 9-point Likert scale, the 5-point scale or the 100-point scale that they were used to. The students chose the latter, which agrees with the opinion of Gorbatenko (1990) and Levchenko (1990) that a more familiar scale is preferable.
To solve this problem, based on earlier studies (Elliott & Shin, 2002; Gibson, 2010; Gorbatenko, 1990; Grace et al., 2012; Levchenko, 1990; Zelentsov, 1999), the following characteristics of a course and their parameters were identified:

- the format of the course, as the balance of hours designated for studying theory and for practical studies (variables Lack of theory and Lack of practice);
- the adequacy of educational materials provided for students (variable Shortage of textbooks);
- the quality of teaching (variables Teacher's knowledge of the subject, Teaching skills and Impartial and fair assessment);
- specific features of the content of the academic discipline (variables Students' interest in the subject matter and Level of challenge);
- learning outcomes (variables Need for a change and Students' level of knowledge).

It was necessary to select the minimum number of variables that would best describe the quality of teaching. The Teacher's knowledge of the subject variable reflects how students evaluate a teacher's breadth of coverage of the subject: giving the background of concepts and ideas, discussing present-day achievements and applications of scientific results, answering students' questions and the teacher's enthusiasm for the subject. The Teaching skills variable shows how students evaluate a teacher's clarity of presentation and explanation and his/her enthusiasm for teaching. The students suggested including the Impartial and fair assessment item, which qualifies the teacher's objectivity and integrity in grading. It turned out that this parameter of the teacher's personality has ethical overtones for the students and is associated with mutual respect between students and teachers and a healthy moral atmosphere in the classroom. We believe that this fact demonstrates the absence in this group of students of the grading leniency effect, whereby students give higher ratings to teachers who give them higher grades. As a result of the discussion, the three above-mentioned variables, reflecting such characteristics of an instructor as depth of knowledge of the subject, teaching skills and personal qualities, were chosen. In addition, it was important to identify the areas in which changes are needed to improve the teaching process based on the results of the survey. Therefore, taking into account that student satisfaction is an emotive variable (Grace et al., 2012), the variable Need for a change was chosen as an indicator of dissatisfaction. Students' knowledge is the most significant indicator of the quality of teaching. The students suggested that a subjective self-rating of their knowledge of the course (variable Students' level of knowledge) be used as an indicator of educational effectiveness instead of actual exam grades. The suggestion was adopted, as there are studies supporting the validity of student self-reported ratings (for example, Benton et al., 2013). Analysis of the obtained values of the variable Students' level of knowledge showed that students tend to rate their knowledge lower than their examination scores. The value of this variable, like that of the other variables, is based on students' memories of past events. Answers to the questionnaire could be given using either a paper form or an Excel table. The survey was conducted anonymously and on a voluntary basis.
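For concreteness, the ten items and their grouping can be written down as plain data. The minimal Python sketch below is only an illustration of the instrument's structure (the variable names are from the paper; the data structure itself is not part of the authors' toolchain):

```python
# Illustrative encoding of the questionnaire: 10 items rated on the
# 100-point scale students already use, grouped by the five course
# characteristics listed above.
QUESTIONNAIRE = {
    "course format": ["Lack of theory", "Lack of practice"],
    "study materials": ["Shortage of textbooks"],
    "quality of teaching": ["Teacher's knowledge of the subject",
                            "Teaching skills",
                            "Impartial and fair assessment"],
    "course content": ["Students' interest in the subject matter",
                       "Level of challenge"],
    "learning outcomes": ["Need for a change", "Students' level of knowledge"],
}

ITEMS = [item for group in QUESTIONNAIRE.values() for item in group]
assert len(ITEMS) == 10  # the questionnaire contains exactly ten parameters
```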
Sixty-eight fourth-year students of Applied Mathematics at the Lipetsk State Technical University took part in the survey. However, two questionnaires were excluded from the analysis because they were not filled out completely. Based on the responses of the 66 remaining participants, an aggregate table was drawn up: the averaged evaluations for each of the above-mentioned variables were calculated for each of the 34 courses. Based on this table, correlation, factor, regression and cluster analyses were carried out using procedures implemented in the STATISTICA software package.

3. Results and discussion

As mentioned before, the purpose of the study is to analyse student satisfaction with the quality of the education programme in order to identify strategies for improving it. Of course, every teacher has the opportunity to get acquainted with students' opinions about the subject they teach, as reflected in the aggregate table, and to use this information to improve their teaching. However, in order to evaluate the quality of the educational programme as a whole and develop a plan of corrective measures, a statistical analysis of the survey data is required. Let us consider the results of the statistical methods in detail.

3.1 Correlation analysis results

The results of the correlation analysis are presented in Table 1.

Table 1. Correlation matrix (upper triangle; the entry in row i under column j is the correlation between variables i and j)

Variable                                      1     2      3      4       5       6       7      8      9       10
1. Lack of theory                           1.00  0.88*  0.66*  −0.82*  −0.84*  −0.55*  −0.22   0.05   0.89*  −0.39*
2. Lack of practice                               1.00   0.68*  −0.64*  −0.76*  −0.45*  −0.09   0.07   0.76*  −0.46*
3. Shortage of textbooks                                 1.00   −0.55*  −0.46*  −0.38*  −0.05   0.26   0.53*  −0.32
4. Teacher's knowledge of the subject                            1.00    0.91*   0.65*   0.55*  0.24  −0.89*   0.41*
5. Teaching skills                                                       1.00    0.66*   0.54*  0.18  −0.92*   0.59*
6. Impartial and fair assessment                                                 1.00    0.67* −0.16  −0.75*   0.65*
7. Students' interest in the subject matter                                              1.00   0.08  −0.46*   0.69*
8. Level of challenge                                                                          1.00   −0.12   −0.32
9. Need for a change                                                                                   1.00   −0.49*
10. Students' level of knowledge                                                                               1.00

Note. * p < .05.
As expected, there is a positive correlation among the variables describing the quality of teaching (Teacher's knowledge of the subject, Teaching skills and Impartial and fair assessment). The closest connection is between Teacher's knowledge of the subject and Teaching skills. Researchers' opinions as to the reasons vary (Theall & Franklin, 2001; Ware & Williams, 1975). At the same time, one cannot deny that a teacher with deeper knowledge has more opportunities to present the learning material to students effectively. And if the results are interpreted with care, the conclusion that 'better teachers receive higher ratings' (Spooren & Mortelmans, 2006) appears fairly robust. A significant negative correlation between the variables Need for a change (as an indicator of student dissatisfaction with the course of study) and Students' level of knowledge (as an indicator of the quality of education) was also expected. The ratio of the number of lectures to the number of classes, laboratories and practicals for each course is set when the education programme is created; however, it can be adjusted when needed. The variable Lack of theory reflects the students' judgement about the sufficiency of the number of lecture hours; the variable Lack of practice reflects their judgement about the sufficiency of the number of seminars, practical classes and laboratories. The university library service handles the procurement of textbooks and study guides according to department requirements. Therefore, a significant positive correlation between the variables describing course format (Lack of theory and Lack of practice) and the variable Shortage of textbooks, describing the shortage of study materials, was an unexpected result. Another unexpected result was the significant negative correlation between the above variables and all the variables describing instructor competence (Teacher's knowledge of the subject, Teaching skills, Impartial and fair assessment). In other words, it can be assumed that the variables Lack of theory, Lack of practice and Shortage of textbooks largely reflect student evaluation of such instructor qualities as the ability to use the time allotted to a course effectively and skill in working with study materials. The intrinsic difficulty of the subject (variable Level of challenge) is the only indicator that is not correlated with any other. We can assume that it is an internal characteristic of a course and that the difficulty of the material does not affect student judgements about teacher skills. At the same time, there is a positive correlation between the variables Impartial and fair assessment (an indicator of a healthy moral atmosphere and mutual respect between teachers and students) and Students' interest in the subject matter, which suggests that interest in the subject matter depends not on the academic discipline being studied but on how well the teacher can motivate and inspire students. These results are consistent with Kulik (2001): 'The correlation between ratings and achievement was high for items involving instructor skill and for those measuring teacher and course organisation. Correlation coefficients were ... near zero for items dealing with course difficulty'.
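The paper performed these computations in the STATISTICA package. As a hedged illustration of the same pipeline, the sketch below builds the course-level aggregate table and the correlation matrix of Table 1 with pandas and scipy, assuming a hypothetical long-format file survey_responses.csv with student, course, item and rating columns (the file and column names are assumptions, not taken from the paper):

```python
# A minimal sketch of the data flow behind Table 1, assuming raw responses
# sit in a long-format CSV: one row per student x course x item rating.
import pandas as pd
from scipy.stats import pearsonr

raw = pd.read_csv("survey_responses.csv")  # columns: student, course, item, rating

# Aggregate table described in Section 2: the mean rating of each of the
# 10 items for each of the 34 courses.
agg = raw.pivot_table(index="course", columns="item",
                      values="rating", aggfunc="mean")

# Pearson correlation matrix over the 34 course-level observations (Table 1).
corr = agg.corr(method="pearson")

# Report the pairs significant at p < .05, matching the asterisks in Table 1.
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        r, p = pearsonr(agg[a], agg[b])
        if p < 0.05:
            print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f}")
```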
The existence of a significant correlation between the variables describing education results (Need for a change and Students' level of knowledge) and all the variables describing the quality of teaching is one of the indicators of the validity of the survey results (Kulik, 2001; Theall & Franklin, 2001).

3.2 Factor analysis results

Using principal components analysis followed by a varimax rotation, three factors were singled out. The total share of explained variance was 0.892; that is, the three factors together explained 89.2% of the variance. Taking into account the factor loadings shown in Table 2, the derived factors can be interpreted as follows.

Table 2. Factor analysis results: factor loadings (varimax)

Variable                                    Factor 1   Factor 2   Factor 3
Lack of theory                                0.93*      −0.22       0.01
Lack of practice                              0.93*      −0.06       0.06
Shortage of textbooks                         0.80*       0.01       0.35
Teacher's knowledge of the subject           −0.74*       0.55       0.26
Teaching skills                              −0.77*       0.53       0.23
Impartial and fair assessment                −0.40        0.81*     −0.22
Students' interest in the subject matter     −0.02        0.95*      0.08
Level of challenge                            0.01        0.01       0.97*
Expl.Var                                      3.69        2.19       1.24
Prp.Totl                                      0.46        0.27       0.15

Note. Marked loadings have absolute value > 0.70.

Factor 1: shortcomings in course arrangement and gaps in teaching skills (lack of theory, practice and textbooks, insufficiently skilled teaching and poor presentation of the educational material). Factor 1 accounted for 46.2% of the total observed variance in the data. Factor 2: favourable moral climate (impartial and fair assessment and interest in the subject, i.e., a creative atmosphere). Factor 2 accounted for 27.4% of the total observed variance. Factor 3: the intrinsic difficulty of the subject. Factor 3 accounted for 15.6% of the total observed variance. The consolidation of the variables Impartial and fair assessment and Students' interest in the subject matter into one factor can be viewed as confirmation of the assumption made in the course of the correlation analysis: the interest of students in the subject matter depends on the ability of the instructor to establish a positive, creative environment in the classroom.
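Again, the published analysis was done in STATISTICA. The following numpy sketch reproduces the procedure named in the text, principal components of the item correlation matrix followed by a classic Kaiser varimax rotation, reusing the hypothetical course-level table agg from the previous sketch. Component signs are arbitrary, so recovered loadings may differ from Table 2 in sign:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Classic Kaiser varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < var * (1 + tol):   # stop once the criterion plateaus
            break
        var = s.sum()
    return loadings @ R

# The factors are extracted from the eight evaluation items only; the two
# outcome variables are excluded, as in the paper.
items = [c for c in agg.columns
         if c not in ("Need for a change", "Students' level of knowledge")]
corr8 = np.corrcoef(agg[items].to_numpy(), rowvar=False)

# Principal components of the correlation matrix: keep three components and
# scale eigenvectors into loadings.
eigvals, eigvecs = np.linalg.eigh(corr8)
top3 = np.argsort(eigvals)[::-1][:3]
loadings = eigvecs[:, top3] * np.sqrt(eigvals[top3])

rotated = varimax(loadings)              # analogue of Table 2
explained = (rotated ** 2).sum(axis=0)   # analogue of the Expl.Var row
print(np.round(rotated, 2))
print(np.round(explained / len(items), 2))  # analogue of Prp.Totl
```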
3.3 Regression analysis results

Two models were built during the research. In the first, Need for a change was used as the dependent variable, considered as an indicator of student dissatisfaction with the learning process. In the second, the Students' level of knowledge variable was considered as an indicator of teaching quality. The factors specified in the previous stage of the analysis served as independent variables: Factor 1, shortcomings in course arrangement and gaps in teaching skills; Factor 2, favourable moral climate; Factor 3, the intrinsic difficulty of the subject. The regression summary for the first model is presented in Table 3.

Table 3. Regression summary for dependent variable: Need for a change

              B      Std.Err.   t(30)    p-level
Intercept   16.28      0.59     27.62     0.000
FACTOR 1     8.69      0.60     14.52     0.000
FACTOR 2    −5.53      0.59     −9.25     0.000
FACTOR 3    −1.38      0.59     −2.30     0.028

Note. R² = 0.909. Adjusted R² = 0.900.

All factors are statistically significant (p < 0.05) and the adjusted R² value is about 0.900; that is, the fitted equation, Need for a change = 16.28 + 8.69·F1 − 5.53·F2 − 1.38·F3, explains about 90% of the variance of the variable Need for a change. Factor 1 has the greatest impact on the dependent variable (b = 8.69): shortcomings in course arrangement and gaps in teaching skills increase the need for changes, i.e., reduce satisfaction with the learning process. The second largest impact comes from Factor 2 (b = −5.53): a favourable moral climate reduces the need for change, i.e., contributes to student satisfaction. Finally, the impact of Factor 3, the intrinsic difficulty of the subject, can be interpreted as follows: all other things being equal (the organization of the educational process and the moral climate), studying a difficult subject causes less student dissatisfaction (b = −1.38). This means that the students recognized and appreciated the challenge of the intrinsic difficulty of mathematics. The regression summary for the second model is shown in Table 4.

Table 4. Regression summary for dependent variable: Students' level of knowledge

              B      Std.Err.   t(30)    p-level
Intercept   72.72      0.77     94.31     0.000
FACTOR 1    −2.11      0.78     −2.70     0.011
FACTOR 2     4.61      0.78      5.89     0.000
FACTOR 3    −2.20      0.78     −2.81     0.009

Note. R² = 0.624. Adjusted R² = 0.587.
All the factors in this model are also statistically significant (p < 0.05) and the adjusted R² is about 0.587; that is, the equation explains about 59% of the variance of the variable Students' level of knowledge. Analysis of the model allows us to conclude that Factor 2, favourable moral climate (b = 4.61), has the greatest, and a positive, impact on the level of knowledge. This is consistent with the views of researchers such as Hackett & Betz (1989), Mann (2006), Middleton & Spanias (1999) and Siegle & McCoach (2007), who argue that motivation, self-efficacy and a creative atmosphere are important for students' progress in learning mathematics. The impact of Factor 3 (b = −2.20) ranks second: the more difficult the subject matter, the lower the level of knowledge, all other things being equal. Shortcomings in the organization of the educational process (Factor 1) rank third in impact on the quality of learning (b = −2.11): the worse the learning process is organized, the lower the level of knowledge. Note that the coefficient of determination is smaller in the second model than in the first. Perhaps this is because the level of knowledge is determined not only by external factors (the organization of the educational process, the moral climate and the difficulty of the subject matter) but also by the personal qualities of the student: his/her abilities, intelligence and capacity to organize his/her own learning effectively.
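As a sketch of the two model fits, statsmodels can substitute for the STATISTICA procedure used in the paper. The DataFrame scores below, with per-course factor scores F1–F3 and the two outcome columns, is an assumption for illustration, not data from the paper:

```python
# Minimal sketch of the two OLS models, assuming a hypothetical DataFrame
# `scores` holding the factor scores of the 34 courses (columns F1, F2, F3)
# and the two outcome variables.
import statsmodels.api as sm

X = sm.add_constant(scores[["F1", "F2", "F3"]])

# Model 1: dissatisfaction indicator (Table 3; expect adjusted R^2 near 0.90).
model_change = sm.OLS(scores["Need for a change"], X).fit()
print(model_change.summary())

# Model 2: teaching-quality indicator (Table 4; expect adjusted R^2 near 0.59).
model_knowledge = sm.OLS(scores["Students' level of knowledge"], X).fit()
print(model_knowledge.summary())
```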
3.4 Cluster analysis results

Academic disciplines were clustered using the k-means algorithm (a minimal sketch of this step is given below, after Table 5). The following variables were selected for establishing the clusters: Lack of theory, Lack of practice, Shortage of textbooks, Teacher's knowledge of the subject, Teaching skills, Impartial and fair assessment, Students' interest in the subject matter and Level of challenge. The research showed that division into four clusters is the most informative. Analysis of variance showed significant differences among the cluster means for each variable involved in the classification (p < 0.05). The mean values for each cluster are shown in Table 5.

Table 5. Mean values of the variables for each cluster

Variable                                    Cluster 1   Cluster 2   Cluster 3   Cluster 4
Lack of theory                                19.00*      11.31        5.39        6.17
Lack of practice                              34.26*      15.19*       6.50        5.34
Shortage of textbooks                         21.99*      13.33       10.39       12.84
Teacher's knowledge of the subject            81.06*      77.78*      93.52*      93.26*
Teaching skills                               64.06*      61.51*      86.54*      86.19*
Impartial and fair assessment                 80.42*      74.87*      90.85*      83.84*
Students' interest in the subject matter      75.78*      47.65*      77.80*      68.66*
Level of challenge                            48.13*      31.99*      36.61*      58.27*
Need for a change                             30.13*      28.69*      11.55       11.47
Students' level of knowledge                  68.32*      61.76*      77.55*      69.71*

Note. Marked means are > 15.00.

To interpret the results, the membership of each cluster had to be examined. It should also be recalled that students gave their assessments using the 100-point scale familiar to them, which is applied at our university to assess current academic performance. This internal 100-point scale is also converted into the 5-point scale that is official and traditional in Russia, according to the rule: 0–52 points, 'unsatisfactory'; 53–79 points, 'satisfactory'; 80–92 points, 'good'; 93–100 points, 'excellent'. It is reasonable to assume that students kept both scales in mind when answering the questionnaire.
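A sketch of this clustering step and of the scale-conversion rule just quoted follows, with scikit-learn's KMeans standing in for the STATISTICA procedure and the hypothetical table agg reused from the earlier sketches:

```python
# k-means over the eight evaluation items, with k = 4 clusters as in
# Section 3.4; `agg` is the hypothetical course-level table built earlier.
from sklearn.cluster import KMeans

items = [c for c in agg.columns
         if c not in ("Need for a change", "Students' level of knowledge")]
km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(agg[items])      # one cluster label per course

def to_five_point(score: float) -> str:
    """The university's 100-point to 5-point conversion rule quoted above."""
    if score <= 52:
        return "unsatisfactory"
    if score <= 79:
        return "satisfactory"
    if score <= 92:
        return "good"
    return "excellent"

# Cluster means over all ten variables (analogue of Table 5), plus the
# Teaching skills means re-expressed on the traditional 5-point scale.
means = agg.groupby(labels).mean()
print(means.round(2))
print(means["Teaching skills"].map(to_five_point))
```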
Cluster 1: 'Problem courses'. This cluster combines five academic disciplines. Its members are characterized by a higher (compared to the other clusters) Students' interest in the subject matter (75.78), a high level of Teacher's knowledge of the subject (81.06) and Impartial and fair assessment (80.42). However, Teaching skills was rated as only 'satisfactory' (64.06), and the students noted a lack of practice (34.26), theory (19.00) and textbooks (21.99). In addition, these subjects are quite difficult to study (48.13). All the courses in this cluster are related to software development: Algorithmic Languages and Programming, Computer Graphics, Computer Networks, Object-oriented Programming and Computer Architecture. They belong to information technology, one of the most dynamically developing branches of science today, and are taught either by quite young teachers or by teachers who combine teaching with work as programmers at large companies. As a rule, all of them have little teaching experience and may need additional advice and training to improve their teaching skills.

Cluster 2: 'Severe problem courses'. This cluster brings together four academic disciplines. Its specific feature is the low quality of teaching: Teacher's knowledge of the subject (77.78), Teaching skills (61.51) and Impartial and fair assessment (74.87) were rated as 'satisfactory', and Students' interest in the subject matter was rated as 'unsatisfactory' (47.65). This cluster includes the courses Metrology, Physics, Philosophy and Sociology. This list suggests that the problems are related to the personal and professional qualities of the teachers. And although, as evidenced by Benton & Cashin (2014), Kulik (2001) and many other researchers, student ratings correlate with other measures of teaching effectiveness (instructor self-ratings and ratings by colleagues, administrators, alumni and trained observers), a further analysis of the reasons for student dissatisfaction is necessary. It should be carried out by the management of the department or faculty, based, for example, on observations of lessons given by these teachers.

Cluster 4: 'Difficult courses'. This cluster brings together seven academic disciplines. Its major characteristics are a high Level of challenge (58.27) and high ratings for the quality of teaching: Teaching skills and Impartial and fair assessment are 'good' (86.19 and 83.84, respectively) and Teacher's knowledge of the subject is 'excellent' (93.26). Thus, the opinion that students give higher ratings to those who teach easier subjects was not confirmed. However, Students' interest in the subject matter is quite low (68.66). This cluster includes such subjects as Mathematical Analysis, Functional Analysis, Theory of Functions of a Complex Variable, Differential Equations with Partial Derivatives, Math Modelling, Methods of Optimisation and Mathematical Theory of Systems. These courses cover the most abstract branches of mathematics, which require a high level of theoretical thinking and are therefore difficult to learn and understand. There is a risk that the external expression of mathematical facts may come to dominate their content, with memorizing formulas substituted for understanding; with the loss of meaning comes a loss of interest in the subject. However, in the previous stages of our research we found that the variable Students' interest in the subject matter correlates with the variables describing the quality of teaching (Teacher's knowledge of the subject, Teaching skills and Impartial and fair assessment) but not with the variable Level of challenge. Therefore, the lack of students' interest in the subjects included in Cluster 4 cannot be explained by the complexity of their content.
In teaching the areas of mathematics included in Cluster 4, much emphasis is placed, as a rule, on the exactness and strictness of proofs and on the validity of logical conclusions. This emphasis cannot be regarded as a faulty approach to teaching mathematics; on the contrary, it is very important in nurturing students' mathematical thinking and professional ethics, and the students appreciate the quality of teaching of these courses. But to support them in overcoming the intrinsic difficulty of pure mathematics, the teachers of these courses should perhaps pay more attention to supporting understanding through computer modelling and visualization, and to increasing motivation by demonstrating the connection between theoretical results and practice (Kuznetsova, 2014).

Cluster 3: 'Successful courses'. This cluster brings together 18 academic disciplines: the humanities (English, History, Russian Language and Economics), basic (low-level) mathematical subjects (e.g., Algebra and Analytic Geometry, Discrete Math), professional (high-level) mathematical subjects (e.g., Stochastic Processes, Mathematical Methods and Models in Economics) and several subjects related to programming and information technology (e.g., Databases, Application Software). The cluster is characterized by high ratings for the quality of teaching (the variables Teacher's knowledge of the subject, Teaching skills and Impartial and fair assessment) and low values for shortcomings in course arrangement (the variables Lack of theory, Lack of practice and Shortage of textbooks). However, the teaching of the courses in this cluster can also be improved: for example, the value of Students' interest in the subject matter (77.80) is a few points below the 'good' threshold.

Thus, cluster analysis reveals the two ends of the spectrum of the Applied Mathematics programme, Programming (Cluster 1) and Pure Mathematics (Cluster 4), and helps determine directions for improving teaching. The cluster composition leads to the conclusion that course level and academic discipline had no impact on student evaluation of teaching: Clusters 1, 3 and 4, for example, include both low-level and high-level courses, and Clusters 2 and 3 include humanities, social science, mathematics and science courses. Furthermore, subject-matter difficulty and course workload also had no impact on student ratings: instructors of the most difficult and time-consuming courses, those in pure mathematics (Cluster 4), had the highest student ratings. This leads to the conclusion that the survey results may be regarded as robust. Thus, statistical analysis of the survey results allowed the factors influencing learning outcomes to be specified and led to the formulation of strategies for improving the educational process. It is no coincidence that many researchers (e.g., Cathcart et al., 2014; Golding & Adam, 2016; Marsh & Roche, 1997; McKeachie, 1997) agree that it is important not only to conduct a survey but also to use its results correctly.

4. Conclusion

Higher education reform has recently been under way in Russia. In these circumstances, it is important to know the opinion of students so that changes can lead to improvement and development. The lack of time-proven tools and of experience in conducting surveys and studies on this subject makes the problem difficult to solve.
However, our survey showed that feedback results, analysed by means of statistical techniques, can help to detect problematic issues in the learning process and specify the needs and expectations of students. The article presents an analysis of a survey in which 66 students assessed the 34 academic disciplines that make up the educational programme in Applied Mathematics, using a 100-point scale familiar to them. The questionnaire included 10 parameters describing the format of the course of study, the adequacy of educational materials provided for students, the quality of teaching, specific features of the content of the academic discipline and learning outcomes. Dissatisfaction with the process of learning (Need for a change) and Students' level of knowledge were selected as indicators of learning outcomes. To study the results of the survey, correlation, factor, regression and cluster analyses were used. The results of the correlation analysis are largely consistent with those obtained by researchers internationally: in assessing a course, students assess, first of all, teaching skills and only then the specific features of the subject matter. The factor analysis of the survey data identified three factors: shortcomings in course arrangement and gaps in teaching skills, a favourable moral atmosphere in the classroom and the intrinsic difficulty of the subject; the total share of explained variance is 89%. The regression models showed that the extracted factors explain 90% of the variance of the variable Need for a change and 59% of the variance of the variable Students' level of knowledge. Factor 1, shortcomings in course arrangement and gaps in teaching skills, has the greatest impact on the dissatisfaction parameter Need for a change, and Factor 2, favourable moral climate and creative atmosphere in the classroom, has the greatest impact on the teaching quality parameter Students' level of knowledge. The cluster analysis clearly identified the courses requiring corrective action ('severe problem', 'problem' and 'difficult' courses) and suggested improvement strategies for each group. To summarize, we can say that, in general, students are satisfied with the educational programme. At the same time, the student survey data helped us understand the essence of our teaching problems and identify ways to improve the teaching process.

Elena Kuznetsova is an Associate Professor of the Department of Applied Mathematics at the Lipetsk State Technical University, Russia. She graduated from Lomonosov Moscow State University, the Faculty of Computational Mathematics and Cybernetics, and received a PhD in differential equations from Voronezh State University. Her recent research interests are in teaching mathematics and its applications and the training of undergraduate students majoring in applied mathematics.

References

Abrami, P. C., Leventhal, L. & Perry, R. P. (1982) Educational seduction. Rev. Educ. Res., 52, 446–464.
d'Apollonia, S. & Abrami, P. C. (1997) Navigating student ratings of instruction. Am. Psychol., 52, 1198–1208.
Becker, B. E. & Cardy, R. L. (1986) Influence of halo error on appraisal effectiveness: a conceptual and empirical reconsideration. J. Appl. Psychol., 71, 662–671.
Benton, S. L., Duchon, D. & Pallett, W. H. (2013) Validity of student self-reported ratings of learning. Assess. Eval. High. Educ., 38, 377–388.
Benton, S. L. & Cashin, W. E. (2014) Student ratings of instruction in college and university courses. Higher Education: Handbook of Theory and Research (M. B. Paulsen ed.), vol. 29. Dordrecht, the Netherlands: Springer, pp. 279–326.
Butler, S. & Roesel, K. (1989) The influence of dress on students' perceptions of teacher characteristics. Clothing Textiles Res. J., 7, 57–59.
Cathcart, A., Greer, D. & Neale, L. (2014) Learner-focused evaluation cycles: facilitating learning using feedforward, concurrent and feedback evaluation. Assess. Eval. High. Educ., 39, 790–802.
Chen, Y. & Hoshower, L. B. (2003) Student evaluation of teaching effectiveness: an assessment of student perception and motivation. Assess. Eval. High. Educ., 28, 71–88.
Chowdhary, U. (1988) Instructor's attire as a biasing factor in students' ratings of an instructor. Clothing Textiles Res. J., 6, 17–22.
Coffey, M. & Gibbs, G. (2001) The evaluation of the Student Evaluation of Educational Quality Questionnaire (SEEQ) in UK higher education. Assess. Eval. High. Educ., 26, 89–93.
Crumbley, L., Henry, B. K. & Kratchman, S. H. (2001) Students' perceptions of the evaluation of college teaching. Qual. Assur. Educ., 9, 197–207.
Curtis, D. D. & Keeves, J. P. (2000) The Course Experience Questionnaire as an institutional performance indicator. Int. Educ. J., 1, 73–82.
Darwin, S. (2016) The emergence of student evaluation in higher education. Student Evaluation in Higher Education (S. Darwin ed.). Switzerland: Springer International Publishing, pp. 1–11.
Elliott, K. M. & Shin, D. (2002) Student satisfaction: an alternative approach to assessing this important concept. J. High. Educ. Policy Manag., 24, 197–209.
Gibson, A. (2010) Measuring business student satisfaction: a review and summary of the major predictors. J. High. Educ. Policy Manag., 32, 251–259.
Golding, C. & Adam, L. (2016) Evaluate to improve: useful approaches to student evaluation. Assess. Eval. High. Educ., 41, 1–14.
Gorbatenko, A. S. (1990) Anketa 'prepodavatel' glazami studentov' glazami social'nogo psihologa, prepodavatelja vuza [About the questionnaire 'The teacher through the eyes of students' through the eyes of a social psychologist, a university teacher]. Vop. Psikhol+, 1, 184–186. Russian.
Grace, D., Weaven, S., Bodey, K., Ross, M. & Weaven, K. (2012) Putting student evaluations into perspective: the course experience quality and satisfaction model (CEQS). Stud. Educ. Eval., 38, 35–43.
Greenwald, A. G. & Gillmore, G. M. (1997) Grading leniency is a removable contaminant of student ratings. Am. Psychol., 52, 1209–1217.
Griffin, P., Coates, H., McInnis, C. & James, R. (2003) The development of an extended course experience questionnaire. Qual. High. Educ., 9, 259–266.
Hackett, G. & Betz, N. E. (1989) An exploration of the mathematics self-efficacy/mathematics performance correspondence. J. Res. Math. Educ., 20, 261–273.
Kulik, J. A. (2001) Student ratings: validity, utility, and controversy. New Direct. Inst. Res., 109, 9–25.
Kuznetsova, E. V. (2014) Formirovanie ponjatij v obuchenii stohastike [Formation of concepts in teaching stochastics]. Innovacii v obrazovanii, 7, 20–29. Russian.
Levchenko, E. V. (1990) O psihologicheskih problemah, voznikajushhih pri provedenii oprosa 'Prepodavatel' glazami studenta' [On the psychological problems encountered in the survey 'The teacher through the eyes of students']. Vop. Psikhol+, 6, 181–182. Russian.
Mann, E. L. (2006) Creativity: the essence of mathematics. J. Educ. Gifted, 30, 236–260.
Marsh, H. W. (1982) SEEQ: a reliable, valid, and useful instrument for collecting students' evaluations of university teaching. Br. J. Educ. Psychol., 52, 77–95.
Marsh, H. W. & Roche, L. A. (1997) Making students' evaluations of teaching effectiveness effective: the critical issues of validity, bias, and utility. Am. Psychol., 52, 1187–1197.
McKeachie, W. J. (1997) Student ratings: the validity of use. Am. Psychol., 52, 1218–1225.
Middleton, J. A. & Spanias, P. A. (1999) Motivation for achievement in mathematics: findings, generalizations, and criticisms of the research. J. Res. Math. Educ., 30, 65–88.
Orsini, J. L. (1988) Halo effects in student evaluations of faculty: a case application. J. Mark. Educ., 10, 38–45.
Peer, E. & Babad, E. (2014) The Doctor Fox research (1973) re-revisited: 'educational seduction' ruled out. J. Educ. Psychol., 106, 36–45.
Ramsden, P. (1991) A performance indicator of teaching quality in higher education: The Course Experience Questionnaire. Stud. High. Educ., 16, 129–150.
Remedios, R. & Lieberman, D. A. (2008) I liked your course because you taught me well: the influence of grades, workload, expectations and goals on students' evaluations of teaching. Br. Educ. Res. J., 34, 91–115.
Richardson, J. T. (1994) A British evaluation of the course experience questionnaire. Stud. High. Educ., 19, 59–68.
Siegle, D. & McCoach, D. B. (2007) Increasing student mathematics self-efficacy through teacher training. J. Adv. Acad., 18, 278–312.
Spencer, K. J. & Schmelkin, L. P. (2002) Student perspectives on teaching and its evaluation. Assess. Eval. High. Educ., 27, 397–409.
Spooren, P. & Mortelmans, D. (2006) Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educ. Stud., 32, 201–214.
Theall, M. & Franklin, J. (2001) Looking for bias in all the wrong places: a search for truth or a witch hunt in student ratings of instruction? New Direct. Inst. Res., 109, 45–56.
Ware, J. E., Jr. & Williams, R. G. (1975) The Dr. Fox effect: a study of lecturer effectiveness and ratings of instruction. Acad. Med., 50, 149–156.
Wilson, K. L., Lizzio, A. & Ramsden, P. (1997) The development, validation and application of the Course Experience Questionnaire. Stud. High. Educ., 22, 33–53.
Zapesotsky, A. S. (2007) Prepodavatel' glazami studenta. Ob izuchenii mnenij studentov o kachestve pedagogicheskoj dejatel'nosti prepodavatelja [The teacher through students' eyes. On the study of students' views of the quality of teachers' pedagogical activity]. Vysshee obrazovanie segodnja, 9, 28–32. Russian.
Zelenev, I. R. & Tumanov, S. V. (2012) Ob ocenke kachestva prepodavanija v vuze v kontekste vosprijatija studentami svoih prepodavatelej [On assessing the quality of teaching at a university in the context of students' perception of their teachers]. Vysshee obrazovanie v Rossii, 11, 99–105. Russian.
Zelentsov, B. (1999) Studenty o prepodavatele: metodika oprosa [Students' assessment of teachers: a survey methodology]. Vysshee obrazovanie v Rossii, 6, 44–47. Russian.

© The Author(s) 2018. Published by Oxford University Press on behalf of The Institute of Mathematics and its Applications. All rights reserved.

Evaluation and interpretation of student satisfaction with the quality of the university educational program in applied mathematics

Loading next page...
 
/lp/ou_press/evaluation-and-interpretation-of-student-satisfaction-with-the-quality-UPEYCZxeZP
Publisher
Institute of Mathematics and its Applications
Copyright
© The Author(s) 2018. Published by Oxford University Press on behalf of The Institute of Mathematics and its Applications. All rights reserved. For permissions, please email: journals.permissions@oup.com
ISSN
0268-3679
eISSN
1471-6976
D.O.I.
10.1093/teamat/hry005
Publisher site
See Article on Publisher Site

Abstract

Abstract The objective of this article is to study undergraduate student satisfaction with the educational programme in Applied Mathematics in terms of identification of areas for teaching process improvement. Currently, there are no instruments for student evaluation of teaching accepted by the teaching community in Russia, and student feedback implementation is carried out only at certain universities. A questionnaire was developed for the purposes of course evaluation. It includes 10 parameters characterizing the format of the course, the adequacy of educational materials provided for students, the quality of teaching, specific features of course content and learning outcomes. Sixty-six final-year undergraduates evaluated 34 courses that comprise the Applied Mathematics programme based on their memories of past events with the help of a 100-point scale familiar to them. The results were studied using the following statistical methods: correlation, factor, regression and cluster analysis. As a result of the research, three factors were singled out. They are as follows: shortcomings in course arrangement and gaps in teaching skills, moral environment in the classroom and the intrinsic difficulty of the subject. They explain 90% of the variance of the student dissatisfaction parameter Need for a change and 59% of the variance of the teaching quality parameter Students’ level of knowledge. Cluster analysis allowed to single out items requiring corrective actions (‘severe problem’, ‘problem’ and ‘difficult’ courses) and to suggest strategies for their improvement. 1. Introduction Today, it is beyond argument that innovative education development is not possible without students’ active participation in the learning process. That is why the theme of student evaluation of teaching retains value. Indeed, the interconnection between student satisfaction and student motivation, student retention and recruiting efforts (Grace et al., 2012) proves that the role of student evaluations in development of higher education as a whole as well as of individual universities has been increasing. That is why, despite the large number of publications in existence (see, for example, review articles by Benton & Cashin, 2014; Kulik, 2001), student satisfaction remains an important research area. Darwin (2016) claims that the emergence of more developed systems of student evaluation at the end of the 1960s was in response to the growth of student unrest and general dissatisfaction with the quality of education. Similarly, it may be assumed that monitoring of student satisfaction and student evaluations of teaching that appeared in Russia in the mid-1980s was a result of political and socio-economic reforms in the country. In the late 1980s, the Ministry of Higher and Secondary Education of Russia introduced a questionnaire called ‘The Teacher through the Eyes of Students’ (TeES) at universities. The questionnaire was widely criticized at the time (e.g., Gorbatenko, 1990; Levchenko, 1990) for the lack of evidence of its validity and utility and for the incorrect ways in which its results were interpreted and applied. Carrying out regular questionnaire surveys at universities gradually ceased. When Russia joined the Bologna process, the problem of evaluating the quality of education and student satisfaction became urgent again. However, questionnaires are developed and used only by few universities today (e.g., Zapesotsky, 2007; Zelenev & Tumanov, 2012). 
There is also a large number of studies that prove the validity and utility of student evaluations; reviews and analyses of such publications may be found in Benton & Cashin (2014) and Kulik (2001). Nonetheless, many studies criticize student surveys and contest their results, claiming that numerous factors bias student evaluation of teaching. These arguably include the following:

- Grading leniency and low workload (Greenwald & Gillmore, 1997): instructors who give students higher grades and a lower workload receive higher ratings;
- The halo effect (Becker & Cardy, 1986; Orsini, 1988): certain qualities (attractiveness, high administrative or other status) raise student evaluations of the quality of teaching;
- The Dr. Fox effect, or ‘educational seduction’ (Abrami et al., 1982; Ware & Williams, 1975): a charismatic teacher receives higher ratings for the quality of teaching due to external factors;
- The impact of teachers’ clothing style on student opinion about the quality of teaching (Butler & Roesel, 1989; Chowdhary, 1988).

It should be noted, however, that some of the results listed above were refuted in later studies (Benton & Cashin, 2014; Peer & Babad, 2014; Remedios & Lieberman, 2008; Spooren & Mortelmans, 2006; Theall & Franklin, 2001). For example, Peer & Babad (2014), in their study of the Dr. Fox effect, reproduced the 1973 experiment in full, using the original video of the lecture, but added one more question to the questionnaire: whether the students had learned anything from the lecture they heard. The students replied in the negative, despite liking the lecture. In other words, ‘students indeed enjoyed the entertaining lecture, but they had not been educationally seduced into believing they had learned’ (Peer & Babad, 2014).

Analysis of the research both of supporters of student surveys (for example, Benton & Cashin, 2014; Kulik, 2001; Theall & Franklin, 2001) and of the approach’s critics (for example, Crumbley et al., 2001) leads to the conclusion that survey results must be used with care. Notably, both camps agree that student ratings should not be the only factor in evaluating a teacher. Nonetheless, Zapesotsky (2007) seems correct in believing that if a negative rating persists and is repeated year after year, a meaningful analysis of the situation is required.

The objective of this study is to evaluate the satisfaction of undergraduates studying Applied Mathematics with the quality of the educational programme for the purposes of its further improvement. To this end, it is necessary to select an instrument (a questionnaire), to conduct a survey among students, to study the results using statistical methods, to interpret them and, on that basis, to identify the corrective measures required.

2. Methodology

The contested nature of student surveys as an instrument for evaluating teacher competence is due, to a certain degree, to the fact that survey results depend not only on the teachers but also on the students: on their intellectual development and their attitude towards studying and towards their future profession. That is why we invited final-year students to participate in the survey.
These students are in a position to evaluate all 34 courses in the programme and bring to that process not only learning experience but also field experience gained during internships. In a few months, they will finish studying and become our colleagues. We believe that they are capable of a more accurate evaluation of teacher competence, which agrees with the opinion of Theall & Franklin (2001) that possibly ‘beginning students do not have sufficient depth of understanding to accurately rate the instructor’s knowledge of subject matter’.

Spencer & Schmelkin (2002) claim that students are usually willing to do evaluations and provide feedback. In our opinion, however, student attitude towards survey participation is yet another important factor that influences survey results. This assumption was confirmed when studying publications on student ratings. Benton & Cashin (2014), for example, believe that to improve validity, ‘the instructor should take time to encourage students to take the process seriously’. In formulating motivation for students, it is useful to consider the results of a study by Chen & Hoshower (2003). They found that the desire to raise the level of teaching and to improve course content and format by participating in the survey was among the most powerful motivators for students, whereas the use of ratings for teachers’ promotion and salary decisions was least important to them. That is why we suggest that it would be more accurate to shift the emphasis from ‘the teacher in the eyes of students’ to ‘the learning process in the eyes of students’.

Choosing an instrument for the survey was quite a challenging issue. As is well known, there is extensive experience outside Russia (dating back to the 1970s–1980s) of conducting student surveys to evaluate the quality of teaching, and there are many questionnaires whose characteristics have been examined in numerous studies (see Coffey & Gibbs, 2001; Curtis & Keeves, 2000; Griffin et al., 2003; Marsh, 1982; Marsh & Roche, 1997; Ramsden, 1991; Richardson, 1994; Wilson et al., 1997), such as the Course Experience Questionnaire (CEQ) and the Students’ Evaluation of Educational Quality questionnaire. To this day, the practice of student evaluation of education, and research in this area, is not widespread in Russia. As a result, there are no questionnaires accepted by the education community, which makes organizing surveys difficult. For example, a student survey was conducted at our university several years ago, using a 25-question version of the CEQ as the basis. Analysis of the results showed a large number of omissions in student answers, which prevented any generalizations or conclusions from being drawn. Unfortunately, the survey organizers did not take steps to discover the reasons for such results: whether they lay in defects of the questionnaire or in poor organization of the survey. We can only conclude that it is not enough simply to offer students a translation of a well-known questionnaire; using instruments like the CEQ in Russia requires adaptation and research.

The 1980s TeES questionnaire contains 18 items and presupposes answers on a 9-point Likert-type scale. Among critical comments, there was an indication that this questionnaire contains an excessive number of subjective questions (Zelentsov, 1999).
Gorbatenko (1990) noted, as a disadvantage of the TeES questionnaire, that students had to give answers using a 9-point scale that they were not used to. Up-to-date revisions of this questionnaire used for internal monitoring at some Russian universities are usually abridged versions (Zapesotsky, 2007; Zelentsov, 1999). No studies of the psychometric properties of the proposed tests have been conducted; in other words, there are no grounds for concluding that this instrument has advantages over the CEQ for surveys at Russian universities.

Since our students have no experience of participating in such surveys, we proposed that discussing the purposes, procedures and instruments of the survey would allow them to feel like active contributors to the process and to take a more responsible approach to the evaluation, which would have a positive impact on the validity of the results. Levchenko (1990), Zelentsov (1999) and Zapesotsky (2007) also note that students do not always understand, or take an interest in, the aspects offered for evaluation in the TeES questionnaire. Taking this observation into account, students were asked to discuss the items of the questionnaire to which they would have to respond.

Originally, the students were given a choice of two questionnaires: the CEQ (25 items) and the TeES (18 items). In the course of the discussion, it was found that the students believed both questionnaires to be too long for evaluating, at the same time, the 34 courses that comprise the education programme. Therefore, the questionnaire needed to contain the fewest possible items, given the wide range of factors that the student ratings had to reflect.

Next, the evaluation scale had to be selected. The traditional scale of one to five is used for grading achievements in secondary and higher schools; a scale of 1 to 100 is used at our university, which is translated into the 1-to-5 scale for the final grades in the state-approved diploma. In discussing the questionnaire, the students were offered a choice of evaluation scales: the 9-point Likert scale, the 5-point scale or the 100-point scale that they were used to. The students chose the latter, which agrees with the opinion of Gorbatenko (1990) and Levchenko (1990) that a more familiar scale is preferable. It should also be noted that evaluating 34 courses requires effort, attention and time; therefore, the questionnaire had to contain the minimum number of items even though student ratings need to reflect a wide range of factors characterizing teaching (d’Apollonia & Abrami, 1997; Marsh & Roche, 1997).
To solve this problem, based on previous studies (Elliott & Shin, 2002; Gibson, 2010; Gorbatenko, 1990; Grace et al., 2012; Levchenko, 1990; Zelentsov, 1999), the following characteristics of a course and their parameters were identified:

- The format of the course, as the balance of hours designated for studying the theory and for practical studies (variables Lack of theory and Lack of practice);
- The adequacy of educational materials provided for students (variable Shortage of textbooks);
- The quality of teaching (variables Teacher’s knowledge of the subject, Teaching skills and Impartial and fair assessment);
- Specific features of the content of the academic discipline (variables Students’ interest in the subject matter and Level of challenge);
- Results of studying (variables Need for a change and Students’ level of knowledge).

It was necessary to select the minimum number of variables giving the best description of the quality of teaching. The variable Teacher’s knowledge of the subject reflects how students evaluate a teacher’s breadth of coverage of the subject: giving the background of concepts and ideas, discussing present-day achievements and applications of scientific results, answering students’ questions, and the teacher’s enthusiasm for the subject. The variable Teaching skills shows how students evaluate a teacher’s clarity of presentation and explanation and his/her enthusiasm for teaching. The students themselves suggested including the Impartial and fair assessment item, which qualifies the teacher’s objectivity and integrity in grading. It turned out that this aspect of the teacher’s personality has ethical overtones for the students and is associated with mutual respect between students and teachers and a healthy moral atmosphere in the classroom. We believe this demonstrates the absence in this group of students of the grading-leniency effect, whereby students give higher ratings to teachers who give them higher grades. As a result of the discussion, the three above-mentioned variables, reflecting such characteristics of an instructor as depth of knowledge of the subject, teaching skills and personal qualities, were chosen.

In addition, it was important to identify, from the survey results, the areas in which changes are needed to improve the teaching process. Therefore, taking into account that student satisfaction is an emotive variable (Grace et al., 2012), the variable Need for a change was chosen as an indicator of dissatisfaction.

The students’ knowledge is the most significant indicator of the quality of teaching. The students suggested that a subjective self-rating of their knowledge of the course (variable Students’ level of knowledge) be used as an indicator of educational effectiveness instead of the actual exam grades. The suggestion was adopted, as there are studies supporting the validity of student self-reported ratings (for example, Benton et al., 2013). Analysis of the obtained values of Students’ level of knowledge showed that students tend to rate their knowledge lower than their examination scores. The value of this variable, like that of the others, is based on students’ memories of past events.

Answers to the questionnaire could be given either on a paper form or in an Excel table. The survey was conducted anonymously and on a voluntary basis.
Sixty-eight fourth-year students of Applied Mathematics at the Lipetsk State Technical University took part in the survey; two questionnaires were excluded from the analysis because they were not filled out completely. Based on the responses of the 66 remaining participants, an aggregate table was drawn up: the averaged evaluations for each of the above-mentioned variables were calculated for each of the 34 courses. Based on this table, correlation, factor, regression and cluster analyses were carried out using procedures implemented in the STATISTICA software package.
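The paper does not spell out the aggregation step itself, but it is straightforward. The sketch below is a minimal illustration in Python, not the authors’ STATISTICA workflow; the column names (`course` and the snake_case rating variables) are hypothetical.

```python
# Minimal sketch of building the aggregate table: one row per (student, course)
# response with the 10 ratings on the 100-point scale -> mean rating per course.
# Column names are illustrative assumptions, not taken from the paper.
import pandas as pd

RATING_VARS = [
    "lack_of_theory", "lack_of_practice", "shortage_of_textbooks",
    "teacher_knowledge", "teaching_skills", "impartial_assessment",
    "interest_in_subject", "level_of_challenge",
    "need_for_a_change", "level_of_knowledge",
]

def build_aggregate_table(responses: pd.DataFrame) -> pd.DataFrame:
    """Return the 34 x 10 table of mean ratings, one row per course."""
    # Discard incomplete questionnaires, as was done with 2 of the 68 returned.
    complete = responses.dropna(subset=RATING_VARS)
    return complete.groupby("course")[RATING_VARS].mean()

# agg = build_aggregate_table(responses)   # used by the later sketches
```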
3. Results and discussion

As mentioned before, the purpose of the study is to analyse student satisfaction with the quality of the education programme and to identify strategies to improve it. Of course, every teacher can become acquainted with the students’ opinions about the subject they teach, as reflected in the aggregate table, and use this information to improve their teaching. However, in order to evaluate the quality of the educational programme as a whole and to develop a plan of corrective measures, a statistical analysis of the survey data is required. Let us consider the results of the statistical methods in detail.

3.1. Correlation analysis results

The results of the correlation analysis are presented in Table 1.

Table 1. Correlation matrix

| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1. Lack of theory | 1.00 | 0.88* | 0.66* | −0.82* | −0.84* | −0.55* | −0.22 | 0.05 | 0.89* | −0.39* |
| 2. Lack of practice | | | 0.68* | −0.64* | −0.76* | −0.45* | −0.09 | 0.07 | 0.76* | −0.46* |
| 3. Shortage of textbooks | | | | −0.55* | −0.46* | −0.38* | −0.05 | 0.26 | 0.53* | −0.32 |
| 4. Teacher’s knowledge of the subject | | | | | 0.91* | 0.65* | 0.55* | 0.24 | −0.89* | 0.41* |
| 5. Teaching skills | | | | | | 0.66* | 0.54* | 0.18 | −0.92* | 0.59* |
| 6. Impartial and fair assessment | | | | | | | 0.67* | −0.16 | −0.75* | 0.65* |
| 7. Students’ interest in the subject matter | | | | | | | | 0.08 | −0.46* | 0.69* |
| 8. Level of challenge | | | | | | | | | −0.12 | −0.32 |
| 9. Need for a change | | | | | | | | | | −0.49* |
| 10. Students’ level of knowledge | | | | | | | | | | |

Note. * p < .05.

As expected, there is a positive correlation among the variables describing the quality of teaching (Teacher’s knowledge of the subject, Teaching skills and Impartial and fair assessment). The closest connection is between Teacher’s knowledge of the subject and Teaching skills. Researchers’ opinions as to the reasons vary (Theall & Franklin, 2001; Ware & Williams, 1975). At the same time, one cannot deny that a teacher who has deeper knowledge has more opportunities to present the learning material to students effectively. If the results are interpreted with care, the conclusion that ‘better teachers receive higher ratings’ (Spooren & Mortelmans, 2006) has some robustness. In addition, the significant negative correlation between Need for a change (an indicator of student dissatisfaction with the course of study) and Students’ level of knowledge (an indicator of the quality of education) was expected.

The ratio of the number of lectures to the number of classes, laboratories and practicals for each course is set when the education programme is created; however, it can be adjusted when needed. The variable Lack of theory reflects the students’ judgement about the sufficiency of the number of lecture hours; the variable Lack of practice, about the sufficiency of seminars, practical classes and laboratories. The university library service procures textbooks and study guides according to department requirements. Therefore, the significant positive correlation between the variables describing course format (Lack of theory and Lack of practice) and the variable Shortage of textbooks, describing the shortage of study materials, was an unexpected result. Another unexpected result was the significant negative correlation between these variables and all variables describing instructor competence (Teacher’s knowledge of the subject, Teaching skills, Impartial and fair assessment). In other words, it can be assumed that the variables Lack of theory, Lack of practice and Shortage of textbooks largely reflect student evaluation of such instructor qualities as the ability to use the time allotted to a course effectively and skill in working with study materials.

The intrinsic difficulty of the subject (variable Level of challenge) is the only indicator not correlated with any other. We can assume that it is an internal characteristic of a course and that the difficulty of the material does not affect student judgements about teacher skills. At the same time, the positive correlation between Impartial and fair assessment (an indicator of a healthy moral atmosphere and mutual respect between teachers and students) and Students’ interest in the subject matter suggests that interest in the subject does not depend on the academic discipline being studied but on how well the teacher can motivate and inspire students. These results are consistent with Kulik (2001): ‘The correlation between ratings and achievement was high for items involving instructor skill and for those measuring teacher and course organisation. Correlation coefficients were ... near zero for items dealing with course difficulty’.
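As an illustration, the pairwise correlations of Table 1 with their significance flags could be reproduced roughly as follows. This is a sketch, not the authors’ STATISTICA procedure; `agg` is the hypothetical aggregate table from the earlier sketch.

```python
# Sketch: pairwise Pearson correlations over the 34 course-level means,
# flagging coefficients significant at p < .05 (cf. Table 1).
from itertools import combinations

from scipy.stats import pearsonr

def correlation_table(agg):
    for a, b in combinations(agg.columns, 2):
        r, p = pearsonr(agg[a], agg[b])
        print(f"{a:>22} vs {b:<22} r = {r:+.2f}{'*' if p < 0.05 else ''}")
```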
The existence of a significant correlation between the variables describing educational results (Need for a change and Students’ level of knowledge) and all variables describing the quality of teaching is one of the indicators of the validity of the survey results (Kulik, 2001; Theall & Franklin, 2001).

3.2 Factor analysis results

Using principal components analysis followed by a varimax rotation, three factors were singled out. The total share of explained variance was 0.892: that is, the three factors together explain 89.2% of the variance. Taking into account the factor loadings shown in Table 2, the derived factors can be interpreted as follows.

Table 2. Factor analysis results (factor loadings, varimax)

| Variable | Factor 1 | Factor 2 | Factor 3 |
|---|---|---|---|
| Lack of theory | 0.93* | −0.22 | 0.01 |
| Lack of practice | 0.93* | −0.06 | 0.06 |
| Shortage of textbooks | 0.80* | 0.01 | 0.35 |
| Teacher’s knowledge of the subject | −0.74* | 0.55 | 0.26 |
| Teaching skills | −0.77* | 0.53 | 0.23 |
| Impartial and fair assessment | −0.40 | 0.81* | −0.22 |
| Students’ interest in the subject matter | −0.02 | 0.95* | 0.08 |
| Level of challenge | 0.01 | 0.01 | 0.97* |
| Expl. Var | 3.69 | 2.19 | 1.24 |
| Prp. Totl | 0.46 | 0.27 | 0.15 |

Note. Marked loadings are > 0.70.

Factor 1, shortcomings in course arrangement and gaps in teaching skills (lack of theory, practice and textbooks; weak teaching skills and poor presentation of the educational material), accounted for 46.2% of the total observed variance in the data. Factor 2, favourable moral climate (impartial and fair assessment and interest in the subject, i.e., a creative atmosphere), accounted for 27.4%. Factor 3, the intrinsic difficulty of the subject, accounted for 15.6%.

The consolidation of the variables Impartial and fair assessment and Students’ interest in the subject matter into one factor can be viewed as confirming the assumption made during the correlation analysis: the students’ interest in the subject matter depends on the instructor’s ability to establish a positive, creative environment in the classroom.
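A hedged sketch of this stage (principal components followed by a varimax rotation) is given below. The varimax routine is a standard textbook implementation, not code from the paper; the eight predictor variables are assumed to be named as in the earlier sketches, and the factor scores are a least-squares approximation.

```python
# Sketch: principal components on the 8 standardized predictors, followed by
# a textbook varimax rotation of the loadings (cf. Table 2).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

PREDICTOR_VARS = [
    "lack_of_theory", "lack_of_practice", "shortage_of_textbooks",
    "teacher_knowledge", "teaching_skills", "impartial_assessment",
    "interest_in_subject", "level_of_challenge",
]

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a (variables x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(loadings.T @ (
            rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation = u @ vt
        if s.sum() < var * (1 + tol):   # stop when the criterion stalls
            break
        var = s.sum()
    return loadings @ rotation

X = StandardScaler().fit_transform(agg[PREDICTOR_VARS])   # 34 courses x 8 vars
pca = PCA(n_components=3).fit(X)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
rotated = varimax(loadings)
# Least-squares factor scores for the rotated solution (used below).
scores = X @ rotated @ np.linalg.inv(rotated.T @ rotated)
```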
3.3 Regression analysis results

Two models were built. In the first, Need for a change was the dependent variable, considered as an indicator of student dissatisfaction with the learning process; in the second, Students’ level of knowledge was considered as an indicator of teaching quality. The factors specified in the previous stage of the analysis served as independent variables: Factor 1, shortcomings in course arrangement and gaps in teaching skills; Factor 2, favourable moral climate; and Factor 3, the intrinsic difficulty of the subject. The regression summary for the first model is presented in Table 3.

Table 3. Regression summary for dependent variable Need for a change

| | B | Std. Err. | t(30) | p-level |
|---|---|---|---|---|
| Intercept | 16.28 | 0.59 | 27.62 | 0.000 |
| FACTOR 1 | 8.69 | 0.60 | 14.52 | 0.000 |
| FACTOR 2 | −5.53 | 0.59 | −9.25 | 0.000 |
| FACTOR 3 | −1.38 | 0.59 | −2.30 | 0.028 |

Note. R² = 0.909; adjusted R² = 0.900.

All factors are statistically significant (p < 0.05), and the adjusted R² value is about 0.900; that is, the equation explains about 90% of the variance of the variable Need for a change. Factor 1 has the greatest impact on the dependent variable (b = 8.69): shortcomings in course arrangement and gaps in teaching skills increase the need for changes, i.e., reduce satisfaction with the learning process. Next comes Factor 2 (b = −5.53): a favourable moral climate reduces the need for change, i.e., contributes to student satisfaction. Finally, the impact of Factor 3, the intrinsic difficulty of the subject (b = −1.38), can be interpreted as follows: all other things being equal (the organization of the educational process and the moral climate), studying a difficult subject causes less student dissatisfaction. This means that the students recognized and appreciated the challenge of the intrinsic difficulty of mathematics.
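For illustration, each of the two models could be fitted as an ordinary least-squares regression of the outcome on the three factor scores; with 34 courses and four coefficients this yields the t(30) statistics reported in Tables 3 and 4. The `scores` matrix is the hypothetical one from the factor analysis sketch.

```python
# Sketch: OLS of an outcome variable on the three factor scores.
# With 34 observations and 4 parameters, residual df = 30, matching t(30).
import statsmodels.api as sm

def fit_outcome(scores, y):
    X = sm.add_constant(scores)   # intercept + FACTOR 1..3
    model = sm.OLS(y, X).fit()
    print(model.summary())        # coefficients, std. errors, t, p, R-squared
    return model

model_1 = fit_outcome(scores, agg["need_for_a_change"].to_numpy())
model_2 = fit_outcome(scores, agg["level_of_knowledge"].to_numpy())
```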
The regression summary for the second model is shown in Table 4.

Table 4. Regression summary for dependent variable Students’ level of knowledge

| | B | Std. Err. | t(30) | p-level |
|---|---|---|---|---|
| Intercept | 72.72 | 0.77 | 94.31 | 0.000 |
| FACTOR 1 | −2.11 | 0.78 | −2.70 | 0.011 |
| FACTOR 2 | 4.61 | 0.78 | 5.89 | 0.000 |
| FACTOR 3 | −2.20 | 0.78 | −2.81 | 0.009 |

Note. R² = 0.624; adjusted R² = 0.587.

All the factors in this model are also statistically significant (p < 0.05), and the adjusted R² is about 0.587; that is, the equation explains about 59% of the variance of the variable Students’ level of knowledge. Analysis of the model leads to the conclusion that Factor 2, favourable moral climate (b = 4.61), has the greatest, and a positive, impact on the level of knowledge. This is consistent with the opinion of researchers such as Hackett & Betz (1989), Mann (2006), Middleton & Spanias (1999) and Siegle & McCoach (2007), who hold that motivation, self-efficacy and a creative atmosphere are important for students’ progress in learning mathematics. The impact of Factor 3 (b = −2.20) ranks second: the more difficult the subject matter, the lower the level of knowledge, all other things being equal. Shortcomings in the organization of the educational process (Factor 1) rank third in impact on the quality of learning (b = −2.11): the worse the learning process is organized, the lower the level of knowledge.

Note that the coefficient of determination is lower in the second model than in the first. Perhaps this is because the level of knowledge is determined not only by external factors (organization of the educational process, moral climate and difficulty of the subject matter) but also by the personal qualities of the student: his/her abilities, level of intelligence and ability to organize his/her own learning effectively.

3.4 Cluster analysis results

Academic disciplines were clustered using the k-means algorithm. The following variables were selected for establishing the clusters: Lack of theory, Lack of practice, Shortage of textbooks, Teacher’s knowledge of the subject, Teaching skills, Impartial and fair assessment, Students’ interest in the subject matter and Level of challenge. The research showed that division into four clusters is the most informative. Analysis of variance showed significant differences between the cluster means for each variable involved in the classification (p < 0.05).
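A minimal sketch of this clustering step, under the same assumed variable names, might look as follows (the paper’s k-means was run in STATISTICA, so details such as initialization may differ):

```python
# Sketch: k-means over the 8 classification variables; means of all 10
# rating variables are then reported per cluster (cf. Table 5).
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
agg["cluster"] = kmeans.fit_predict(agg[PREDICTOR_VARS]) + 1   # clusters 1..4
print(agg.groupby("cluster")[RATING_VARS].mean().round(2))
```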
The average values for each cluster are shown in Table 5.

Table 5. Mean values of the variables for each cluster

| Variable | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4 |
|---|---|---|---|---|
| Lack of theory | 19.00* | 11.31 | 5.39 | 6.17 |
| Lack of practice | 34.26* | 15.19* | 6.50 | 5.34 |
| Shortage of textbooks | 21.99* | 13.33 | 10.39 | 12.84 |
| Teacher’s knowledge of the subject | 81.06* | 77.78* | 93.52* | 93.26* |
| Teaching skills | 64.06* | 61.51* | 86.54* | 86.19* |
| Impartial and fair assessment | 80.42* | 74.87* | 90.85* | 83.84* |
| Students’ interest in the subject matter | 75.78* | 47.65* | 77.80* | 68.66* |
| Level of challenge | 48.13* | 31.99* | 36.61* | 58.27* |
| Need for a change | 30.13* | 28.69* | 11.55 | 11.47 |
| Students’ level of knowledge | 68.32* | 61.76* | 77.55* | 69.71* |

Note. Marked means are > 15.00.

To interpret the results, the membership of each cluster had to be examined. Recall that students gave their assessments using the 100-point scale that is customary for them and is applied at our university to assess current academic performance. This internal 100-point scale is converted into the traditional Russian 5-point scale according to the following rule: 0–52 points, ‘unsatisfactory’; 53–79 points, ‘satisfactory’; 80–92 points, ‘good’; 93–100 points, ‘excellent’. It is reasonable to assume that students kept both scales in mind when answering the questionnaire.
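The conversion rule just stated is mechanical; for concreteness, it can be expressed as a small helper (a hypothetical utility, not part of the study’s toolkit):

```python
# The university's rule for converting the internal 100-point scale to the
# traditional Russian 5-point grade labels, as stated in the text above.
def to_five_point(score: float) -> str:
    if score <= 52:
        return "unsatisfactory"
    if score <= 79:
        return "satisfactory"
    if score <= 92:
        return "good"
    return "excellent"

# e.g. Teaching skills in Cluster 1: to_five_point(64.06) == "satisfactory"
```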
Cluster 1, ‘Problem courses’. This cluster combines five academic disciplines. Its members are characterized by a higher (compared to other clusters) Students’ interest in the subject matter (75.78), a high level of Teacher’s knowledge of the subject (81.06) and Impartial and fair assessment (80.42). However, Teaching skills was rated only as ‘satisfactory’ (64.06), and the students noted a lack of practice (34.26), theory (19.00) and textbooks (21.99). In addition, these subjects are quite difficult to study (48.13).

All courses in this cluster are related to software development: Algorithmic Languages and Programming, Computer Graphics, Computer Networks, Object-oriented Programming and Computer Architecture. These information-technology fields are among the most dynamically developing branches of science today. The courses are taught by quite young teachers and by teachers who combine teaching with work as programmers at large companies; as a rule, all of them have little teaching experience. They may need additional advice and training to improve their teaching skills.

Cluster 2, ‘Severe problem courses’. This cluster brings together four academic disciplines. Its specific feature is the low quality of teaching: Teacher’s knowledge of the subject (77.78), Teaching skills (61.51) and Impartial and fair assessment (74.87) were rated as ‘satisfactory’, and Students’ interest in the subject matter was rated as ‘unsatisfactory’ (47.65). The cluster includes Metrology, Physics, Philosophy and Sociology. This list suggests that the problems are related to the personal and professional qualities of the teachers. Although, as evidenced by Benton & Cashin (2014), Kulik (2001) and many other researchers, student ratings correlate with other measures of teaching effectiveness (instructor self-ratings and ratings by colleagues, administrators, alumni and trained observers), a further analysis of the reasons for student dissatisfaction is necessary; it should be carried out by the management of the department or faculty, for example on the basis of visiting lessons given by these teachers.

Cluster 4, ‘Difficult courses’. This cluster brings together seven academic disciplines. Its major characteristics are a high Level of challenge (58.27) and high quality-of-teaching parameters: Teaching skills and Impartial and fair assessment are ‘good’ (86.19 and 83.84, respectively) and Teacher’s knowledge of the subject is ‘excellent’ (93.26). Thus, the opinion that students give higher ratings to those who teach easier subjects was not confirmed. However, Students’ interest in the subject matter is quite low (68.66). This cluster includes Mathematical Analysis, Functional Analysis, Theory of Functions of a Complex Variable, Differential Equations with Partial Derivatives, Math Modelling, Methods of Optimisation and Mathematical Theory of Systems. These courses cover the most abstract branches of mathematics, which require a high level of theoretical thinking, and are therefore difficult to learn and understand. There is a risk that the external expression of mathematical facts may dominate their content, with memorizing formulas substituted for understanding; with the loss of meaning comes loss of interest in the subject. However, in the previous stages of our research we found that the variable Students’ interest in the subject matter correlates with the variables describing the quality of teaching (Teacher’s knowledge of the subject, Teaching skills and Impartial and fair assessment) but not with the variable Level of challenge. Therefore, the lack of students’ interest in the subjects included in Cluster 4 cannot be explained by the complexity of their content.
In teaching the areas of mathematics included in Cluster 4, much emphasis is placed, as a rule, on the exactness and strictness of proofs and on the validity of logical conclusions. This cannot be regarded as a faulty approach to teaching mathematics; on the contrary, it is very important in nurturing students’ mathematical thinking and professional ethics, and the students appreciate the quality of teaching of the courses in Cluster 4. But to support students in overcoming the intrinsic difficulty of pure mathematics, the teachers who run these courses should perhaps pay more attention to the process of understanding, with the help of computer modelling and visualization, and to increasing motivation by demonstrating the connection between theoretical results and practice (Kuznetsova, 2014).

Cluster 3, ‘Successful courses’. This cluster brings together 18 academic disciplines: the humanities (English, History, Russian Language and Economics), basic (low-level) mathematical subjects (e.g., Algebra and Analytic Geometry, Discrete Math), professional (high-level) mathematical subjects (e.g., Stochastic Processes, Mathematical Methods and Models in Economics) and several subjects related to programming and information technology (e.g., Databases, Application Software). The cluster is characterized by high quality-of-teaching parameters (Teacher’s knowledge of the subject, Teaching skills, Impartial and fair assessment) and low values of the course-arrangement shortcomings (Lack of theory, Lack of practice, Shortage of textbooks). Nevertheless, the teaching of the courses in this cluster can also be improved: for example, the value of Students’ interest in the subject matter (77.80) falls a few points short of a ‘good’ rating.

Thus, the cluster analysis reveals the two ends of the spectrum of the Applied Mathematics programme, Programming (Cluster 1) and Pure Mathematics (Cluster 4), and allows directions for teaching improvement to be determined. The composition of the clusters leads to the conclusion that course level and academic discipline had no impact on student evaluation of teaching: Clusters 1, 3 and 4, for example, include both low-level and high-level courses, and Clusters 2 and 3 include humanities, social science, mathematics and science courses. Furthermore, subject-matter difficulty and course workload also had no impact on student ratings: instructors of the most difficult and time-consuming courses, those in pure mathematics (Cluster 4), had the highest student ratings. This leads to the conclusion that the survey results may be regarded as robust.

Thus, statistical analysis of the survey results allowed the factors influencing learning outcomes to be specified and led to the formulation of strategies to improve the educational process. It is no coincidence that many researchers (e.g., Cathcart et al., 2014; Golding & Adam, 2016; Marsh & Roche, 1997; McKeachie, 1997) agree that it is important not only to conduct a survey but also to use its results correctly.

4. Conclusion

Higher education reform has been under way in Russia recently. In these circumstances, it is important to know the opinion of students so that changes can lead to improvement and development. The lack of time-proven tools and of experience in conducting surveys and studies on this subject makes the problem difficult to solve.
However, our survey showed that feedback results, analysed by means of statistical techniques, can help to detect problematic issues in the learning process and to specify the needs and expectations of students.

The article presents research on the results of a survey in which 66 students assessed the 34 academic disciplines that make up the educational programme in Applied Mathematics, using a 100-point scale familiar to them. The questionnaire included 10 parameters describing the format of the course of study, the adequacy of educational materials provided for students, the quality of teaching, specific features of the content of the discipline and learning outcomes. Dissatisfaction with the process of learning (Need for a change) and the Students’ level of knowledge were selected as indicators of learning outcomes. Correlation, factor, regression and cluster analyses were used to study the results.

The results of the correlation analysis largely coincide with those obtained by international scholars: in assessing a course, students assess, first of all, teaching skills and only then the specific features of the subject matter. The factor analysis of the survey data identified three factors: shortcomings in course arrangement and gaps in teaching skills, a favourable moral atmosphere in the classroom and the intrinsic difficulty of the subject; the total share of explained variance is 89%. The regression models showed that the extracted factors explain 90% of the variance of the variable Need for a change and 59% of the variance of the variable Students’ level of knowledge. Factor 1, shortcomings in course arrangement and gaps in teaching skills, has the greatest impact on the dissatisfaction parameter Need for a change, while Factor 2, favourable moral climate and creative atmosphere in the classroom, has the greatest impact on the quality-of-education parameter Students’ level of knowledge. The cluster analysis clearly identified the objects requiring corrective action (‘severe problem’, ‘problem’ and ‘difficult’ courses), and improvement methods were developed for each.

To summarize, we can say that, in general, students are satisfied with the educational programme. At the same time, the survey data helped us understand the essence of our teaching problems and identify ways to improve the teaching process.

Elena Kuznetsova is an Associate Professor of the Department of Applied Mathematics at the Lipetsk State Technical University, Russia. She graduated from the Faculty of Computational Mathematics and Cybernetics of Lomonosov Moscow State University and received a PhD in differential equations from Voronezh State University. Her recent research interests are in teaching mathematics and its applications and in training undergraduate students majoring in applied mathematics.

References

Abrami, P. C., Leventhal, L. & Perry, R. P. (1982) Educational seduction. Rev. Educ. Res., 52, 446–464.

d’Apollonia, S. & Abrami, P. C. (1997) Navigating student ratings of instruction. Am. Psychol., 52, 1198–1208.

Becker, B. E. & Cardy, R. L. (1986) Influence of halo error on appraisal effectiveness: a conceptual and empirical reconsideration. J. Appl. Psychol., 71, 662–671.
Benton, S. L., Duchon, D. & Pallett, W. H. (2013) Validity of student self-reported ratings of learning. Assess. Eval. High. Educ., 38, 377–388.

Benton, S. L. & Cashin, W. E. (2014) Student ratings of instruction in college and university courses. Higher Education: Handbook of Theory and Research (M. B. Paulsen ed.), vol. 29. Dordrecht, the Netherlands: Springer, pp. 279–326.

Butler, S. & Roesel, K. (1989) The influence of dress on students’ perceptions of teacher characteristics. Clothing Textiles Res. J., 7, 57–59.

Cathcart, A., Greer, D. & Neale, L. (2014) Learner-focused evaluation cycles: facilitating learning using feedforward, concurrent and feedback evaluation. Assess. Eval. High. Educ., 39, 790–802.

Chen, Y. & Hoshower, L. B. (2003) Student evaluation of teaching effectiveness: an assessment of student perception and motivation. Assess. Eval. High. Educ., 28, 71–88.

Chowdhary, U. (1988) Instructor’s attire as a biasing factor in students’ ratings of an instructor. Clothing Textiles Res. J., 6, 17–22.

Coffey, M. & Gibbs, G. (2001) The evaluation of the Student Evaluation of Educational Quality Questionnaire (SEEQ) in UK higher education. Assess. Eval. High. Educ., 26, 89–93.

Crumbley, L., Henry, B. K. & Kratchman, S. H. (2001) Students’ perceptions of the evaluation of college teaching. Qual. Assur. Educ., 9, 197–207.

Curtis, D. D. & Keeves, J. P. (2000) The Course Experience Questionnaire as an institutional performance indicator. Int. Educ. J., 1, 73–82.

Darwin, S. (2016) The emergence of student evaluation in higher education. Student Evaluation in Higher Education (S. Darwin ed.). Switzerland: Springer International Publishing, pp. 1–11.

Elliott, K. M. & Shin, D. (2002) Student satisfaction: an alternative approach to assessing this important concept. J. High. Educ. Policy Manag., 24, 197–209.

Gibson, A. (2010) Measuring business student satisfaction: a review and summary of the major predictors. J. High. Educ. Policy Manag., 32, 251–259.

Golding, C. & Adam, L. (2016) Evaluate to improve: useful approaches to student evaluation. Assess. Eval. High. Educ., 41, 1–14.

Gorbatenko, A. S. (1990) Anketa ‘prepodavatel’ glazami studenta’ glazami social’nogo psihologa — prepodavatelja vuza [About the questionnaire ‘The teacher through the eyes of students’ through the eyes of a social psychologist, a university teacher]. Vop. Psikhol., 1, 184–186. Russian.

Grace, D., Weaven, S., Bodey, K., Ross, M. & Weaven, K. (2012) Putting student evaluations into perspective: the course experience quality and satisfaction model (CEQS). Stud. Educ. Eval., 38, 35–43.

Greenwald, A. G. & Gillmore, G. M. (1997) Grading leniency is a removable contaminant of student ratings. Am. Psychol., 52, 1209–1217.

Griffin, P., Coates, H., McInnis, C. & James, R. (2003) The development of an extended course experience questionnaire. Qual. High. Educ., 9, 259–266.
Hackett, G. & Betz, N. E. (1989) An exploration of the mathematics self-efficacy/mathematics performance correspondence. J. Res. Math. Educ., 20, 261–273.

Kulik, J. A. (2001) Student ratings: validity, utility, and controversy. New Direct. Inst. Res., 109, 9–25.

Kuznetsova, E. V. (2014) Formirovanie ponjatij v obuchenii stohastike [Formation of concepts in teaching stochastics]. Innovacii v obrazovanii, 7, 20–29. Russian.

Levchenko, E. V. (1990) O psihologicheskih problemah, voznikajushhih pri provedenii oprosa ‘Prepodavatel’ glazami studenta’ [On the psychological problems encountered in the survey ‘The teacher through the eyes of students’]. Vop. Psikhol., 6, 181–182. Russian.

Mann, E. L. (2006) Creativity: the essence of mathematics. J. Educ. Gifted, 30, 236–260.

Marsh, H. W. (1982) SEEQ: a reliable, valid, and useful instrument for collecting students’ evaluations of university teaching. Br. J. Educ. Psychol., 52, 77–95.

Marsh, H. W. & Roche, L. A. (1997) Making students’ evaluations of teaching effectiveness effective: the critical issues of validity, bias, and utility. Am. Psychol., 52, 1187–1197.

McKeachie, W. J. (1997) Student ratings: the validity of use. Am. Psychol., 52, 1218–1225.

Middleton, J. A. & Spanias, P. A. (1999) Motivation for achievement in mathematics: findings, generalizations, and criticisms of the research. J. Res. Math. Educ., 30, 65–88.

Orsini, J. L. (1988) Halo effects in student evaluations of faculty: a case application. J. Mark. Educ., 10, 38–45.

Peer, E. & Babad, E. (2014) The Doctor Fox research (1973) re-revisited: ‘educational seduction’ ruled out. J. Educ. Psychol., 106, 36–45.

Ramsden, P. (1991) A performance indicator of teaching quality in higher education: The Course Experience Questionnaire. Stud. High. Educ., 16, 129–150.

Remedios, R. & Lieberman, D. A. (2008) I liked your course because you taught me well: the influence of grades, workload, expectations and goals on students’ evaluations of teaching. Br. Educ. Res. J., 34, 91–115.

Richardson, J. T. (1994) A British evaluation of the course experience questionnaire. Stud. High. Educ., 19, 59–68.

Siegle, D. & McCoach, D. B. (2007) Increasing student mathematics self-efficacy through teacher training. J. Adv. Acad., 18, 278–312.

Spencer, K. J. & Schmelkin, L. P. (2002) Student perspectives on teaching and its evaluation. Assess. Eval. High. Educ., 27, 397–409.

Spooren, P. & Mortelmans, D. (2006) Teacher professionalism and student evaluation of teaching: will better teachers receive higher ratings and will better students give higher ratings? Educ. Stud., 32, 201–214.

Theall, M. & Franklin, J. (2001) Looking for bias in all the wrong places: a search for truth or a witch hunt in student ratings of instruction? New Direct. Inst. Res., 109, 45–56.
Ware, J. E., Jr & Williams, R. G. (1975) The Dr. Fox effect: a study of lecturer effectiveness and ratings of instruction. Acad. Med., 50, 149–156.

Wilson, K. L., Lizzio, A. & Ramsden, P. (1997) The development, validation and application of the Course Experience Questionnaire. Stud. High. Educ., 22, 33–53.

Zapesotsky, A. S. (2007) Prepodavatel’ glazami studenta. Ob izuchenii mnenij studentov o kachestve pedagogicheskoj dejatel’nosti prepodavatelja [The teacher through the student’s eyes. On the study of students’ views on the quality of the teacher’s pedagogical activity]. Vysshee obrazovanie segodnja, 9, 28–32. Russian.

Zelenev, I. R. & Tumanov, S. V. (2012) Ob ocenke kachestva prepodavanija v vuze v kontekste vosprijatija studentami svoih prepodavatelej [An estimate of the quality of teaching at the university in the context of the students’ perception of their teachers]. Vysshee obrazovanie v Rossii, 11, 99–105. Russian.

Zelentsov, B. (1999) Studenty o prepodavatele: metodika oprosa [Students’ assessment of teachers: a survey methodology]. Vysshee obrazovanie v Rossii, 6, 44–47. Russian.

© The Author(s) 2018. Published by Oxford University Press on behalf of The Institute of Mathematics and its Applications. All rights reserved. For permissions, please email: journals.permissions@oup.com

Journal: Teaching Mathematics and Its Applications: International Journal of the IMA (Oxford University Press). Published: Mar 21, 2018.
