Are qualifying exams a waste of time?
How many times have you heard grad students express concern over qualifying exams, or declare that they “survived” them? Qualifying exams (“quals”) can be a grueling process, spanning one to two years and involving multiple examinations. Their effectiveness depends on the specific examination structure and methodology, but I believe this post will resonate with many graduate students.
What are qualifying exams and how does one qualify?
Qualifying exams are meant to determine whether a student can think coherently and has sufficient knowledge of a research field. They typically involve a set of courses and oral examinations covering both the department’s field of study and the student’s own research area.
How much time is spent preparing?
While students can perform a limited amount of research part-time during the first two years, most people spend the majority of their time on courses. According to 2018’s subject reviews, the average first-year student across all specializations in my department (NSE) spent roughly 33 hours per week on coursework alone. In one field, the first-year average was as high as 42 hours per week. On top of this, students typically spend one to three months of focused preparation before the oral examination in the fourth semester.
What do you learn from taking such an exam?
While I learned a broad array of subjects, the majority of the knowledge I gained was completely inconsequential to my research. I found it valuable and interesting at times, but I do not believe that it “qualified” me in any way. It was only after I finished quals that I was able to take courses that were legitimately useful and that I had genuine interest in. This experience with quals is extremely common.
The problems with qualifying exams
The main problem with most qualifying exams is the rigidity and length of the process. A substantial amount of time is wasted chasing grades and performing in domains that are largely irrelevant to a student’s growth and development. Even in cases where the material proves beneficial, requiring that it be crammed in before you can be deemed a glorified “PhD candidate” amounts to little more than academic hazing.
When someone decides to leave the PhD program after taking the exams, they must spend additional time (often one or two more semesters) conducting enough research to complete a master’s degree. Reflect on this point: the work and effort required to qualify for a PhD is not anywhere near sufficient to complete a master’s. This is because the requirements often serve as more of a distraction from meaningful research than an enabler of it.
Are there viable alternatives?
The main goal of a PhD in the sciences is to conduct research, and the purpose of the qualifying exam is to demonstrate research ability. That, then, is where the locus of attention should be placed. Auxiliary learning objectives can be met through degree requirements (completed any time before graduation) rather than through a qualifying process. An intelligent qualifying process would account for and encourage research-oriented deliverables: proposals, plans, experiments, papers, conference presentations, reports, and more. Required coursework could be pared down to only those subjects that are immediately applicable or necessary.
More radically, you could envision a PhD without qualifying exams altogether. Imagine determining whether someone can do research by having them do research. Are we really to believe that professors, advisors, and senior scientists, after more than a year of working with a student, would be unable to gauge that student’s research capabilities? That, were it not for qualifying exams, they would simply have no idea whether students were capable? I don’t believe so. In fact, I believe that even now, the vast majority of students are toiling over exams whose outcomes are already practically known.
Image credit: PHD Comics