I started tracking my time at work in detail at the start of my postdoc in 2018, using the amazing app ‘Timeular’. This series of stories provides some insights into postdoc life using that data.
In a postdoc, not everything goes according to plan. In fact, it’s more likely that things won’t! It’s important to realize that this is totally okay: failure is perfectly normal and an inherent part of the scientific process. We can try new things, experiment, make changes, backtrack, change direction. We can fail!
I know the academic climate of recent decades has not been forgiving of failure, but it’s a crucial and valuable aspect of scientific discovery. After all, if success were guaranteed, would it still be considered true science?
Through my time tracking, I now have a clear picture of how much time I have spent on ‘failed’ projects – those I invested hours into but that never got published. This is of course just one type of failure; it does not include rejected proposals, failed experiments with published negative results, or the countless micro-failures encountered along the way. To visualize it, I created a plot of the 35 most time-consuming papers of my postdoc.
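(For the curious: a minimal sketch of how such a ranking can be built from a time-tracking export. The file name and column names here are hypothetical; Timeular’s actual export format may differ.)

```python
# Minimal sketch: aggregate a time-tracking export and plot the 35
# projects with the most total hours. 'time_tracking_export.csv' and
# the columns 'Activity' and 'Duration_hours' are hypothetical names;
# adapt them to whatever your own export contains.
import pandas as pd
import matplotlib.pyplot as plt

entries = pd.read_csv("time_tracking_export.csv")  # one row per tracked session

# Total hours per project, keeping only the 35 largest
hours = (entries.groupby("Activity")["Duration_hours"]
                .sum()
                .nlargest(35)
                .sort_values())  # ascending, so the biggest bar ends up on top

hours.plot.barh(figsize=(8, 10))
plt.xlabel("Total hours tracked")
plt.title("35 most time-consuming papers")
plt.tight_layout()
plt.savefig("top35_papers.png", dpi=150)
```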

Good news first: the largest chunk of my time went towards a paper that actually got published! This is our ‘Global maps of soil temperature’ paper, a massive effort with a looooong co-author list. However, a significant portion of my time clearly also went to projects that didn’t see the same success. The list of 35 time-consumers includes seven (7!) cancelled papers and several others that are yet to be published.
One project that deserves particular mention is the cancelled paper ranked second in time invested. It was an ambitious attempt to apply the theoretical concept of higher-order interactions (HOIs) to real-world data. HOIs were (and still are) a hot topic, but modelling them correctly proved challenging. Most prior attempts were limited to experimental communities, petri dishes, or simple models.
I started modelling with my messy real-world community data (in parallel with a theoretical ecologist who was building data-free models of the same) and solved roadblock after roadblock. Our goal was to confirm the theory with real-world findings, and we worked tirelessly to overcome each obstacle. Despite these efforts, the limitations of the data became increasingly apparent, and the impact of our methodological decisions more and more noticeable. Nevertheless, we completed a manuscript with a story we were confident about and submitted it to our first high-impact journal.
However, the manuscript faced rejection after rejection, luckily with constructive feedback from reviewers. We took their comments on board and worked to make the manuscript better and clearer, flagging more and more of the unearthed methodological limitations up front. Despite these efforts, the limitations became increasingly difficult to ignore and started to overshadow the findings I had at first been so proud of. After several rejections, we ultimately made the difficult decision to cancel the project, recognizing that we couldn’t get a good grip on HOIs with this messy data. It was a failure.
The list contains other failures, too, many of them valuable learning experiences. For instance, there were some engaging master’s theses that I invested a lot of time in, but the students had to leave before the manuscripts were finished and I no longer had the capacity or expertise to complete them. Another paper that I was enthusiastic about was overtaken by a new one with a more sophisticated methodology, while a dataset that didn’t fit the original research question was eventually repurposed. I also invested a lot of effort into some analyses, only to discover that they would require even more work and were not aligned with what I had promised the funders.
Although these projects may not have ended in publication, they are not true failures. I learned a lot from each of them, and some of that knowledge has informed, and will continue to inform, my future work in different ways. While the “publish or perish” mentality is prevalent in the scientific community, I firmly believe that the real value of science lies in the learning. Publishing is a great way to share your learning with others, but not all learning has to be public. Personal growth as a scientist and as a person is equally important, and even if it isn’t reflected on your CV, it will benefit you in the long run. I hope this story encourages you to embrace failures in your own scientific journey.