What can self-determined metrics look like in academia?
bethann29 · Mar 25 · 6 min read
I am a deliberately unconventional academic, but I still do some bean counting.

For the past few weeks, I’ve been thinking aloud about the importance of articulating what matters to me and how that informs my unique work in academia. I’ve shared the tools that helped me find words to describe the value of my work. And I’ve pondered how important it is to do this (and why academics still, mostly, don’t).
The crux of the whole idea is:
how do I actually take my own goals and self-affirmation as my starting point [1],
do work that is guided by and accountable to me,
and then frame that work in ways that make it legible to people who uphold the prestige paradigms of academia?
This all can feel pretty abstract and fairly hard to do. So today, I’m sharing concrete examples of how I do this.
What does a set of self-established metrics really look like?
Think of it like a systematic literature review. Articulating a mission statement and your values helps you establish inclusion and exclusion criteria for your “search.” Since my goal is to “enhance ethical leadership capacity in science communication, science, and academia,” I need to model and provide resources/trainings to help people develop this capacity. To gauge efficacy, I must assess whether said resources/trainings work. So, one of my metrics is whether I am designing my trainings to, cumulatively, function as research projects.
Another metric is how accessible my trainings are to people, regardless of their socioeconomic status. For example, all the programming and resources offered through the UW Science Communication Initiative I direct are available for free. Most resources I make and post online are also free. I also hope that my work is useful and accessible beyond the scope of the institutions where I work and volunteer. So, a related metric is whether people I don’t personally know are finding and using my work.
I also want to build capacity, not just verify whether my interventions work. So, one of my metrics is how well scaffolded my training efforts are. Do they help people build from foundational to more advanced skills? Do I offer only entry-level trainings, or also trainings and resources for people who are more experienced in scicomm and leadership? Another dimension of capacity-building is recognizing that we aren’t superheroes [2] and all efforts to enhance academia and society are going to take teamwork across many disciplines. So, one of my metrics is how fully collaborative/co-produced [3] my work is. Another is how broadly my work draws on expertise and knowledge bases beyond my own.
All of these metrics relate to another one: academia has incredible resources, and we need to use them generously rather than hoard them. So, I also gauge my success by how much my work helps to demystify the hidden curriculum of academia and to what degree my professional practices boost and sponsor other people’s career development and professional networks.
I also see communication and leadership skills as the mechanisms by which we make science degrees themselves into transferable skills. Thus, another metric of mine is how genuinely actionable my courses, trainings, resources, and publications are.
Great, sounds nice. But…howwww do we actually make those sorts of metrics legible?
I find you can think about mapping your work (based in your values and self-determined metrics) in two ways. You can start with your metrics and work towards making them legible. Or, you can just do the work you care about, and then “reverse engineer” your framing of it to make it legible. (I do some of both.)
Either way, the inescapable reality is this: if we signed up to be academics, we signed up for academia as it is now, not academia as we hope it will someday be. So, we are making the choice to work in a system in which most gatekeepers have a fairly clear (if narrow) sense of what “counts.” (And hopefully, we’re also making the choice to work to make this system better!)
In any case, in the table below, you’ll see how I think about some [4] of my self-determined metrics in relation to (a) academic “beans”[5] and (b) actual products and outcomes that can be mutually legible.
As you can see, I haven’t highlighted work that isn’t legible. For example, I don’t report the poetry I write, or the sketches, collages, and pottery I make. Perhaps not reporting those seems obvious; they’re not part of my academic job. But I also do not report this newsletter/blog, and I do not report the scicomm podcast I run with Virginia Schutte, even though we have talked a lot about self-determined metrics on that podcast.

While the blog and podcast are aligned with my work, they are not activities that are valued by people who prioritize the academic prestige paradigm. [6] They are also not in my job description. I don’t receive any support or compensation from my institution for doing them. So, I don’t represent them as part of my job. Doing so frees me from trying to convince people those activities matter. It also gives me a lot more autonomy to do them in whatever way I want. They are hobbies, not my job.

I do, however, keep track of the outcomes from them, using metrics similar to those I’ve described in this post, but adding in the personal metrics of processing my thinking, having fun, and doing things with friends. Sadly, these are all fairly meaningless metrics in academia, but it doesn’t matter—I don’t report these outcomes at work. [7]
What I’ve outlined here is not a comprehensive accounting of my self-determined metrics. Nor is it an exhaustive outline of how I map my work back onto the beans that count in academia. [8] But hopefully, the examples I’ve provided help you see more concretely how it is possible to work from your own metrics while making your work academically legible.
How does this scale up to my department or institution’s metrics?
Right now, I’m leading an effort to conduct a service audit and drafting a service policy and accountability procedures for my department. I’m also co-leading a review of our tenure, review, and promotion expectations, which will (ideally) integrate and reinforce our service policy. So, I’ll wait until we’re a little further along to discuss in detail how you can “take this show on the road,” so to speak.
But, the short story is that you can actually make space for a lot of change by just changing how you operate. Lead with your own values-informed metrics. Vote to support, hire, and promote people who do the same. Disagree out loud when people question whether that kind of work, that sort of framing, is “valid” or sufficiently “prestigious.” Small shifts like these can add up.
How about you?
Do you currently have self-determined metrics for gauging your success in academia? How do you articulate, to your peers and administrators, that you are achieving these metrics, and that doing so is helping meet review and promotion expectations and contributing to the mission of your institution?
[1] As I’ve mentioned many many times, I’m forever indebted and grateful to Dr. Beronda Montgomery for her framing of this approach, which set me on a multi-year path to self-determined metrics that start in self-affirmation of the value of me and my work in academia.
[2] My favorite take on this is one I’ve mentioned before: Deepa Iyer’s Social Change Now workbook can help you identify roles you want to/are good at playing, and also the roles that you should leave space for others to do. (Doing them all, or trying, leads to burnout and megalomania; both are toxic and avoidable, in this particular case at least.)
[3] Note that, like scicomm, collaborative and co-produced work can often be perceived as less prestigious than work produced alone (the lone genius myth at play).
[4] Yes, just some. I have a lot of self-defined metrics, and a sampling of them should be plenty useful as you articulate your own. :)
[5] In my department, the minimum expectations for tenure include (but are not limited to) averaging publication of 2 peer-reviewed papers per year over the pre-tenure period (and it’s assumed that those should not all be middle/minor author papers). Plus, you should actively be trying to get grant funding to complement the 5-6-digit start-up you likely received. These activities plus others are lumped in your ~55-65% research allocation in your job description. Because the research percentage in my job description (and commensurate resourcing, which is/was zero) is much lower (~17-20%), my expectation is 0.5 papers/year and no grant effort. But, everyone who evaluates me is tenure-track or tenured. And, anyone who evaluates me outside my department would likewise be used to the standard, tenure-track expectations. Thus, I aim to meet or exceed what my colleagues must do. This is inequitable but realistic.
[6] See my recent paper for a thorough intro to the concept of the academic prestige paradigm and how much it resists, devalues, and impedes scicomm and people who do/study/teach scicomm.
[7] I do report them publicly, though, when relevant. For example, listen to the “Here comes success (oh sh*t!)” episode of the podcast for a detailed analysis of another side project, the Scicomm STEP program I built with Virginia.
[8] See note 2.