Testing the general model of productivity

August 5th, 2009 by james

In a previous episode, I suggested that productivity is really just an efficiency measure. Since the working currency for academics is arguably prestige, productive researchers are those that can acquire the most prestige for the least effort and this can be formally written as:

productivity = Σt (pt × nt) / (at × ht)

where each task t is assigned a prestige benefit (prestige per activity pt × number of activities nt) and an effort cost (attention units per hour at × number of hours ht).
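To make the bookkeeping concrete, here is a minimal sketch of the model in R; the two tasks and all of the numbers below are purely illustrative and not taken from my log.

# A minimal sketch of the model; the tasks and values are made up.
# pt = prestige per activity, nt = number of activities,
# at = attention units per hour, ht = hours spent.
productivity <- function(pt, nt, at, ht) {
  sum((pt * nt) / (at * ht))
}

# e.g. one high-prestige, high-effort task and one trivial, low-effort task
productivity(pt = c(0.30, 0.05), nt = c(1, 1),
             at = c(0.90, 0.20), ht = c(3, 1))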

The comments on the original post suggested that there was a lot of enthusiasm for implementing and testing the theory and so I’ve spent the past month gathering data and preparing for a bit of an empirical assessment. The results are a work-in-progress but I hope to keep the conversation going and get your feedback. Here then is a step-by-step guide to how I’ve analysed my productivity over the last month using the general model.

  1. Data collection

    I started by logging all of my work activities into a comma-separated file with three columns: the date, a description of the task, and the amount of time spent on that task. At first this was a pain but after a while I got into the habit of opening the log file each morning and adding the data. I didn’t worry too much about data normalization at the time and recorded information at 15-minute intervals.
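    For anyone wanting to replicate this, the log looks something like the entries below and can be pulled into R in one call; the file name and column names here are my own choices for the example rather than anything prescribed.

    # Illustrative log entries (date, task description, hours):
    #   2009-07-06, email and expenses, 0.75
    #   2009-07-06, reading - journal papers, 1.5
    #   2009-07-07, project meeting, 1

    # Read the log; "worklog.csv" and the column names are just examples.
    log <- read.csv("worklog.csv", header = FALSE,
                    col.names = c("date", "task", "hours"),
                    stringsAsFactors = FALSE)
    log$date <- as.Date(log$date)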

  2. Data preparation

    Once the data was collected, I needed to tidy it up a bit. A quick inspection of my data file showed that I had used 83 distinct activity types, which was impractical for Step 3 below. Therefore I reduced these activities first to 16 categories, then 8, and finally 6. The final six were:

    • Administrata (email, filing expenses, preparing for conferences, sorting out IT problems etc.)
    • Internal meetings (with students, supervisors, project members)
    • Internal writing (conference summaries, minutes, project reports etc.)
    • External writing (conference and journal papers, preparing external presentations)
    • Networking (participating at conferences and external events)
    • Research (reading, programming, data analysis)

    Non-work activities and work-related travel were excluded from the analysis.
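    If you are doing this in R, the recoding is just a lookup table. The mapping below shows the idea with a handful of example entries; my real table covered all 83 raw activity types.

    # Map raw task descriptions onto the six categories.
    # Only a few illustrative entries are shown here.
    lookup <- c("email and expenses"       = "Administrata",
                "project meeting"          = "Internal meetings",
                "minutes"                  = "Internal writing",
                "journal paper draft"      = "External writing",
                "conference attendance"    = "Networking",
                "reading - journal papers" = "Research")

    log$category <- lookup[log$task]

    # Anything that doesn't map (non-work activities, travel) is dropped.
    log <- log[!is.na(log$category), ]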

  3. Calculate productivity costs and benefits

    The next step was to calculate i) the amount of prestige gained (pt) and ii) the effort required to perform (at) each activity. In other words, I needed to develop quantitative measures that could distinguish between those activities that take a lot of hard work and yield big rewards and those trivial tasks that need to be done on a daily basis.

    To do this, I used the analytic hierarchy process. First, I coded an R function that performs an AHP analysis for a set of input factors (the code’s at the end of this post). AHP is usually done as a “hierarchy”, i.e. comparing options against different criteria and then successively aggregating the results into an overall score. However I did two separate analyses as a way of developing normalized scores for the prestige benefit and attention cost of each activity.

    For example, let’s consider the question of prestige. Each of the 6 categories defined above has different prestige measures that could be used, such as citations for a journal publication. However, comparing or aggregating these “native” measures for different activities is difficult and contentious; AHP instead rephrases the question and lets you work out, in a rough sense, which activities are the most prestigious.

    The function I wrote takes the set of categories as input and then asks you to perform pairwise comparisons. Categories are compared on a reciprocal 9-point scale where 1 means options A and B are equally preferred and 9 means A is extremely preferred to B (and if A vs. B = 9, then B vs. A = 1/9). As I noted above, I had to reduce the number of categories because this routine requires n*(n-1)/2 comparisons: 83 categories would mean 3403 comparisons, whereas 6 requires only 15.
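    (A quick sanity check on those numbers in R:)

    # Pairwise comparisons needed for n categories: n*(n-1)/2 = choose(n, 2)
    choose(83, 2)   # 3403
    choose(6, 2)    # 15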

    The result is a series of weights for each category and a consistency ratio. The weights indicate the relative importance of each activity category and the consistency ratio indicates the extent to which the judgments were consistent. In other words, if you say that apples are tastier than pears and pears tastier than oranges, but oranges tastier than apples, then your results are inconsistent and the weights can’t be trusted. Ideally the consistency ratio should be less than about 10%. In my first attempt at this, I had a consistency ratio of 0.15 which is why I further reduced the number of categories to 6.

    The table below shows the results of the analyses. I’ve used the prestige weights directly from the AHP analysis; that is, the weights add up to one but don’t reflect any real units. However I’ve normalised the effort weights so that the largest weight represents one hour of maximum concentration. By taking the ratio of prestige to effort, we can work out which activity is most productive on a per-hour basis. Perhaps not surprisingly, networking is twice as productive as writing a paper or attending internal meetings.

    [Table: prestige and effort weights for each of the six activity categories, with the resulting prestige-to-effort (productivity) ratio]

    However there is some path dependence hidden in here: research may have a very low productivity ratio but clearly the more productive activities must be supported by good quality research results. One can’t happen without the other. This also explains why it’s easier for professors to accumulate prestige: the hard work is often done by grad students and researchers while the authorship and networking opportunities come more readily once your name is established.

  4. Calculating the productivity index

    These weights can now be used to calculate the productivity of each day. Using the equation above, I’ve assumed that nt is 1 for each entry in the database and then calculated the total productivity index for the day. The figure below shows how the productivity score varies over time, with a three-day moving average. Interestingly, the low point during late June and early July coincided with a conference when I was doing a lot of traveling (the spike on 29 June represents the day I presented a paper).

    [Figure: daily productivity index over the month, with a three-day moving average]
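    For what it’s worth, the daily index and moving average are only a few lines of R, assuming the log from Step 1 and a data frame of AHP weights from Step 3; the variable names here continue my earlier sketches and are not taken from AHP.r.

    # Assumes a data frame 'weights' with columns category, prestige and effort
    # holding the AHP results from Step 3 (values not reproduced here).
    log <- merge(log, weights, by = "category")

    # Per-entry productivity with nt = 1: prestige / (effort x hours).
    log$prod <- log$prestige / (log$effort * log$hours)

    # Daily index and a centred three-day moving average.
    daily <- aggregate(prod ~ date, data = log, FUN = sum)
    daily <- daily[order(daily$date), ]
    daily$ma3 <- as.numeric(stats::filter(daily$prod, rep(1/3, 3), sides = 2))

    plot(daily$date, daily$ma3, type = "l",
         xlab = "Date", ylab = "Productivity index (3-day average)")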

    The data can also be used to examine work patterns. The figure below shows the average amount of time worked on each day of the week, not including meals, tea breaks etc. Out of interest, I typically commute to work on Fridays (and, during this month, Wednesdays). This is nearly 3 hours a day on trains which, unless I’m reading work material, is “wasted” time (I do get through a lot of novels).

    [Figure: average hours worked by day of the week]

    The next figure shows how this corresponds to productivity. Because I only have a couple of days per week in the office, these tend to be my “necessary evil” days: Wednesdays and Fridays score lower because I’m often busy with student supervisions and other administrative matters. And clearly, taking the weekends off helps to make Mondays more productive.

    [Figure: average productivity index by day of the week]
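    Both of these breakdowns fall out of the same data frames with a couple of aggregate() calls; again this is just a sketch continuing the variable names from the snippets above.

    # Average hours worked per day of the week.
    hours_daily <- aggregate(hours ~ date, data = log, FUN = sum)
    hours_daily$weekday <- weekdays(hours_daily$date)
    hours_by_day <- aggregate(hours ~ weekday, data = hours_daily, FUN = mean)

    # Average productivity index per day of the week.
    daily$weekday <- weekdays(daily$date)
    prod_by_day <- aggregate(prod ~ weekday, data = daily, FUN = mean)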

There are other analyses that I could perform with the data, for example breaking up productivity by task, so please feel free to add your suggestions to the comments. But I wanted to end by flagging up a couple of issues that struck me while doing this:

  • Simply recording your productivity increases productivity; it’s a question of reflective learning and feedback. Several times I caught myself writing “Surfed web” in the log file and then, duly chastened, spent the next few hours knuckling down to do some hard work.
  • The key to all of this is data collection so you need to find a system that works for you. The flat text file I’ve used is not a bad way of doing it but the excellent Flowing Data blog suggests that a person might use a private Twitter feed to record such information. Could be an interesting experiment.
  • When performing the AHP analysis, it’s extremely difficult to evaluate certain activities. What is the prestige value of networking for example? If it contributes immediately to a grant proposal (e.g. in a funding workshop), then that obviously has a benefit. But if it’s simply meeting people, it may be a long time before any measurable prestige comes out of it. (Which of course is not to say that you should only meet people when you get something immediate out of it!)
  • Work-related travel is a big drain on productivity, but a necessary evil. Although it’s obvious, reading is by far the best use of this time. All those research activities with low productivity ratios have to happen at some point, and sometimes the office is too distracting.

I’m going to keep recording my productivity data and perhaps once I’ve got more data, I’ll be able to tease out some larger trends. But for the moment, feel free to try out the method and add your comments below.

The code
Here’s the R code I’ve used to perform the AHP analysis (AHP.r). It can be called using something like:

ahp <- AHP(c("Apples","Oranges","Pears"))

It will ask you to compare all of the categories: answer using 1, 3, 5, 7, 9 or 1/3, 1/5, 1/7, 1/9. Then to get the consistency ratio, type ahp$cr, and to get the weights, type ahp$weight.
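For readers who just want the gist without downloading the file, here is a minimal sketch of how such a routine can be written using the standard eigenvector method. It matches the ahp$weight / ahp$cr interface described above, but it is a rough illustration rather than a line-for-line copy of AHP.r.

# Minimal AHP sketch (not necessarily identical to AHP.r): build a pairwise
# comparison matrix interactively, take its principal eigenvector as the
# weights and report Saaty's consistency ratio.
AHP <- function(categories) {
  n <- length(categories)
  if (n < 2) stop("Need at least two categories")
  A <- diag(n)
  rownames(A) <- colnames(A) <- categories

  # Ask for the n*(n-1)/2 pairwise judgements on the reciprocal 1-9 scale.
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      prompt <- sprintf("How much more important is '%s' than '%s'? ",
                        categories[i], categories[j])
      x <- eval(parse(text = readline(prompt)))  # accepts e.g. 3 or 1/3
      A[i, j] <- x
      A[j, i] <- 1 / x
    }
  }

  # Weights are the normalised principal eigenvector of the matrix.
  e <- eigen(A)
  w <- Re(e$vectors[, 1])
  w <- w / sum(w)

  # Consistency ratio CR = CI / RI, using Saaty's random indices (n = 1..10).
  lambda.max <- Re(e$values[1])
  ci <- (lambda.max - n) / (n - 1)
  ri <- c(0, 0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49)[n]
  cr <- if (ri > 0) ci / ri else 0

  list(weight = setNames(w, categories), cr = cr, matrix = A)
}

Calling it as in the example above then walks you through the comparisons and returns the weights and consistency ratio.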



6 Responses to “Testing the general model of productivity”

  1. Ioana Says:

    Nice job. I couldn’t spend the time recording all this… But there is a nifty free online program that does (most of) this for you: rescuetime.com. It allows you to log all your time on the computer and to give productivity ratings to the various activities (e.g. email, Word, Matlab, whatever).

  2. Benjamin Deschamps Says:

    I really like your use of pairwise comparisons / AHP, good way of getting objective values.

    The main thing I notice about the formula is that it is difficult to establish which is more productive between low prestige, low effort tasks and high prestige, high effort tasks (compare external writing with meetings, for example). Meetings do not gain you much prestige, but come out as being productive because they require little effort…

    Take the two following tasks:

    Task 1: Prestige = 0.1, Effort = 0.1 for 1 hour
    Task 2: Prestige = 0.9, Effort = 0.9 for 1 hour

    Both would yield the same productivity score according to the formula, if I interpret it correctly; however, task 2 is much more productive.

    I guess it all comes down to your philosophy: are you aiming to have the most prestige for the least effort, or simply the highest productivity (quality output, research, writing, etc, regardless of effort)?

    A final thought is that I find it much easier to conceptualize the meaning of scores and to compare productivity values if they are not decimals (because of how they divide/multiply). Might be easier if you went on a scale of 0-100 instead?

  3. Jose Says:

    Interesting, this is what I got:
    categories <- c("paper-writing", "coding-an-experiment", "collaborator-meeting", "business-meeting", "analyze-data", "drafting", "grant-paperwork", "meet-with-students")

    ahp <- AHP(categories)
    res = data.frame(categories=categories, weight=ahp$weight)
    (res = res[order(res$weight, decreasing=T),])

    categories weight
    6 drafting 0.324
    7 grant-paperwork 0.187
    8 meet-with-students 0.160
    5 analyze-data 0.139
    1 paper-writing 0.055
    2 coding-an-experiment 0.055
    3 collaborator-meeting 0.053
    4 business-meeting 0.028

    I guess drafting for me is the hard, intellectual part and paper-writing is all the pain that surrounds it (ref hunting, formatting, integrating collaborator comments, reviewer comments, etc).

    Business meeting is the kind of dreaded meeting that splits your day and contributes little to anything. :)

  4. sample graduate school essay Says:

    You exclude travelling from the equation, but doesn’t that have a lot to do with prestige too? In most cases, only the most trusted or respected get chosen to travel to conventions, summits, etc. which are relevant to their fields. Doesn’t being invited to speak as a guest speaker count as one factor for prestige?

  5. Benjamin Deschamps Says:

    This might be useful for your data collection…

    http://flowingdata.com/2009/07/15/collect-data-about-yourself-with-twitter-your-flowingdata-is-live/

  6. Academic Productivity » A general model of productivity? Says:

    [...] There’s a follow-up post available with data testing this model. [...]
