Lethbridge’s Covid-19 R number

I teach physical chemistry at the University of Lethbridge. I even wrote a textbook that we use in my class. The course includes a module on chemical kinetics, but as I explain to the students, kinetics shows up in a lot of places. With the Covid-19 pandemic being top of mind for everyone this year, and given that it’s a fairly straightforward extension to material I already teach in this class, I decided to teach the students how to compute those R numbers we keep hearing about in the news. The calculation is fairly easy (if you know a bit of kinetics), so as a public service, I’m going to be calculating weekly R numbers for Lethbridge and posting them here.

The R number is an estimate of how many new infections we are seeing for each infected individual, on average. Thus, an R number above 1 means that the number of infections is growing. An R number below 1 means that the number of infections is shrinking. Of course, we can just look at the daily case counts to get this information, but R gives you one simple number to look at, and moreover the calculation method has the effect of smoothing out the day-to-day fluctuations in case counts.

The method I initially used to calculate R was crude. The method had one free parameter, namely the average period of time that an individual is infectious. This parameter has considerable uncertainty, and it depends on behavior. For example, a person no longer counts as “infectious” if they are self-isolating. In order to estimate this parameter, I used provincial values for the number of active cases along with the province’s estimate of R to calculate an effective infectious period for each week from March 15 to April 30, inclusive. The mean infectious period calculated from these data was 3.8±2.3 days. To my surprise, this value is very low compared to the biological infectious period of about two weeks. But there it is. The low value suggests that most people are doing the right thing and staying away from other people when they think they might be infected.
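In rough outline, the crude calculation amounts to fitting an exponential to the case counts. Here is a minimal Python sketch of the idea; the log-linear least-squares fit and the conversion R = exp(rτ), which assumes exponential growth over the infectious period, are one common way to set this up, not necessarily my exact code:

```python
import math

def estimate_R(daily_cases, infectious_period=3.8):
    """Estimate R from a short run of daily case counts.

    Fits a straight line to log(cases) versus day to get the exponential
    growth rate r, then converts via R = exp(r * tau). The conversion and
    the 3.8-day default infectious period follow the description in the
    text; this is an illustrative sketch, not the exact production code.
    """
    n = len(daily_cases)
    xs = range(n)
    ys = [math.log(c) for c in daily_cases]
    # Least-squares slope of log(cases) vs. day number.
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(slope * infectious_period)
```

With flat case counts the fitted growth rate is zero and the function returns R = 1, as it should; a week of 10%-per-day growth gives R = 1.1^3.8 ≈ 1.44.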

Eventually, I decided to use an SIR model to calculate R. This has the advantage that I don’t need to use the provincial data to calibrate any of the parameters. It has the disadvantage that a realistic model for Covid-19 is much more complicated than the simple SIR model would have it, so there is some amount of what we call “modelling error” in the estimate. R values starting the week of May 3 were calculated from an SIR model.
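For reference, the skeleton of a basic SIR calculation looks something like the following sketch. The parameter values are purely illustrative (in practice the transmission rate β and removal rate γ have to be estimated from the case data); within the model, the effective reproduction number is R_t = (β/γ)(S/N):

```python
def simulate_sir(s0, i0, beta, gamma, days, dt=0.01):
    """Euler integration of the basic SIR model.

    s, i, r are fractions of the population (so N = 1). Illustrative
    only: real parameter values must be fitted to the case data.
    """
    s, i, r = s0, i0, 1.0 - s0 - i0
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # S -> I
        new_removals = gamma * i * dt        # I -> R (recovered or dead)
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
    return s, i, r

def effective_R(s, beta, gamma):
    # R_t = (beta/gamma) * (S/N); with S as a fraction, N = 1.
    return beta / gamma * s
```

Note that `effective_R` starts near β/γ when nearly everyone is susceptible and falls as the susceptible fraction is depleted, which is the basic mechanism by which epidemics burn themselves out in this model.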

One more brief note on SIR models: the R variable in an SIR model doesn’t distinguish between different ways of exiting the I class. Thus, R = recovered + dead. Because relatively few people die from Covid-19, the difference isn’t large, but it’s probably not insignificant.

Note that I will not be providing confidence intervals, which I feel would give an undeserved air of statistical certainty to these calculations. Finally note that these calculations are retrospective. They do not necessarily predict what will happen next week.

I will be updating this table on a weekly basis. I’m using the data published daily in the Lethbridge Herald for a couple of reasons: it’s convenient, and it’s about the right amount of data to get a decent estimate of R. Because of the Herald’s publishing schedule, this also leaves out weekend data which are often off-trend and might cause some statistical difficulties otherwise. (In principle, leaving out the weekend data shouldn’t affect the R value, which is based on how fast case counts are growing, and not on exactly when we started counting or how long a stretch of data we use. As an analogy, think about your speed as you go down the highway, which you can get by dividing distance by time. As long as you keep a constant speed, it doesn’t matter exactly when you start or stop measuring time and distance travelled. However, if you wanted to calculate a typical speed of travel, you wouldn’t include a period of time when you took your foot off the gas. Similarly, lower testing rates over the weekends would result in data that I would have to throw out because they are off the weekly trend line. The Monday “catch up” data point sometimes has to be discarded because it is off the trend line in the other direction.)

I would finally note that R values calculated from January 2022 onward are somewhat suspect because of the restriction of testing to select groups as the omicron wave overwhelmed the Province’s testing capacity.

Hopefully some of you will find these local R values useful.

R values for Lethbridge (2021). Asterisks indicate points with larger uncertainties, sometimes due to statutory holidays reducing the number of available data points for the week, and sometimes due to unusual scatter or other data anomalies.

In praise of the late H. T. Banks

H. Thomas Banks is one of those people I wish I had had a chance to meet. Unfortunately, he died December 31st of last year, so that won’t be happening. Given that I greatly admired his work, and on the assumption that some young scientists read this blog, I thought I would say a few words about some of Banks’ papers that I particularly enjoyed.

H. T. Banks, for those of you who may not have heard of him, was an outstanding applied mathematician. He had wide interests, but most interesting to me was his extensive work on delay-differential equations, given my own interest in the subject.

The first Banks paper I read was a 1978 joint paper with Joseph Mahaffy on the stability analysis of a Goodwin model. Looking for oscillations in gene expression models was a popular pastime in those days. In some ways, it still is. This paper stood out for me as a careful piece of mathematical argument showing that a certain class of models could not oscillate. The paper also contained a solid discussion of the biological relevance of the results. Discovering oscillations in a model may be fun for those of us who enjoy a good bifurcation diagram, but most gene expression networks probably evolved not to oscillate. How much of that lovely discussion was due to Banks, and how much to Mahaffy, I cannot say. But a lot of Banks’ work was just as careful about the relevance of the results to the real world.

Much more recently, Banks was involved in a lovely piece of mathematics laying down the foundations for sensitivity analysis of systems with delays, particularly for sensitivity with respect to the delays. Sensitivity analysis is a key technique in a lot of areas of modelling. The basic idea is to calculate a coefficient that tells us how sensitive the solution of a dynamical system is to a parameter. There are many variations on sensitivity analysis, which you can read about in a nice introductory paper by Brian Ingalls. The Banks paper provided a basis for doing this with respect to delays, and was a key foundation stone for our work on this topic.

Some years ago, we developed a method for simulating stochastic systems with delays. Our intention was for this method to be used to model gene expression networks. I was therefore pleased and surprised when I discovered that Banks had used our algorithm to study a pork production logistics problem. That just shows what an applied mathematician with broad interests can do with a piece of science developed in another context. Banks and his colleagues went a bit further than just studying one model, looking at models with different treatments of the delays, and finding that these led to different statistical properties, which would of course be of great interest if you were trying to optimize a supply chain.

The few examples above show a real breadth of interests, both mathematically and in terms of applications. You can get an even better idea of how broad his interests were by scanning his list of publications. There are papers there on control theory, on HIV therapeutic strategies, on magnetohydrodynamics, on acoustics, … Something for just about every taste in applied mathematics. There is a place for specialists in science, but often it’s the people who straddle different areas who can make the most important contributions by connecting ideas from different fields. I think that Banks was a great example of a mathematician who cultivated breadth, and was therefore able to have a really broad impact.

So I’m really sorry I never got to meet H.T. Banks. I think I would have enjoyed knowing him.

(If you’re wondering why I’m so late with this blog post: I found out about Banks’ passing from an obituary in the June SIAM News, which because of the pandemic I didn’t get my hands on until about a month ago.)

50 years of Physical Review A

In the beginning, there was the Physical Review, and it was good. So good in fact that it soon started to grow exponentially. At an event celebrating the 100th anniversary of the Physical Review in 1993, one unnamed physicist quipped that “The theory of relativity states that nothing can expand faster than the speed of light, unless it conveys no information. This accounts for the astonishing expansion rate of The Physical Review” (New York Times, April 20, 1993). (At the risk of sounding like Sheldon Cooper, if this physics joke went over your head, this post is probably not for you.) As a result of the rapid growth of the Physical Review, in 1970, it was split into four journals, Physical Review A, B, C and D. One factor that drove this split was that many scientists had personal subscriptions to print journals at that time. (I still have one, although not to a member of the Physical Review family.) In its last year, the old Physical Review published 60 issues averaging over 400 pages each. That’s another 400-page issue roughly every 6 days. Most of the material in each issue would have been completely irrelevant to any given reader. You can imagine the printing and shipping costs, the problem of storing these journals in a professor’s office, not to mention the time needed to identify the few items of interest in these rapidly accumulating issues. So splitting the Physical Review, which in some sense had started in 1958 when Physical Review Letters became a standalone journal, was perhaps inevitable.

The new journals spun out of the Physical Review were to be “more narrowly focused”, which is, of course, a relative thing. Four journals were still to cover the entire breadth of physics. Each of the sections was correspondingly broad: PRB covered solid-state physics, C covered nuclear physics, D covered particles and fields, and Phys. Rev. A covered… everything else: the official subtitle of PRA at the time was “General Physics”, which included atomic and molecular physics, optics, mathematical physics, statistical mechanics, and so on.

Physical Review A now describes itself as “covering atomic, molecular, and optical physics and quantum information”, other topics having over time been moved out to other journals. Physical Review E in particular was split out from PRA in 1993 to cover “Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics”. (That description has changed over the years as well, as the process of splitting and redefining journal subject matter continues. PRE is now said to cover “statistical, nonlinear, biological, and soft matter physics”. Physical Review Fluids was born in 2016 to pick up some of the material that would formerly have been in PRE.) Despite the evolution of PRA, one thing that hasn’t changed is that it has been an important journal for chemical physics right from the day it was born. With this year marking the 50th anniversary of Physical Review A, and given that I trained in chemical physics at Queen’s and at the University of Toronto, I thought it would be a good time for me to write a few words about this journal. As with all of my blog posts, this will be a highly idiosyncratic and personal history.

I thought it would be fun to start by looking at the contents of the very first issue of PRA. Atomic and molecular physics featured prominently in this issue, with several papers either reporting on the results of theoretical calculations, or on the development of computational methods for atomic and molecular physics. Interestingly, the entire issue contained just one experimental paper. I suspect that this is an artifact of the period of time in which this first issue appeared. The atomic and molecular spectroscopy experiments that could be done using conventional light sources had mostly been done, and lasers, which would revolutionize much of chemical physics in the decades to follow, were not yet widely available in physics and chemistry laboratories.

One of the things that struck me on looking at this first issue is how short papers were in 1970. Excluding comments and corrections, the first issue contained 27 papers in 206 pages, so the average length of a paper in this issue was just under 8 pages. The papers in the first issue ranged from just 2 pages to 16. Eleven of these papers ran to four pages or less. And remember, Physical Review Letters was spun out more than two decades earlier, so there was already a venue for short, high-priority communications. Other than in letters journals like PRL, we don’t see many short papers anymore, and even in PRL, two- or three-page papers are a rarity. The “least publishable quantum” has grown over time, and the ease with which graphics can be generated has resulted in an explosion of figures in modern papers. I suspect, too, that concise writing isn’t as highly valued now as it was in 1970.

As is often the case in anniversary years, Phys. Rev. A has created a list of milestone papers. This list includes several classic papers on laser-cooling of atoms, a technique for obtaining ultra-cold atoms in atom traps, i.e. atoms very close to their zero-point energy within the trap. Because this almost entirely eliminates thermal noise, this technique allows for very high precision spectroscopic measurements, and therefore for very sharp tests of physical theories. Interestingly, in ion traps, the mutual repulsion of the ions causes them to crystallize when they are sufficiently cooled, which was the topic of one of my two papers in Phys. Rev. A.

The list of milestone papers also includes Axel Becke’s classic paper on exchange functionals with correct asymptotic behaviour. I have mentioned Becke’s work in this blog before, in my post on the 100 most-cited papers of all time, a list on which two of his papers appear. And as I mentioned there, Axel Becke was the supervisor of my undergraduate senior research project, resulting in my first publication, which also appeared in Phys. Rev. A. If you pay any attention at all to lists of influential papers and people, Axel’s name keeps popping up, and not without reason. He has been one of the most creative people working in density-functional theory for some decades now. Interestingly, Axel has only published three times in PRA, and I’ve just mentioned two of those papers. (Axel’s favourite publication venue by far has been The Journal of Chemical Physics.) His only other paper in PRA, published in 1986, was on fully numerical local-density-approximation calculations in diatomic molecules.

Many beautiful papers in nonlinear dynamics were published in Phys. Rev. A before the launch of Phys. Rev. E. I will mention just one of the many, many great papers I could pick, namely a very early paper on chaotic synchronization by Pecora and Carroll. Chaotic synchronization, which has potential applications in encrypted communication, became a bit of a cottage industry after the publication of this paper. I believe that the Pecora and Carroll paper was the first to introduce conditional Lyapunov exponents, which measure the extent to which the response to chaotic driving is predictable.

Currently, my favourite Phys. Rev. A paper is a little-known paper on radiation damping by William L. Burke, from volume 2 of the journal. This is a wonderful study in the correct application of singular perturbation theory that also contains a nice lesson about what happens when the theory is applied incorrectly. If you teach singular perturbation theory, this might be a very fruitful case study to introduce to your students.

I could go on, but perhaps this is a good place to stop. PRA has been a central journal for chemical physics throughout its 50 years. While PRE picked up many topics of interest to chemical physicists, PRA remains a key journal in our field. Until the Physical Review is reconfigured again, I think it’s safe to say that PRA will continue to be a central journal in chemical physics.

Scientists’ social networks

In some older posts, I mentioned some strategies for keeping up with the scientific literature, one of which was to use RSS. In recent years, social networks for scientists have emerged. These allow for both targeted and serendipitous discoveries of literature that is relevant to you. I want to emphasize that these networks are not enough. It’s still important to know how to search for specific information, for example. However, they do nicely complement the other techniques I have mentioned and, as an added bonus, they can raise your profile in the scientific community too.

There are lots of specialized social networks for scientists, but only three that I know of cover all of the sciences and are open to all: ResearchGate, Mendeley, and Academia.edu.

I’m not going to talk about less-specialized social networks, but of course, they have their uses too. In particular, if you’re eventually going to be looking for a job, a LinkedIn profile is not a bad thing to have. I have just one piece of advice for you there: if you do get a LinkedIn profile, make sure you maintain it. At the very least, make sure that your current employment is up to date. Potential employers will look you up on the web. Having an out-of-date LinkedIn profile makes it look like you’re not taking a professional approach to your career. If you don’t think that you can adequately maintain a LinkedIn profile, you would be better off not having one at all.

I should say before I go any further that this post reflects my views, based on what I’ve found effective for me. The choice of social networks is, in the end, a personal one.


ResearchGate

I like ResearchGate. It’s free. (They pay for themselves using ad revenue.) It’s easy to use. And it doesn’t clutter your mailbox with lots of unwanted emails. Despite the fact that they support themselves with ads, the ads are neither intrusive nor excessive in number. I’m not alone in thinking that ResearchGate is the scientists’ social network of choice. Most of the scientists whose work I try to follow are on this site.

ResearchGate’s basic paradigm is not that different from Facebook’s: You follow researchers or specific research projects. Updates from these researchers or projects show up in your ResearchGate home page, so all you have to do is to check in once or twice a week to see what has been going on among the people you follow. Based on your activity, ResearchGate will add papers into your feed that it thinks you might find interesting. Most of those suggestions are quite reasonable and useful. Once in a while, you also get recommendations for projects or researchers you might want to follow. I personally find that a bit less useful, although once in a while someone will pop up that it would make sense for me to follow and that I wasn’t already following.

Like most social networks, ResearchGate will be most useful if you restrain your enthusiasm for following everybody in sight. Follow researchers whose ideas and research you find useful. Maybe follow a friend or two. Don’t automatically follow back everyone who follows you. If your home feed is full of useless junk, ResearchGate will become much less useful to you.

From the point of view of advertising your own presence, ResearchGate has some really nice features. You can add your publications manually, but it also scours the journals for papers you might have written. When you first sign up, you may find that you receive a lot of notifications that it may have found papers you authored. However, this dies down fairly quickly, and once it learns who you are (how you sign your papers, what universities you have worked at), it not only suggests fewer and fewer papers you didn’t author, it also tends to find your papers and suggest you add them before you have time to add them yourself.

ResearchGate also has question-and-answer forums, where you can ask questions (e.g. on techniques), or answer them. You can also follow questions when someone asks one that is of interest to you.


Mendeley

Mendeley is interesting because it’s not just a social networking site. It’s also a reference manager. I can’t say I’ve looked into it a lot. But I know that people who like it say very good things about it. It’s worth a look if you haven’t settled on a reference manager and want a Swiss-army knife that both keeps your bibliography and lets you find interesting references.


Academia.edu

I’m not a fan of this one. It has a free version that has very limited features, and a pay version they are forever trying to get you to sign up for. If you sign up for Academia.edu, you will receive many, many emails from them. It’s probably possible to control this behaviour, but Microsoft Outlook’s Clutter feature does a good job of keeping these emails out of my sight, so I haven’t bothered. I think that some universities have subscriptions to Academia.edu. I would tend to stay away from this one unless you work at a place that has a subscription.

Some tips for research scholarship applications

Last term, I sat on a graduate scholarship committee for the first time in a few years. I noticed a few common errors, and at the encouragement of a colleague, I have turned this experience into the blog post you are now reading.

Many scholarship applications will require a brief research proposal. Here are some things you should think about if you have to include a proposal in your application:

  1. The proposal has to be well written. If you’re not naturally a good writer, show your proposal to someone who is. Bad spelling and grammar reflect badly on you. Poorly constructed sentences and paragraphs that obscure the point you are trying to make are even worse. They suggest that you don’t care enough to proofread your work carefully and/or to get someone to proofread it for you. This advice of course extends to other parts of your application.
  2. It should be clear how your work fits in a larger context. Here’s a made-up example: Student X wants to synthesize molecules containing some weird new functional group. That’s great, but unless you explain it to me, I don’t know why anyone would want to do that. Are these molecules theoretically interesting? Do they have potential applications? Do they extend our knowledge of chemistry in a new direction, and if so, what is that direction and why should I care? This comment is, of course, more general than the example above, and would extend to a proposal to prove a theorem, to study distant galaxies, etc.
  3. Almost all scholarships and postdoctoral fellowships are judged by panels of non-experts, so write your proposal for a general scientific audience. In part, this connects to my previous point: It may seem self-evident to you why you would want to study protein Y, and perhaps it is to people in your field, but it may not be obvious to a scientist outside of your field. Beyond that, you need to define non-obvious abbreviations, avoid highly specialized jargon if possible, etc.
  4. The proposal’s scope should align with the level at which you are applying. Don’t propose 20 years of work in an M.Sc. application. Don’t propose something very limited (in time and/or intellectually) in a Ph.D. or postdoc proposal. The latter is a surprisingly common (and fatal) error. We might forgive the over-eager M.Sc. applicant, but we can’t forgive a Ph.D. applicant whose proposal doesn’t look exciting. If you are competing for a scholarship, you are competing with other people who have proposals that have some real intellectual interest. If you are making systematic measurements of some property, unless you tell me otherwise, it might look like work for a technician. How does your work tie in to major theories in your field? What is the potential for it to change how we think about certain issues? Do you need to develop new measurement methods that will be more broadly applicable?

Some Canadian (especially Tri-Council) scholarship applications ask you to comment on your most significant contributions. Other scholarship competitions may ask for something like this with different wording. Such a section is not about why the work is significant to you. It is about the significance of your work to your field. In some cases, especially if you’re just getting started in research, your most significant contribution may be a conference presentation. If it is, nobody cares that you really enjoyed presenting your work to leaders in your field. What we care about is if your work represents a real advance. Interest from leaders in your field may be evidence of that, especially if they followed up with you after your talk. But the emphasis is on what they got out of it, not what you got out of it. If you can, try to tell us how your work requires new thinking about some issue or other in your field. Or maybe tell us how your work opens up new vistas. The same goes for publications. I’m sure it was exciting to get your paper published in the Elbonian Journal of Science, but what I really care about is the science in the paper, and whether you can tell me why it was important. (In fact, I probably care more about whether you are effectively communicating the importance of your work than whether I fully buy your argument. When I sit on these committees, I’m evaluating you. One of the things I want to know is whether you can craft a coherent argument.) Since you probably don’t have any experience writing this kind of text, it is imperative that you get an experienced pair of eyes (e.g. your supervisor’s) on this section of your application.

Many scholarship applications will ask for a summary of your most recent completed thesis (or equivalent). When an application has a section like this, we expect you to use most of the space to tell us about your past work. What did you do? How did you do it? Why was this a hard thing to do? What was learned? And yes, why was it important? If you write three lines when we gave you a page, that’s not good. You need to give us some details here. It’s your work. You should be able to wax poetic about it.

In fact, as a rule, you should use most of the space allowed for any given part of your application, provided of course the section is relevant. (On occasion, there will be sections that you can’t use. For example, if you’re asked to list publications and you don’t have any, you clearly can’t use this space.) Don’t make stuff up, but not having much to say about yourself or your work is generally considered a negative.

Academia is slowly becoming more progressive. Accordingly, most scholarship applications will have a section in which you can talk about any obstacles life threw in your way that might have affected your performance. I know that some people are afraid of using these sections, but in fact you should if there is something we should know about. We are genuinely trying to take life circumstances into account when we evaluate scholarship applications, among other things. The kinds of things you might want to let us know about include having a disability (that you could document on request), taking time off to start a family, having to look after a sick parent or child, and so on. If anything has held you back from taking a full course load, completing a degree in the “usual” amount of time, or negatively affected your grades over some period of time, let us know. We can’t take it into account if we don’t know about it.

Maybe I can close with a bit of general advice: The best way to learn to write good proposals is to work with someone who has been successful at this skill. Ask your supervisor or other mentors who are more advanced than you to look over what you have produced. Take their advice to heart. Don’t take it personally if they are very critical. In fact, you should especially thank the people who are very critical of your applications. They’re usually the ones who are giving you the most important feedback.

Running xppaut in Windows

Running xppaut in Windows is sometimes tricky. My new book on nonlinear dynamics gives brief instructions on installing the Cygwin X server and xppaut in Windows, but I’ve often had trouble getting xppaut to play nice with Cygwin/X. After playing around with it today, I think I’ve come up with a set of instructions that will work every time. And of course I expect to be proven wrong almost immediately… However, I’m still happy to share what I’ve learned.

What I’m trying to achieve here is a low-fuss installation that will let you run xppaut from the command line. Because I’m much more familiar with Unix shell programming than with DOS batch files, my solution involves the former. You’re going to be installing Cygwin anyway, so we might as well take advantage of its full power.

I will be leaving you to read the documentation for the details of how to accomplish some of the tasks below. None of them is beyond the abilities of an average person, and links to the documentation are given. Here are the steps:

  1. Install Cygwin. In the package installer, select the latest versions of xinit, xset and xhost for installation.
  2. Get the xppaut for Windows zip file. Unzip the package and put the xppall folder that it contains somewhere sensible. Bard Ermentrout recommends the top level of the boot (C:) drive, but I don’t think that’s necessary.
  3. Add the xppall folder’s location to your PATH environment variable. If you put this folder at the top level of your C: drive, you would add C:\xppall to your PATH variable.
  4. Create the following file using a file editor (Windows Notepad, or a Unix editor like vi or emacs; emacs must be installed first with the Cygwin installer if you want to use that) in the xppall folder that you just installed:

#!/bin/bash
# Script to run xppaut in Cygwin using the Cygwin/X server.
# You can call this script xpp, then invoke it on the command line as you would xppaut.

# Point X clients at the local display (:0.0 is the usual Cygwin/X default).
export DISPLAY=:0.0

# Start X server if one isn't already running.
if ! xset q >&/dev/null; then
    startxwin -- -listen tcp >&/dev/null &
    # The following 5-second pause will slow down startup, but ensures that the
    # X server is up before trying to call xhost, which otherwise may hang.
    sleep 5
    xhost + >&/dev/null
fi

# Run xppaut, passing through any command-line parameters supplied to this script.
xppaut "$@"

I recommend that you save this file into the xppall folder, using the file name xpp. Now open a Cygwin terminal and issue the following commands (assuming you put xppall at the top level of the C: drive):

cd /cygdrive/c/xppall
chmod u+x xpp

This will make this file executable. (It may already have been, but you might as well make sure.) If all went well, you should now be able to run xppaut by typing ‘xpp file.ode’ in a terminal window where, obviously, ‘file.ode’ would be replaced by the name of an ode file in the current working directory. There are a bunch of ode files in xppall/ode. I usually test a new installation of xppaut using lorenz.ode.

Note that this will work provided you do not start the XWin Server from the Start menu.

By all means let me know if you try this and run into problems. Within reason, I will try to help.

Frequently confused words

Some words are very frequently confused. Sometimes, this makes the writer’s intent unclear. In other cases, the meaning of the sentence may be clear, but it’s still distracting to those readers who know the difference. So it matters.

This little blog entry focuses on words that commonly appear in scientific writing and that are often confused or misused. There is a longer list of words commonly confused in general writing here: http://writing2.richmond.edu/writing/wweb/conford.html. By all means consult this source in addition to this post.

Principle/principal: “Principle” is a noun that means a fundamental rule, truth or law. It is never an adjective. The adjectival form of this word is “principled”. “Principal” can be either an adjective or a noun. As an adjective, it means “main” or “most important”. So all of you PIs out there are “Principal Investigators”. I hope that you are also “principled investigators”, but “Principle Investigator” would mean someone who carries out research into principles, which I suppose might be applied to ethicists, although it would be unusual to do so. As a noun, “principal” can have one of two meanings: It can mean the main person involved in some affair or transaction, as in “the principal in a lawsuit”, who might be the main plaintiff or defendant, or it can be the title of the leader of an educational institution, e.g. the “Principal of Queen’s University”.

Adapt/adopt: A thing that is adapted is changed to suit some particular purpose. For example, a figure that was adapted from a source was not just copied. Some details of the figure were changed, or else the original was used as a model for a new figure that still retains some resemblance to the original. On the other hand, something that is adopted is just used as is, without modification. You can, for example, adopt the procedure of Smith et al. (1902), which means that you used their procedure exactly as they described it. You can also adapt Smith et al.’s (1902) procedure if you need to change it to use it in a new context, or to work with a different set of instruments, etc.

Affect/effect: This pair can be confusing because both of these words can either be a noun or a verb, but with different meanings. I’m going to focus here on the most common uses of these words in scientific writing. If you’re a psychologist, you’re going to need to do additional reading on this topic because in that discipline, the noun forms of these words have highly technical meanings that you simply have to get right.

Almost always in scientific writing outside of psychology, you’re going to use “affect” as a verb and “effect” as a noun. If you just remember that, you should be in good shape. The verb “affect” means “to produce an effect in”. (Note the use of the noun “effect” in the definition of the verb “affect”.) So, for example, the weather affects the timing of plant flowering. The noun “effect” designates a consequence of some causative event or agent. Late flowering is an effect of cool weather. Similarly, we talk of cause and effect, not cause and affect, unless you’re a psychologist.

Complimentary/complementary: In scientific writing, you want “complementary”. “Complimentary” refers to receiving praise, or being given something free-of-charge, as in “complimentary drinks”. “Complementary” has the sense of one thing completing another. Thus we have complementary angles, complementary base pairs, etc.

Infer/imply: All of the words we have looked at so far had similar spelling. This pair falls into a different category of words that are semantically related. Inferring is a logical deduction made by a person. Note that a person infers something. Lately, I’ve been noticing people using infer when they should be using imply. To imply something is to suggest it. Data can imply a particular conclusion. But only a person can infer that the data implies something. A person infers. Data implies.

Roll/role: “Roll” has to do with the action of rolling. For example, one can roll dice, or roll across the countryside in a car. A “role”, on the other hand, is a part that something plays. So mitochondria play a central role in the energy metabolism of a cell, for example.

Refute: This word isn’t a member of a simple pair, but lately I have noticed it being misused quite a lot. “Refute” has exactly one meaning: it is to prove an argument or hypothesis wrong. Note the word “prove”. To refute something is not merely to argue against it, or to provide a counterargument, or to present contradictory data. If you have refuted a hypothesis, it’s dead. It’s a very strong word, and rarely applicable. But good for you if you have managed to refute something. It’s probably a significant achievement. If it’s still at the stage where the thing is debatable, then you need a different word. It’s hard to give specific advice here, because there are many possible nuances, but here are some possible phrases you might use: “argue instead/against”, “provide a counterargument/rebuttal”, “reply”, “respond”, “cite as evidence against”, “deny”, “contradict”, “dissent”, “reject”. The variety of nuance in just these options hopefully suggests one of the problems with misusing “refute”: if it’s clear you don’t actually mean that something was conclusively disproved, what do you in fact mean? If you’re tempted to use “refute”, I would strongly suggest that you think carefully about what you really mean, and then use plain language, which may involve a complete rewriting of your sentence. For example, “Jones and Wang (2001) refuted Amato and Sveshnikov’s (1998) hypothesis”, if it doesn’t actually mean that they disproved the hypothesis, might be rewritten in any of the following ways, among many others, depending on what you’re trying to say: “Amato and Sveshnikov’s (1998) hypothesis was contradicted by Jones and Wang’s (2001) interpretation of the data”; “Jones and Wang (2001) showed that Amato and Sveshnikov’s (1998) hypothesis was more plausibly consistent with…”; “Jones and Wang (2001) argued that Amato and Sveshnikov’s (1998) hypothesis was incompatible with…”

Delivery of a clear message requires clear language, and that means using the right words to express a thought.

Climate change mitigation measured in gas tanks

A lot of the discussion around what we need to do to slow down climate change is described to us in tonnes of CO2. The trouble is of course that most of us don’t know what a tonne of CO2 looks like. I thought I would try to bring this discussion into terms that most of us would understand by rephrasing it in terms of gas tanks. Keeping in mind that not all carbon emissions come from burning gasoline in a car, a gas tank is still probably a more useful visualization for most of us than a tonne of CO2. Note also that what we really care about is the total warming potential of all greenhouse gases released into the atmosphere, which is usually measured in CO2 equivalents. But since the basic unit of measure is still a tonne of CO2, the discussion below is framed in terms of CO2.

First of course we have to decide how big a tank we’re going to use. Because there’s a precedent for using a 50 L tank, that’s what I’m going to use as my standard tank. That’s the size of tank you have in a typical smaller car. At 2.3 kg of CO2 per liter of gasoline, a 50 L tank will produce 115 kg of CO2 when burned in your automobile engine. Conversely, a tonne of CO2 would be equivalent to about 8.7 tanks.

To meet its Paris accord commitments, Canada needs to cut its emissions by about 205 million tonnes of CO2 between now and 2030. (Some of you will say, “but our Paris commitments aren’t enough!” You’re right, of course, but it’s a baseline to aspire to in the short run.) As I write this, the population of Canada is about 37.6 million, so that’s 5.5 tonnes per Canadian per year. That’s about 48 gas tanks per person per year. Note that this figure includes CO2 emissions from industry and from private use, but keep in mind too that this does not include all of the carbon emissions you are responsible for through your purchases of foreign-made goods, which are accounted for in the country where these emissions are produced. So, for example, if you buy a pair of shoes made in Vietnam, those are Vietnam’s emissions, even though you are the person driving these emissions. StatsCan tried to estimate household contributions to greenhouse gas emissions (not including foreign emissions for goods imported into and consumed in Canada) a bit over a decade ago, and found that households were responsible for about 46% of Canada’s greenhouse gas emissions, either directly or indirectly. Assuming a similar ratio still holds, each of us is on the hook for about 22 gas tanks per year.
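For anyone who wants to check the arithmetic, here is a short Python sketch that reproduces the figures above. All of the inputs (2.3 kg of CO2 per litre, a 50 L tank, the 205-million-tonne target, 37.6 million Canadians, the 46% household share) come straight from the text; any small differences from the numbers in the prose are just rounding.

```python
# Gas-tank arithmetic from the text. All inputs are the figures quoted
# in the post; outputs differ from the prose only by rounding.

KG_CO2_PER_LITRE = 2.3      # kg of CO2 per litre of gasoline burned
TANK_LITRES = 50            # the "standard tank" used throughout

kg_per_tank = KG_CO2_PER_LITRE * TANK_LITRES      # 115 kg per tank
tanks_per_tonne = 1000 / kg_per_tank              # ~8.7 tanks per tonne of CO2

target_tonnes = 205e6       # Canada's reduction target, tonnes of CO2
population = 37.6e6
tonnes_per_person = target_tonnes / population    # ~5.5 t per person per year
tanks_per_person = tonnes_per_person * tanks_per_tonne   # ~47-48 tanks

household_share = 0.46      # StatsCan household share of emissions
household_tanks = tanks_per_person * household_share     # ~22 tanks
```

(Carrying the unrounded values through gives just over 47 tanks per person; the 48 in the text comes from rounding to 5.5 tonnes and 8.7 tanks first.)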

I don’t know about you, but I don’t think I fill up my gas tank 22 times per year. Remember: those are 50 L tankfuls. A lot of the times I “fill” my tank, I’m only buying 30 or 40 L of fuel. So I could stop driving completely, and that wouldn’t do it, especially when you consider that I live in a three-person household with just one car, so I can’t count on my wife and son to cut 22 fill-ups of cars they don’t have! The idea here isn’t to think in terms of literal gas tanks, but in terms of gas-tank equivalents. Between the three of us, my wife, my son and I need to cut about 66 gas-tank equivalents out of the emissions we’re responsible for.

There are plenty of web sites that will tell you what you can do to reduce your personal carbon emissions. Clearly, if I can drive less and use my bike or public transit more, that helps. Equally clearly, that alone won’t get us there. One of the things that will make a big difference that politicians don’t like to talk about is that we’re probably all going to have to just buy less stuff. I’m going to pull a few figures from Mike Berners-Lee’s excellent book How Bad Are Bananas? to make this point.

Let’s say that building the car you want to buy will produce 15 tonnes of CO2, about what it takes to build a midsize car. That’s 130 gas tanks. You could of course avoid causing those emissions by buying a used car, which won’t cause any extra emissions. But of course, eventually someone has to buy a new car (assuming we don’t all start riding public transit, but that only works for urban dwellers), and let’s suppose that you decide that you really want a new car. You could just buy a smaller car. Some cars have an emissions impact of as little as 6 tonnes of CO2, or 52 gas tanks. Even if you don’t go to the smallest car available, you could easily shave 30 or 40 gas tanks from your emissions just by buying a smaller car.

But wait! Those emissions should be amortized over the time you own the car, right? The average Canadian owns a new car for about 6 years before trading it in. So the impact of your 130-tank car over your period of ownership is about 22 gas tanks per year. Coincidentally, this is how much you need to cut out of your annual emissions, so if you can go car-free, you’ve pretty much done your part (but you might have to find other reductions if a family is sharing a car, as in our case). Going to a smaller car might save 7 gas tanks per year, which is about a third of the 22 tanks per year you need to cut out of your lifestyle. Not bad! But what if you really want that 130-tank car? If you keep it an extra two years, the impact of your new car becomes about 16 tanks per year, so you are reducing your carbon emissions by about the same amount as you would by buying a smaller car, just by keeping your car a bit longer. And obviously, this emissions reduction strategy just gets better the longer you keep the car.
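The amortization above can be sketched in a few lines of Python. The 130-tank midsize car, the 6-year ownership period, and the 40-tank saving from a smaller car are the figures from the text.

```python
# Amortizing a new car's manufacturing emissions over the ownership period,
# in 50 L gas-tank equivalents per year. Figures are from the text.

def tanks_per_year(build_tanks: float, years_owned: float) -> float:
    """Manufacturing emissions spread over the years you own the car."""
    return build_tanks / years_owned

midsize = tanks_per_year(130, 6)      # ~22 tanks/yr for a 15 t midsize car
kept_longer = tanks_per_year(130, 8)  # ~16 tanks/yr if kept two extra years
shave_40 = tanks_per_year(40, 6)      # ~7 tanks/yr saved by a smaller car
```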

And what about those Vietnamese shoes I mentioned earlier? Making the average pair of shoes and transporting it to a store near you results in emissions of about 11.5 kg of CO2, or about a tenth of a tank of gas. I probably buy two to three pairs of shoes per year, so for me, this isn’t worth thinking about. But if you’re a shopaholic who loves shoes, well, I’ll let you do your own calculation…

I suspect that if you’re going to buy clothes, shoes and accessories and are actually going to wear them until they’re ready for disposal, there probably aren’t significant emissions savings to be made by changing your shopping habits. However, some of us, and you know who you are, do buy stuff we won’t wear much before putting it into the basement. Then those fractions of a gas tank really start to add up. As a general rule, buy less, and buy used if you want to cut your carbon footprint. This applies not only to clothes, but to anything else we buy on a whim and then barely use.

And the general idea of buying what you need and using it applies to food, too. Food waste is a massive contributor to greenhouse gas emissions: Because food is wasted, it is necessary to overproduce food, which leads to deforestation, i.e. loss of an important carbon sink. Moreover, agriculture has a direct energy cost, so more food grown means more emissions from the agriculture sector. Then there is the transport of food that will never be eaten. And rotting food often produces methane, an even more potent greenhouse gas than carbon dioxide. A rough estimate is that household food waste (as opposed to food that is wasted somewhere in the supply chain) amounts to about a quarter tonne of CO2 per person per year in Canada, or 2.2 gas tanks. Not a huge number, but still about 10% of the emissions you need to cut per year. Roughly speaking, to reduce the amount of food you waste, you have to buy things you plan to eat, and make sure you actually use them before they go bad. Sounds simple, but it does take a bit of a mental adjustment to our shopping and cooking habits.
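As a quick sanity check on the food-waste figure, using the 115 kg of CO2 per 50 L tank quoted in the text:

```python
# A quarter tonne of CO2 per person per year from household food waste,
# converted to 50 L gas-tank equivalents (115 kg of CO2 per tank).

KG_PER_TANK = 50 * 2.3                        # 115 kg of CO2 per tank
food_waste_tanks = 0.25 * 1000 / KG_PER_TANK  # ~2.2 tanks per person per year
share_of_target = food_waste_tanks / 22       # ~10% of the 22-tank annual cut
```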

So there you have it. Climate footprint and emissions reductions conceptualized in gas-tank equivalents. Hopefully this helps you understand the size of the problem a bit better, and also puts in perspective some of the things you can do to reduce your climate impact. A lot of the advice comes down to buying less stuff and using it for longer (or using it at all in the case of food). And as an added bonus, if you spend less, you’ll have more money in your bank account for a rainy day. Win-win.

Publications in CVs

I’m currently chairing the Ph.D. program committee at the University of Lethbridge, and I just finished reading the files of this year’s applicants to our program. At the UofL (and elsewhere), students applying to the Ph.D. program have to submit a CV. And of course, if you have publications, they should be in your CV. The trouble with many of the files I’m reading is that students don’t give full bibliographic details for their papers, which means that I sometimes have to do some additional digging if there is something I want to check. Here are some things I sometimes find missing:

  1. A page range or article number. Yes, I know, the DOI should be enough, but if I decide to go looking for your paper for some reason, it’s often more convenient to have the first page number or article number (along with the volume number) than the DOI. Why? Because some journals make it particularly efficient to find papers with the volume and page number.
  2. The DOI. At the risk of contradicting myself, it’s sometimes easier to have a DOI. The DOI is especially useful if the journal is a bit obscure.
  3. The volume number. Well, duh!, you might say. But a surprising number of people forget to put that in.
  4. The year. Ditto.
  5. The issue number can be useful, depending on the journal, so by all means include that, too.
  6. For articles in journals that use article numbers rather than pages, the number of pages. This gives me some idea whether I’m looking at a letter-style publication or a full paper. I know it’s not foolproof, but it does help.

The point is that the more bibliographic details you include, the easier you make it to find your paper should someone wish to do so.

Finally, make sure that those bibliographic details are correct! You would be surprised at how many slightly mangled journal titles there are in people’s CVs, for example. That makes it hard to find the paper. It might cast doubt on whether the paper exists at all. Or it might just convince a person reading your CV that you don’t pay much attention to detail. Probably not the impression you want to leave.

On a related note, if you have multi-authored conference presentations in your CV, please indicate clearly whether or not you were the presenter. You can use a special mark (asterisk, boldface or italics) for the presenter, or you can separate your presentations into ones you presented yourself and ones that other people presented. Without this, long lists of multi-authored presentations are uninformative, and may be seen as padding your CV.

Before you write your thesis, read the instructions

I have a little tip today for those of you preparing to write a thesis: Before you start, read your university’s or department’s thesis guidelines. There are some things that are easy to do as you’re writing your thesis, but a pain to do after, like compiling a table of abbreviations, which is usually required. If you read the thesis guidelines before you start writing, you can make notes of the things that you will need to do, and probably save a lot of time later on. It’s quite likely that you will discover things you’re supposed to do that you wouldn’t otherwise have thought of on your own.

I would also suggest that you frequently go back to those guidelines during the writing process. If you’re wondering how you’re supposed to format figure captions, the thesis guidelines probably answer this question. If you’re not sure what is expected in a thesis abstract (it varies from school to school), or whether you need to write a longer summary in addition to the abstract (required in some places), look no further than your university’s thesis guidelines.

Every School of Graduate Studies has a person whose job is to make sure that theses meet the local requirements. This person generally doesn’t look at your thesis until you have defended it and have completed your revisions. It’s a lousy time to find out that you need to add something, or rewrite the abstract, or reformat the whole thing. A few minutes of reasonably careful reading ahead of time will save you all these headaches. It’s a smart investment of your time.

Incidentally, the same principle applies to lots of other things: Reading instructions for scholarship or grant applications, or instructions in job ads about what you are supposed to submit in your application, will generally repay handsomely the small amount of time you devote to this activity. In the case of a thesis, the worst that will happen if you mess something up is that you will be told to fix it. For a grant or job application, not following the instructions may mean that your application isn’t even considered.

So just “read the instructions, that’s how you get it right”, as the Doodlebops so eloquently put it.