“Hey” as an email greeting

[Comic: “Piled Higher and Deeper” by Jorge Cham, reproduced with permission; www.phdcomics.com]

Like every other teacher, from time to time, I receive an email from a student written in an excessively informal style. Cham’s comic, reproduced above, is pretty close to some of the emails we receive. About a year ago, after receiving yet another such email, I decided to send the following to one of my classes (copied unedited in its entirety):

There’s more to university than just learning formal material, so I hope you won’t mind if, from time to time, I give you some life advice.

Recently, I have received a number of emails from students that start with “hey”, or other equally informal forms of address. Now, I don’t insist that you call me “Professor Roussel” (although you certainly can), but you should watch how you write emails to people who aren’t in your close circle of friends. At this point, you are training to become professionals. You will (hopefully) all end up in responsible positions where you will eventually be interacting both with people supervising you and with people you are supervising. Both your superiors and the people who report to you expect professionals to maintain a certain level of decorum, and people sometimes get very offended when an email starts with “hey”. It’s just not a respectful form of address.

You’re now at a place where we expect a bit more formality. It’s time to start writing emails that look like they were written by a grownup. Take a moment to write a proper greeting when you’re writing one of your instructors. “Dear Marc” is OK, if you’re comfortable with that. Otherwise, “Professor Roussel” will do fine. Most of your emails will be asking your instructors for something: information, help with something, an appointment. It’s best in those cases to put your best foot forward and to be polite. Moreover, this is where you form the habits that will carry you through your first few years of work. You just don’t want to write your boss an email that starts with “hey”.

By the way, this isn’t directed at anyone in particular. I don’t require an apology from anyone. I just want to help you take the next step in your development as young adults.

Some months later, a fellow faculty member told me that a young person she knew had heard about this email and liked it so much that my colleague was planning to show it to some of her students (with my permission, which I was only too happy to give). And then there’s a blog post by Chris Blattman on email etiquette for students that makes somewhat similar points, as well as the brilliant Ph.D. comic reproduced above (published, and added here, long after this post was originally written). (Thanks to Paul Hayes for bringing Blattman’s post to my attention.) Clearly I’m not the only person who thinks this way.

I wrote the above email to my students hoping to get them thinking about appropriate levels of formality for different situations. (Other than the indirect report mentioned above and one other like it, I have no idea whether I made any impact. One student commented on their teaching evaluation that this email wasn’t treating them “as adults”, but of course the point of my email was that some of them weren’t writing adult emails.) As I wrote at the time, university isn’t just about learning specific subject matter, and I worry that the current generation of students is badly equipped for the social aspects of the work world.

A lot of the rapid communication methods we have now assume a certain level of informality. (You’re not going to have elaborate greetings in a 140-character tweet or, generally, in a text message composed on a phone.) The problem is that this informal style of communication doesn’t translate well to other media, to many social situations, across age groups, or across cultures. When, where and how are students supposed to learn this? It seems to me that electronic etiquette has to be woven into the informal curriculum from a young age, and reinforced all the way through the education system.

Right now, we have a generation of teachers (and I include myself in this group) who mostly came to electronic communications after they had learned other means of communication. While in many contexts this is viewed as a disadvantage, in this case I think that we older folk are actually better equipped to navigate the multiple levels of formality needed to get through a day, including the correct levels of formality for various forms of electronic communication. When I was learning to write letters in school, we were first taught how to write formal letters. Having learned what a formal letter looked like, relaxing some of those rules when writing to friends or loved ones became a conscious choice, making it unlikely that we would, say, write an overly informal memo to the boss. Going the other way, starting with informal communication styles and then trying to raise the level of formality as required, is, I suspect, much harder.

Chemists and ethics

A few weeks ago, I read a very interesting paper surveying graduate students about the ethical behavior of people around them [1]. The paper is a little old, but it’s still worth a read, if only to remind ourselves that students do see things, and that what they see affects them in various ways.

Basically, the survey asked students if they had seen various kinds of misconduct. As a chemist, the following passage really struck a nerve:

Chemistry students are most likely to be exposed to research misconduct, chemistry and microbiology students are most likely to observe employment misconduct… [p. 337]

That bothered me. Why should there be more misconduct in chemistry than in the other fields of research included in this survey? I know enough to understand the limitations of surveys with smallish samples, particularly surveys that rely on voluntary participation, but still, it bothered me. The paper does go on to discuss departmental characteristics that explain much of the variance between disciplines, so these results may simply reflect a small-sample fluctuation at the department level. Still, it bothered me.

Reading this paper started me thinking about other things. A few years ago, I had the privilege to teach a course entitled Contemporary Chemistry. This is a regular course offering in our Department, which all undergraduate chemistry majors have to take as part of their degrees. We do a number of different things there: The departmental seminar program runs in the time slot of this course; we work on the students’ writing and oral presentation skills; and when I taught it, I introduced an ethics module. It was interesting getting students to grapple with various ethical dilemmas, including their responsibilities as members of a profession, which was a new idea for them.

While preparing for this course, I bought a book entitled The Ethical Chemist by Jeffrey Kovac [2]. There are a lot of things I really liked about this book, particularly its emphasis on ethics as a practical matter: Like it or not, you’re going to run into ethical conundrums, so you need the knowledge and skills to deal with them, just as you need to know how to recrystallize compounds or interpret NMR spectra. The book includes a rich variety of case studies, some of which are less straightforward than others. If you’re going to teach an ethics module to chemists, I highly recommend this book.

I did, however, find myself disagreeing with the book’s discussion of the reporting of yields, which is presented as a set of case studies. Here is an excerpt from one of them:

You […] have just finished the synthesis and characterization of a new compound and are working on a communication to a major journal reporting the work. While you have made and isolated the new compound, you have not yet optimized the synthetic steps, so the final yield is only 10%. From past experience, you know that you probably will be able to improve the yield to at least 50% by refining the procedure. Therefore, when writing the communication, you report the projected yield of 50% rather than the actual figure. [p. 29]

Then, when discussing this case study, Kovac makes the point that scientific papers tell a linear story, without all the twists, turns and dead-ends of laboratory research. So far, so good. However, he then writes

[…] an experienced researcher is convinced based on past history that the yield can and will be improved. Why not report the higher figure? By the time anyone reads the article it will be true. [p. 30]

I have a bunch of problems with this suggestion:

  • It may be highly probable that the yield can be improved to 50%, but it’s not a certainty. What if you can only get the yield up to 30%? It wouldn’t be research if we knew the outcome ahead of time, and you really don’t know that you can get a 50% yield until you actually get a 50% yield.
  • Let’s say you are eventually successful in reaching a 50% yield. You won’t get there with the reaction conditions you put into the manuscript. Those conditions will get you 10%. Every chemist I know has, at some point or other, complained that their colleagues withhold important details when writing up their syntheses. Here, you’re not withholding information you have in hand, but the effect is the same: You can’t get a 50% yield with the conditions disclosed in your paper. You might be giving your lab an edge by optimizing the synthesis after the paper has been sent out, but if you can’t correct the synthetic conditions before the paper is published, you are going to be wasting the time of every other lab that wants to follow up on your work.
  • If you can just put a larger number in the paper without doing the work, why would you optimize the synthesis at all? To me, making up numbers because you “know” you can get there is an extraordinarily slippery slope.

It’s an excellent book, and I really don’t want to beat up on Kovac. However, I wonder how other chemists feel about this. At what point have you crossed the line from presenting your data in its best possible light to fabrication? Even if we intend to correct the reaction conditions in the galleys prior to final publication, what are we teaching our students if we tell them that we’ll embellish the results now to maximize the probability of acceptance, or to make sure we win the race with another lab, or to pad our CVs before some grant or scholarship competition?

At some point, we have to say that we’re going to hold ourselves to the highest possible standards so that our students don’t grow up in an environment where they routinely observe misconduct [1]. I think we owe it to them, and we owe it to the society that pays us to do research and to teach.

[1] Anderson, M. S.; Louis, K. S.; Earle, J. Disciplinary and Departmental Effects on Observations of Faculty and Student Misconduct. J. Higher Ed. 1994, 65, 331–350.

[2] Kovac, J. The Ethical Chemist. Pearson Prentice Hall, 2004.

Heat and energy

A few months ago, I had a very interesting email interaction with Stephen Rader of the University of Northern British Columbia about heat, energy, and related concepts, prompted by his reading of my book. With Stephen’s permission, here is a transcript of that exchange, slightly edited so that it makes sense as a dialog:

Stephen: As I read the intro chapter to thermodynamics, I was interested to see that you define heat as an interaction between systems. I have always thought of heat (and explained it to my students) as energy — something that a system can contain different amounts of. Am I mistaken? Your definition makes it sound as though there cannot be any heat without having more than one system, which I am having a hard time wrapping my head around.

Marc: This is a subtle question. The short answer is yes, you have been mistaken, but a lot of very bright people have struggled with this question too.

The proof that bodies don’t contain heat (the central idea of the caloric theory) comes from Rumford’s cannon-boring experiments. Rumford was a British physicist and professional soldier who, at one point in his colorful career, was assigned the task of overseeing the boring of cannons in Munich. He became interested in the amount of heat liberated by this process, and made some rough measurements of it. The amount of heat was extremely large: had that much heat been held in the metal prior to the act of boring, the metal could not have existed in solid form. (The caloric theory would have said that boring the cannon released the caloric it previously contained. I realize that you wouldn’t have explained this experiment this way, but bear with me.)

We now understand this experiment as follows: During the boring of Rumford’s cannons, work is done by the boring tool on the metal blank. The work represents a transfer of energy from some energy source turning the tool to the tool/blank/cuttings system. This energy shows up in the “products” in various ways: Cutting, which breaks metallic bonds and also tends to create crumpled cuttings, obviously takes energy, which increases the potential energy of the products in various ways (mostly, the potential energy associated with the ability of the cut surfaces to form new bonds is increased). Much of the energy not used for cutting per se ends up being stored in the tool, blank and cuttings. How is it stored? Basically, it is stored by populating higher energy levels of the (in this case) metallic lattice, which is to say that it raises the temperature of the object. If this excess energy (relative to the thermal population at ambient temperature) is subsequently transferred to another material by contact alone (to the air, or to the water typically used to cool the tool and workpiece in these processes), then we can talk about heat transfer. However, the excess energy could also come out (in part) as work: I could use it to operate a heat engine or a thermoelectric generator. There is therefore no way to identify how much of the excess energy held by the body is heat. While it’s stored in a body, it’s just energy.

Now consider an ideal gas compressed reversibly and adiabatically. We’re doing work on the gas, but you’re probably familiar with the fact that the temperature of the gas will increase during this operation. The work done has been stored as energy in the gas, resulting in a temperature increase. (We could make a similar argument for a non-ideal gas, but there we would have the added complication that the energy depends on both the temperature and the volume.) One possible way for this energy to come out is as heat: If we put the gas in thermal contact with a body at a lower temperature, heat will “flow” (bad language that is a holdover from the caloric theory, along with heat “capacity”). However, because the process was reversible, we could get all the original work out by just reversing the path, returning the gas to its original state. The key point here is that energy was stored. Whether we get out heat or work or a combination of both depends on how we allow the gas to interact with its surroundings after the energy has been stored.
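To put a number on that temperature increase: for a reversible adiabatic change in an ideal gas, TV^(γ−1) is constant. Here is a minimal sketch (the gas, volumes and starting temperature are invented for illustration):

```python
# Reversible adiabatic compression of an ideal gas: T * V**(gamma - 1)
# stays constant, so squeezing the gas raises its temperature even though
# no heat crosses the boundary.

gamma = 5.0 / 3.0   # heat capacity ratio; 5/3 for a monatomic ideal gas

def final_temperature(T1, V1, V2):
    """Temperature after a reversible adiabatic volume change V1 -> V2."""
    return T1 * (V1 / V2) ** (gamma - 1.0)

T1 = 298.15   # K, an illustrative starting temperature
T2 = final_temperature(T1, V1=10.0, V2=1.0)   # tenfold compression
print(f"T rises from {T1:.0f} K to {T2:.0f} K with no heat transferred")
```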

I hope that helps a little. As you noted yourself, it’s a hard issue to wrap your head around, especially since we have inherited a lot of unhelpful language and consequent imagery from the caloric theory. Many otherwise fine textbooks still describe these issues using inappropriate language. The idea that bodies store heat seems to be hard to extricate from the literature, and has become deeply embedded in our way of thinking. I had to write the paragraph on Rumford’s cannons above extremely carefully to avoid falling into misleading language or imagery myself.

Stephen: Do I understand correctly that the major flaw in the caloric theory is that, when we put energy into a system, we don’t know how it partitions between kinetic motions (that can be measured as an increased temperature) and other types of internal energy?

Are you, in effect, saying that we should not talk about heat except when it is transferred from one object or system to another? In other words, since the amount of available energy in a system is not defined until one tries to get it out, and how we get it out determines how much there is, that we can only talk about the energy of an object or system, rather than how much heat is in it?

I tend to think about thermodynamic properties in very concrete terms (what the atoms are doing), which probably hinders my understanding of some of these concepts.

Marc: These questions raise several interesting issues. I will deal with them one at a time.

Do I understand correctly that the major flaw in the caloric theory is that, when we put energy into a system, we don’t know how it partitions between kinetic motions (that can be measured as an increased temperature) and other types of internal energy?

I wouldn’t say so. The real problem was that various lines of evidence acquired by Rumford, Joule and others show that heat can be made (in sometimes impressive quantities) by processes that cannot be explained as involving the liberation of heat already contained within a body.

It’s important to try to tease apart the concepts of heat and temperature in our heads. They have some important connections, but they are intrinsically different. By bringing up “kinetic motions”, I suspect that you are thinking of the definition we’ve all heard of heat as kinetic energy. The trouble is that there isn’t a clean distinction between kinetic and other forms of energy in quantum mechanics, and that if we’re out of thermal equilibrium, a system can have several different “temperatures”, or none at all.

Temperature is a surprisingly difficult quantity to define, but I think that most people in the field would tend to define it these days in terms of the Boltzmann distribution: The temperature is the value T such that, at thermal equilibrium, the energy of the system is distributed among the available energy levels according to a Boltzmann distribution. Now consider a monatomic gas. Such a gas can store energy in two significant ways: translational (kinetic) energy and electronic energy. At room temperature, the gap between the highest occupied and lowest unoccupied orbitals is so large that essentially all the atoms are in their electronic ground state. From a macroscopic, thermodynamic point of view, we would tend to say that no energy is stored in the electronic energy levels (give or take the arbitrary nature of what we call zero energy, which isn’t relevant to energy storage, the latter only involving energy I could somehow extract from the system).

Now imagine that I have my gas inside a perfectly optically reflective container. At some point in time, I open a small shutter and fire in a laser whose wavelength is tuned to match an absorption wavelength of the atoms. Some of the atoms will absorb photons and reach an excited state. If, before this system has a chance to equilibrate, I ask “what is the temperature of the system?”, I now have a problem. The translational energy will still obey a Boltzmann distribution for the pre-flash temperature T. However, the electronic energy distribution does not correspond to a Boltzmann distribution with temperature T. It may correspond to a Boltzmann distribution with a much higher temperature Te. If the laser was sufficiently intense, I might even have created a population inversion and have an electronic energy distribution that corresponds to a negative absolute temperature. (Negative absolute temperatures are a strange consequence of the way we have defined our temperature scale. They are hotter than any temperature that can be described by a normal Boltzmann distribution with a positive temperature.) The system therefore has, at best, two distinct temperatures. It’s also possible to come up with distributions that can’t be described by a temperature at all. (This might require two laser pulses at two different wavelengths.) Now if we wait long enough, we will return to an equilibrium (i.e. Boltzmann) distribution in this particular system.

Getting back to heat: it’s clear that the system we have prepared with our laser pulse is “hot”. I could certainly extract energy from it as heat. However, the system doesn’t, for the time being, have a single temperature, and the translational temperature grossly underestimates the energy available in the system. (Electronic energy can’t be teased apart into kinetic and potential contributions because of the way these quantities appear in quantum mechanics.) This is another way in which the connection between heat and temperature is problematic. Note also that in a nonequilibrium situation like this one, thermometers of different constructions would register different temperatures, unlike the situation for matter at equilibrium.
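To make the idea of an effective electronic temperature concrete, here is a minimal two-level sketch (the energy gap and populations are invented, and real atoms have more than two levels): inverting the Boltzmann ratio of the two populations gives the temperature, which comes out negative as soon as the upper state is the more populated one.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def effective_temperature(p_upper, p_lower, gap):
    """T such that p_upper / p_lower = exp(-gap / (k_B * T)).

    When the upper level is more populated than the lower one (a
    population inversion), the logarithm is positive and T is negative.
    """
    return -gap / (k_B * math.log(p_upper / p_lower))

gap = 3.0e-19   # J, an invented electronic energy gap (about 1.9 eV)

# A tiny upper-state population corresponds to an ordinary positive
# temperature (about 315 K for these numbers):
print(effective_temperature(1e-30, 1.0, gap))
# An inverted population corresponds to a negative absolute temperature:
print(effective_temperature(0.6, 0.4, gap))
```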

I’m cheating a little bit by describing a system out of equilibrium, because classical thermodynamics is a theory of equilibrium states. Nevertheless, the basic problem remains: from a statistical thermodynamic standpoint, temperature measures (loosely speaking) the energy levels accessible to a body, from which we can (if we have enough information about the energy levels) compute the total energy (relative to some arbitrarily selected zero, often the ground-state energy). There is no useful microscopic construct that corresponds to stored heat.

Are you, in effect, saying that we should not talk about heat except when it is transferred from one object or system to another?

Yes.

In other words, since the amount of available energy in a system is not defined until one tries to get it out, and how we get it out determines how much there is, that we can only talk about the energy of an object or system, rather than how much heat is in it?

I’m not sure that I would say that the amount of available energy in a system is not defined, since we should (give or take third-law limitations) be able to extract any energy above the ground-state energy. What I would say is that we can’t specify how much heat is in a body, because energy can be extracted as heat or as work. Let’s go back to my monatomic gas. If we allow it to come to equilibrium (which might take a long time, but we’re patient), the translational energy will increase and the electronic energy decrease until both obey a Boltzmann distribution. At that point, I will find that the system has a single, well-defined temperature larger than the original temperature T. (This assumes that my container is insulated and rigid.) I could extract energy from this system by putting it in thermal contact with another system at a lower temperature. Since p = nRT/V, the pressure will also have gone up, so I could also get some of the energy I put in back out as work, by allowing the gas to escape into a piston and using the motion of the piston to push something (e.g. turn a motor). Note that I could have achieved the same effect by heating the gas with a torch. Just because I put heat into a system doesn’t mean that I can only get heat out. Really, I’ve just increased the molecular energy, whose mean value (assuming there is a single temperature, as discussed above) is related to the observable temperature.
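To put rough numbers on this (everything invented: one mole of gas in a 10 L container that has equilibrated at 600 K; the work figure is for a reversible isothermal expansion, which is the best case):

```python
import math

R = 8.314462618   # gas constant, J/(mol K)

n = 1.0       # mol
V = 0.010     # m^3 (10 L)
T = 600.0     # K, an invented post-equilibration temperature

p = n * R * T / V   # ideal-gas pressure
print(f"p = {p / 1e5:.1f} bar")

# Maximum work recoverable by a reversible isothermal expansion to 2V:
w = n * R * T * math.log(2.0)
print(f"expanding reversibly and isothermally to 2V delivers {w:.0f} J as work")
```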

I tend to think about thermodynamic properties in very concrete terms (what the atoms are doing), which probably hinders my understanding of some of these concepts.

I always tell my students that, in pure classical thermodynamics, it doesn’t even matter whether atoms exist. In fact, Ernst Mach, who wrote some very influential material on thermodynamics, went to his grave maintaining that atoms were an unnecessary theoretical construct. Now, that being said, atomic theory enriches our understanding of what U and S mean, since it allows us to talk in reasonably concrete terms about where the energy has gone, or what exactly it is that S measures. I’m therefore not sure that it’s your “concrete” thinking that is getting in the way. Rather, it’s the language we use to describe heat that is the problem. This language puts incorrect images into our heads that are incredibly difficult to get rid of. Worse yet, we may have had educational experiences that reinforced those images rather than pointing out their rather severe limitations.

Canada-Wide Science Fair

This week, the Canada-Wide Science Fair (CWSF) was held in Lethbridge, and I had the very good fortune to be asked to serve as Deputy Chief Judge. It was what you might call “fun work”: a lot of late nights, but a huge reward at the end when it became clear that we had run a smooth event and that everyone, judges and finalists alike, was leaving the judging floor happy. A lot of the credit for that has to go to the long-term CWSF organizers, a group of veteran judges who call themselves the CWSF Judging Advisory Panel. These folks really know how to run a large-scale Science Fair!

Other people did most of the media work, but I was asked to do one interview today by the local CTV station. Here is the news item that resulted from that interview:

http://www.youtube.com/watch?v=JsJM79LXCuw&feature=youtu.be

The Pope is a chemist!

As I was listening to a Jesuit priest comment on the election of Pope Francis yesterday, my ears really perked up when it was mentioned that the Pope started out as a chemist. Well, after a little bit of digging, it turns out that he graduated from an industrial secondary school in Argentina as a chemical technician, probably roughly equivalent to a college diploma in chemical technology here. So it turns out you can start out as a chemical technician and end up Pope. Imagine where a bachelor’s degree in chemistry could take you!

Seriously, you really should think about it. There are wonderful careers to be had in chemistry. An article in Canadian Business last year ranked chemistry 5th among professions in terms of demand and recent salary growth. So you may not end up being Pope (or Chancellor of Germany—Angela Merkel has a doctorate in quantum chemistry), but you ought to be able to make a very good living as a chemist.

Naturwissenschaften’s 100 most cited papers (continued)

One very interesting area of application for ideas and techniques from nonlinear dynamics is the study of biological cycles. The circadian rhythm, the internal 24-hour clock that a very large number of organisms have, has been a particular object of study over the years. Naturwissenschaften‘s list of 100 most cited papers includes a classic paper by Aschoff and Pohl on phase relations between a circadian rhythm and a zeitgeber (a stimulus that entrains the clock). The most prominent zeitgeber is of course the day-night cycle, but other stimuli can reset your circadian clock, including meal times and social interaction. In this particular paper, the authors examine the relationship between the circadian phase (e.g. the time of maximum observed activity relative to the start of day) and the day length. Studies like these often use ideas from nonlinear dynamics on the entrainment of oscillators to derive insights into the workings of the clock from how the phase changes as the difference between the natural frequency of the clock and the entraining (zeitgeber) frequency increases. In this case, however, the authors focused on quantitative differences between the phase responses of different groups of organisms. We now know that there are several evolutionarily distinct circadian oscillators operating in different groups of organisms, to which the results of Aschoff and Pohl could likely be correlated.
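For readers who want the textbook result lurking behind this kind of analysis: in the simplest model of entrainment, the Adler equation (written here in generic notation, not notation from Aschoff and Pohl’s paper), a clock with natural frequency ω driven by a zeitgeber of frequency Ω at coupling strength ε has a phase difference ψ obeying

```latex
% Adler equation for a weakly forced oscillator (generic textbook form);
% psi is the phase difference between the clock and the zeitgeber.
\frac{d\psi}{dt} = (\omega - \Omega) - \varepsilon \sin\psi ,
\qquad
\psi^{*} = \arcsin\!\left(\frac{\omega - \Omega}{\varepsilon}\right) .
```

Locking is only possible when |ω − Ω| ≤ ε, and the locked phase ψ* shifts systematically as the detuning grows, which is precisely the kind of relationship such studies exploit.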

Coupled oscillators are a recurring theme in nonlinear dynamics, and the Naturwissenschaften list also includes a paper by Hübler and Lüscher on the possibility of controlling one oscillator using a driving signal derived from a second oscillator. Although this is very much a fundamental study, this kind of work has found a number of applications over the years. I have already mentioned the use of such studies to understand biological oscillators. Coupled oscillators show up all over the place, both in natural and in engineered systems.

One example that has been the focus of a lot of research is the use of coupled oscillators in secure communications. The problem here is that you want an authorized receiver to get your message, but you don’t want anyone else to be able to eavesdrop. I’m not sure what the current status of this research is, but there have been a number of proposals over the years to use coupled chaotic oscillators for this purpose. The original idea was (relatively) simple: If two chaotic oscillators have the same parameters, they can be made to synchronize by introducing a driving signal that increases with the difference between the transmitted signal and the signal computed at the receiver. Even small differences in parameters are enough to ruin the synchronization because of the sensitive dependence property of chaotic systems. If you add a low-amplitude message to the transmitted signal, the receiver will still synchronize to the transmitted chaotic “carrier”. The chaotic trajectory computed at the receiver can then be subtracted from the incoming signal, the difference being the superimposed message. The key to making this work is to share a set of parameters for the chaotic system via a private channel. Easy in principle, but there are lots of technical conditions that have to be met, and lots of variations to be explored to find the most secure means of encoding the message within the transmitted signal.
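To give a flavor of the original idea, here is a minimal toy sketch in the spirit of the Lorenz-based chaotic masking schemes. Everything in it is invented for illustration: the Lorenz parameters play the role of the shared secret, the message is a small sine wave, and a crude Euler integrator stands in for proper numerics.

```python
# Toy chaotic masking: a Lorenz transmitter hides a small message in its
# chaotic x signal; a receiver with the same (secret) parameters, driven
# by the received signal, synchronizes to the carrier and subtracts it.
import numpy as np

sigma, r, b = 10.0, 28.0, 8.0 / 3.0   # shared secret parameters
dt, n_steps = 1e-3, 60000

def lorenz_step(state, drive=None):
    """One Euler step. If drive is given, it replaces x in the y and z
    equations (an x-driven receiver); otherwise the system runs freely."""
    x, y, z = state
    u = x if drive is None else drive
    dx = sigma * (y - x)
    dy = r * u - y - u * z
    dz = u * y - b * z
    return state + dt * np.array([dx, dy, dz])

tx = np.array([1.0, 1.0, 1.0])    # transmitter state
rx = np.array([-5.0, 7.0, 20.0])  # receiver starts far away

t = np.arange(n_steps) * dt
message = 0.1 * np.sin(2 * np.pi * 5.0 * t)   # small message to hide
recovered = np.empty(n_steps)

for i in range(n_steps):
    s = tx[0] + message[i]         # transmitted signal: chaos + message
    rx = lorenz_step(rx, drive=s)  # receiver locks onto the chaotic carrier
    tx = lorenz_step(tx)
    recovered[i] = s - rx[0]       # subtract the reconstructed carrier

# After a transient, `recovered` roughly tracks `message`.
err = np.abs(recovered[n_steps // 2:] - message[n_steps // 2:]).mean()
print(f"mean recovery error after transient: {err:.3f}")
```

The structure is the point: the receiver is a copy of the transmitter driven by the received signal, it synchronizes to the chaotic carrier but not to the small message riding on it, and subtracting the reconstructed carrier leaves an approximation of the message.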

The control of a process by an oscillator sometimes also has spatial manifestations. An example of this is provided in the paper on slime-mold aggregation by Gerisch in the Naturwissenschaften top-100 list. When they are well fed, slime molds live as single cells. Starve them, and they aggregate, form a fruiting body, and disperse spores. How do they know where to go during the aggregation process? The answer turns out to involve periodic cyclic AMP (cAMP) signaling. Starving cells put out periodic pulses of cAMP. The cells don’t all signal at the same rate, and they tend to synchronize to and move toward the fastest signaler. Note again the importance of coupled oscillators: This works in part because the cells “listen” to each other’s cAMP signals and adjust the frequency of their own oscillator to match the fastest frequency they “hear”.
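The “follow the fastest signaler” step is easy to caricature in a few lines. This is only a cartoon of the frequency adjustment (the numbers are invented, and real cAMP signaling involves excitable dynamics, pulse relay and chemotaxis, none of which appear here):

```python
# A cartoon of "everyone ends up following the fastest signaler": each cell
# nudges its pulsing frequency toward the fastest frequency it hears.
import numpy as np

rng = np.random.default_rng(1)
freqs = rng.uniform(0.5, 1.0, size=20)   # pulses per minute, invented
kappa = 0.1                              # adaptation rate, 1/min
dt = 0.1                                 # min

for _ in range(5000):
    fastest = freqs.max()                # the pacemaker's rate
    freqs += kappa * (fastest - freqs) * dt

print(freqs.round(3))   # all cells now pulse at (essentially) the same rate
```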

Well, those are the things that caught my attention in the Naturwissenschaften list. There is lots of other wonderful stuff in there. I would love to hear what caught everyone else’s fancy.

100 most cited papers from Naturwissenschaften

The journal Naturwissenschaften turns 100 this year. Naturwissenschaften translates as “The Science of Nature”. It’s a journal that publishes papers in all areas of the biological sciences, broadly conceived. As many other journals have done, Naturwissenschaften is celebrating its 100th anniversary by posting a list of its 100 most cited papers. As with all such lists, especially with generalist journals like this one, what you find interesting may be different from what I find interesting, so it’s worth taking a look at the list yourself. However, if you’re reading this blog, perhaps we share some interests.

The first thing I noticed was that the list contained several of Manfred Eigen’s papers on biological self-organization, including his classic papers on the hypercycle. These papers were intended to address the problem of how biological organisms may have gotten started. The emphasis of this work tended to be on self-replicating molecular systems such as the hypercycle, a family of models consisting of networks of autocatalytic units coupled in a loop. I’m not sure how large a contribution these papers made to the problem of the origin of life, but they certainly caught people’s imaginations when they were written, and they led to interesting questions about the dynamics of systems with loops, questions that are still being actively studied. If you have never read anything about hypercycles and have an interest either in theories of the origin of life or in nonlinear dynamics, you should track down these papers and read them. They will likely seem a little dated (they were written in the 1970s), but I think they’re still interesting.
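For the curious, the elementary hypercycle equations are easy to write down and play with: species i is replicated at a rate proportional to its own concentration times that of its predecessor in the loop, and a dilution flux keeps the total concentration constant. A minimal sketch with invented rate constants:

```python
# Elementary hypercycle: n self-replicating species in a loop, each
# catalyzed by its predecessor, with a dilution flux phi that keeps the
# total concentration equal to 1.
import numpy as np

k = np.array([1.0, 0.8, 1.2, 0.9, 1.1])   # catalytic rate constants, invented
x = np.full(len(k), 1.0 / len(k))          # start at the uniform state
dt = 0.01

for _ in range(50000):
    growth = k * x * np.roll(x, 1)   # x_i grows with the help of x_{i-1}
    phi = growth.sum()               # dilution keeps sum(x) constant
    x += dt * (growth - x * phi)

print(x.round(3), "sum =", x.sum().round(6))
```

For loops of five or more members, these equations produce sustained oscillations, which is one of the dynamical features that made the models interesting.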

Also near the top of the list, we see a paper by Karplus and Schulz on the “Prediction of chain flexibility in proteins”. Protein dynamics is all the rage these days. Everybody wants to think about how their favorite protein moves. This wasn’t always so. In the 1980s, when this paper was published, we were starting to see a steady flow of high-quality X-ray protein structures. People were making very good use of these structures to understand protein function, and of course that is still the case. However, there was a tendency for biochemists back then to think of protein structure as an essentially static thing. This tendency was so pronounced that I remember attending a seminar in the mid-1990s at which the speaker made a point of talking about how cool it was that part of his enzyme could be shown to have a large-scale motion as part of its working cycle! The Karplus and Schulz paper therefore has to be understood in this context. At the time it was written, it wasn’t so easy to recognize flexible parts of proteins, and there was a lot of skepticism that flexibility was important to protein function. Needless to say, things have changed a lot.

The Naturwissenschaften list also includes a paper by Bada and Schroeder on the slow racemization of amino acids and its use for dating fossils. Living organisms mostly use the L isomers of the amino acids. Over time though, amino acids tend to racemize to a mixture of the L and D forms. While an organism is alive, this process is, in most tissues, completely insignificant, since proteins are turned over relatively rapidly. After an organism dies, turnover stops, racemization proceeds, and we can use the D to L ratio to date fossil materials. There are other interesting applications of this technique, including its use to determine the ages of recently deceased organisms from the eye lens nucleus, a structure formed in utero. I wrote about this dating technique in my book.
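The dating arithmetic is simple in the textbook limit: for a reversible first-order interconversion L ⇌ D with equal forward and reverse rate constants k, and D/L = 0 at death, one finds ln[(1 + D/L)/(1 − D/L)] = 2kt. A minimal sketch (the rate constant is invented; real applications calibrate k for the particular amino acid, temperature history and tissue, and correct for a nonzero initial D/L):

```python
import math

def racemization_age(d_over_l, k):
    """Age from a D/L ratio, for L <-> D with equal rate constants k
    and D/L = 0 at death (the simplest textbook case)."""
    return math.log((1 + d_over_l) / (1 - d_over_l)) / (2 * k)

k = 1.0e-3   # per year -- an invented rate constant, not a real calibration
print(f"D/L = 0.30 -> age of roughly {racemization_age(0.30, k):,.0f} years")
```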

I’ll come back to Naturwissenschaften‘s list in a few days. There are a number of other papers in there that I think are interesting.

Who I am and why you might read this blog

Welcome to my blog!

I teach chemistry and do research in mathematical biology at the University of Lethbridge. I have also written a textbook entitled A Life Scientist’s Guide to Physical Chemistry, published by Cambridge University Press. Here’s a picture of the very pretty cover that Cambridge designed for me:

[Cover of A Life Scientist’s Guide to Physical Chemistry]

This won’t be a blog that gets updated every day. Rather, I’m going to make occasional posts here about things that I think are worthy of public comment and where I think I have something interesting and unique to say. Most of the posts will revolve around physical chemistry, nonlinear dynamics, stochastic systems, and biochemistry, my major teaching and research interests. If these topics interest you, too, you might want to read this blog. I may from time to time delve into other topics, and maybe even hazard the occasional political opinion. Whether my posts outside of my main area of expertise will be of any interest will be up to you to decide.