Oxygen Fugacity

Oxygen fugacity (ƒO2) expresses the idealized partial pressure of a gas, in this case oxygen, in a nonideal mixture. It is a measure of the partial pressure of gaseous oxygen available to react in a particular environment (e.g., a protoplanetary disk, Earth's magma, or an asteroid's regolith), corrected for the gas's nonideal behavior, such as at high pressure and/or temperature, or even when the oxygen is bound within a mineral.

One may think of oxygen fugacity as the effective amount of oxygen that would be present as a gas (O2) under conditions where free gaseous oxygen is essentially absent, as in a magma. In a magma, oxygen occurs bound within silicate anions (e.g., SiO3 groups), as oxide ions, and perhaps even as dissolved O2 gas. Oxygen fugacity allows such complex systems, in which oxygen is present in many forms, to be described simply in terms of an equivalent ideal gas, thereby simplifying discussions of chemical reactions.
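This idea can be sketched with the standard thermodynamic relations (general textbook definitions, not specific to any one source here): fugacity enters through the chemical potential of O2 and reduces to the ordinary partial pressure in the ideal-gas limit.

```latex
% Chemical potential of O2 written in terms of fugacity,
% with f^0 = 1 bar as the standard-state fugacity:
\mu_{\mathrm{O_2}} = \mu^{0}_{\mathrm{O_2}}(T) + RT \ln\!\left(\frac{f_{\mathrm{O_2}}}{f^{0}}\right)

% Fugacity is the partial pressure corrected by a fugacity
% coefficient \varphi, which approaches 1 for an ideal gas:
f_{\mathrm{O_2}} = \varphi \, p_{\mathrm{O_2}}, \qquad
\varphi \to 1 \quad \text{as} \quad p \to 0
```

In other words, ƒO2 plays the role that the partial pressure p(O2) would play if O2 behaved ideally, which is why it can stand in for "available oxygen" even when almost no free O2 gas exists.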

Fugacity, as a general term, describes a substance's thermodynamic potential to transfer from one system to another. When two systems in contact have unequal fugacities of a chemical species, that species is redistributed between the systems, with net transfer from the higher-fugacity system to the lower-fugacity system. When the fugacities are equal, the net transfer is zero and the systems are in equilibrium.1
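This transfer rule follows from the chemical potential: at a given temperature, a higher fugacity implies a higher chemical potential, and matter flows toward lower chemical potential (again a standard relation, sketched here for two systems A and B):

```latex
f_{\mathrm{O_2}}^{A} > f_{\mathrm{O_2}}^{B}
\;\Longrightarrow\;
\mu_{\mathrm{O_2}}^{A} > \mu_{\mathrm{O_2}}^{B}
\;\Longrightarrow\;
\text{net transfer of } \mathrm{O_2} \text{ from } A \text{ to } B

f_{\mathrm{O_2}}^{A} = f_{\mathrm{O_2}}^{B}
\;\Longleftrightarrow\;
\text{equilibrium (zero net transfer)}
```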

The concept of fugacity was introduced by Lewis in 1908 and gained wide acceptance after the publication of the textbook by Lewis and Randall in 1923. In 1957, Eugster was the first to introduce the concept of gas fugacity into petrology. He experimented with externally controlled oxygen fugacity, observed the influence of oxidation potential on reactions giving rise to specific mineral assemblages, and designated the oxidation potential of oxygen as the partial oxygen pressure, p(O2), which was later renamed the oxygen fugacity by Eugster and Wones in 1962.1

