Category Archives: Physical Chemistry

Entropy Calculation
Entropy Calculation for Ideal Gas

Reversible Change: For the reversible isothermal expansion or compression of one mole of an ideal gas,

    \[{{q}_{\text{rev}}}=-w=RT\ln \frac{{{V}_{2}}}{{{V}_{1}}}\]

[using ΔU = q + w; for an isothermal change of an ideal gas ΔU = 0, so qrev = −w]

    \[\left( w=-RT\ln \frac{{{V}_{2}}}{{{V}_{1}}} \right)\]

    \[\Delta {{S}_{\text{system}}}=\frac{{{q}_{\text{rev}}}}{T}=R\ln \left( \frac{{{V}_{2}}}{{{V}_{1}}} \right)\]

qrev is the heat exchanged reversibly between the system and the surroundings at temperature T.

    \[\Delta {{S}_{\text{surrounding}}}=\frac{-{{q}_{\text{rev}}}}{T}\]

    \[\Delta {{S}_{\text{total}}}=\Delta {{S}_{\text{sys}}}+\Delta {{S}_{\text{surr}}}=0\]
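As a quick worked example (with assumed numbers, not taken from the text): for one mole of an ideal gas that doubles its volume reversibly and isothermally,

    \[\Delta {{S}_{\text{sys}}}=R\ln 2\approx 5.76\,\text{J}\,{{\text{K}}^{-1}}\,\text{mo}{{\text{l}}^{-1}},\qquad \Delta {{S}_{\text{surr}}}=-5.76\,\text{J}\,{{\text{K}}^{-1}}\,\text{mo}{{\text{l}}^{-1}},\qquad \Delta {{S}_{\text{total}}}=0\]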

Irreversible Change:

Case I : Free expansion: The gas expands into a vacuum, so for this process

w = 0,   q = 0

Since entropy is a state function, the entropy change of the system in going from volume V1 to V2 by any path will be the same as that for a reversible change.

Therefore, 

    \[\Delta {{S}_{\text{sys}}}=R\ln \frac{{{V}_{2}}}{{{V}_{1}}}\]

Since no heat is supplied by the surroundings,

    \[\Delta {{S}_{\text{surr}\text{.}}}=0\]

    \[\Delta {{S}_{\text{total}}}=\Delta {{S}_{\text{sys}}}+\Delta {{S}_{\text{surr}}}=R\ln \frac{{{V}_{2}}}{{{V}_{1}}}+0=R\ln \frac{{{V}_{2}}}{{{V}_{1}}}\]

Case II : Intermediate expansion (against a constant external pressure):

    \[\Delta {{S}_{\text{sys}}}=R\ln \frac{{{V}_{2}}}{{{V}_{1}}}=\frac{{{q}_{\text{rev}}}}{T}\]

 

Note: The entropy change of the system, ΔSsys, will be the same in all three processes because entropy is a state function.

                       qrev  = Heat absorbed by the system if the process had been carried out reversibly.

            For irreversible expansion, work is done against constant pressure.

                  qirr = −w = pext (V2 − V1)

                       

    \[\Delta {{S}_{\text{surr}\text{.}}}=\frac{-{{q}_{\text{irr}}}}{T}=-\frac{{{p}_{\text{ext}}}({{V}_{2}}-{{V}_{1}})}{T}\]

            Since the magnitude of the work done in the intermediate expansion is smaller than that involved in the reversible expansion,

            Therefore

    \[{{q}_{\text{irr}}}<\text{ }{{q}_{\text{rev}}}\]

    \[\Delta {{S}_{\text{Total}}}=\Delta {{S}_{\text{sys}}}+\Delta {{S}_{\text{surr}\text{.}}}=\frac{{{q}_{\text{rev}}}}{T}-\frac{{{q}_{\text{irr}}}}{T}=+ve\]
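A minimal numerical sketch of this bookkeeping (the values of T, V1, V2 and pext are assumed purely for illustration), comparing the reversible, free and intermediate isothermal expansions:

```python
# Entropy bookkeeping for the isothermal expansion of 1 mol of an ideal gas
# from V1 to V2. All numerical values below are assumed for illustration.
import math

R = 8.314                 # J K^-1 mol^-1
T = 300.0                 # K (assumed)
V1, V2 = 0.010, 0.020     # m^3 (assumed), so V2/V1 = 2

dS_sys = R * math.log(V2 / V1)       # same in every case: entropy is a state function

# Reversible expansion: q_rev = RT ln(V2/V1) is drawn from the surroundings
q_rev = R * T * math.log(V2 / V1)
print("reversible:   dS_total =", round(dS_sys - q_rev / T, 3))      # 0

# Free expansion into a vacuum: q = 0, so dS_surr = 0
print("free:         dS_total =", round(dS_sys, 3))                  # R ln 2 > 0

# Intermediate expansion against a constant external pressure p_ext
p_ext = 0.5 * R * T / V2             # assumed: half the final gas pressure
q_irr = p_ext * (V2 - V1)            # q_irr = -w = p_ext (V2 - V1)
print("intermediate: dS_total =", round(dS_sys - q_irr / T, 3))      # between 0 and R ln 2
```

In each case ΔSsys is the same; only the entropy change of the surroundings differs, so ΔStotal ranges from zero (reversible) up to R ln(V2/V1) (free expansion).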

Entropy Change in Adiabatic Expansion or Compression of an Ideal Gas

Entropy Change of Surroundings: Since in an adiabatic process q = 0, therefore

                   

    \[\Delta {{S}_{\text{surr}}}=0\]

   

Entropy Change of System: Since in an adiabatic process both temperature and volume (or pressure) change, the molar entropy change of the system is given by

                 

    \[\Delta {{S}_{\text{sys}}}={{C}_{V.\text{m}}}\ln \frac{{{T}_{2}}}{{{T}_{1}}}+R\ln \frac{{{V}_{2}}}{{{V}_{1}}}\]

                 

    \[\Delta {{S}_{\text{sys}}}={{C}_{p.\text{m}}}\ln \frac{{{T}_{2}}}{{{T}_{1}}}+R\ln \frac{{{p}_{1}}}{{{p}_{2}}}\]

Now, we proceed to evaluate the change in total entropy for the following categories.

Reversible change: In this case

 

    \[\Delta {{S}_{\text{sys}}}=0\]

                      

Since for the adiabatic reversible process,

                   

    \[{{C}_{V.\text{m}}}\ln \frac{{{T}_{2}}}{{{T}_{1}}}=-R\ln \frac{{{V}_{2}}}{{{V}_{1}}}\]

  

            And 

    \[{{C}_{p.\text{m}}}\ln \frac{{{T}_{2}}}{{{T}_{1}}}=-R\ln \frac{{{p}_{1}}}{{{p}_{2}}}\]

  

            Thus

    \[\Delta {{S}_{\text{Total}}}=\Delta {{S}_{\text{sys}}}+\Delta {{S}_{\text{surr}}}=0+0=0\]

      

In the present case of expansion (or compression), the increase (or decrease) in entropy due to the volume change just compensates the decrease (or increase) in entropy due to the fall (or rise) in temperature.

 Irreversible change: In this case,

                             

    \[\Delta {{S}_{\text{sys}}}=R\ln \frac{{{V}_{2}}}{{{V}_{1}}}+{{C}_{V.\text{m}}}\ln \frac{T_{2}^{'}}{{{T}_{1}}}\]

     

where T2′ is the actual final temperature of the system. Substituting R ln(V2/V1) = −CV,m ln(T2/T1) from the reversible adiabat above, this becomes

                         

    \[\Delta {{S}_{\text{sys}}}=-{{C}_{V.\text{m}}}\ln \frac{{{T}_{2}}}{{{T}_{1}}}+{{C}_{V.\text{m}}}\ln \frac{T_{2}^{'}}{{{T}_{1}}}\]

    

where T2 is the final temperature the gas would have reached if the same change had been carried out reversibly.

            Since we know that

                   

    \[{{w}_{\text{irr}}}>{{w}_{\text{rev}}}\]

           (including the sign of w)

            And moreover for adiabatic change

                 ΔU = w     

            It follows that

                 

    \[\Delta {{U}_{\text{irr}}}>\Delta {{U}_{\text{rev}}}\]

      

            Or       

    \[{{C}_{V.\text{m}}}\left( T_{2}^{'}-{{T}_{1}} \right)>{{C}_{V.\text{m}}}\left( {{T}_{2}}-{{T}_{1}} \right)\]

            Remembering that T2 < T1 in the expansion process and T2 > T1 in the compression process, we have T2′ > T2 in both cases.

            That is, the decrease in temperature during the irreversible expansion will be smaller, and the increase in temperature during the irreversible compression will be larger, than the corresponding change in the reversible process. Thus, we have

                       

    \[{{C}_{V.\text{m}}}\ln \frac{T_{2}^{'}}{{{T}_{1}}}>{{C}_{V.\text{m}}}\ln \frac{{{T}_{2}}}{{{T}_{1}}}\]

            Substituting this relation, we get

                       

    \[\Delta {{S}_{\text{sys}}}=+ve\]

            And thus

    \[\Delta {{S}_{\text{total}}}=\Delta {{S}_{\text{sys}}}+\Delta {{S}_{\text{surr}}}=\text{+}\,\text{ve}\]

            In the present case of expansion (or compression), the increase (or decrease) in entropy due to the volume change is larger (or smaller) than the decrease (or increase) in entropy due to the temperature change, and hence ΔSsys is positive.
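A short sketch of the irreversible adiabatic case (assuming 1 mol of a monatomic ideal gas with CV,m = 1.5R, and assumed values of T1, V1, V2 and pext):

```python
# Irreversible adiabatic expansion of 1 mol of a monatomic ideal gas from V1 to V2
# against a constant external pressure. All numerical values are assumed.
import math

R = 8.314
Cv = 1.5 * R                         # monatomic ideal gas (assumed)
T1, V1, V2 = 300.0, 0.010, 0.020     # K, m^3 (assumed)
p_ext = 8.0e4                        # Pa (assumed)

# Reversible adiabat between the same volumes: Cv ln(T2/T1) = -R ln(V2/V1)
T2_rev = T1 * (V1 / V2) ** (R / Cv)

# Irreversible path: q = 0, so Cv (T2' - T1) = w = -p_ext (V2 - V1)
T2_irr = T1 - p_ext * (V2 - V1) / Cv

dS_sys = Cv * math.log(T2_irr / T1) + R * math.log(V2 / V1)

print(f"T2 (reversible)    = {T2_rev:.1f} K")    # larger temperature drop
print(f"T2' (irreversible) = {T2_irr:.1f} K")    # smaller drop, so T2' > T2
print(f"dS_sys = dS_total  = {dS_sys:.2f} J/K")  # positive, since dS_surr = 0
```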

 

Quantum Mechanics Series – 6: Planck’s Theory

Max Planck Quantum Theory
Planck’s story begins in the physics department of the Kaiser Wilhelm Institute in Berlin, just before the turn of the century.

Planck was repeatedly confronted with reliable experimental data on black-body radiation. He was trying to explain it, but could not do so with the theoretical tools available at the time.

Planck was a very conservative member of the Prussian Academy, steeped in traditional methods of classical physics and a passionate advocate of thermodynamics. In fact, from his PhD thesis days in 1879 (the year Einstein was born) to his professorship at Berlin twenty years later, he had worked almost exclusively on problems related to the laws of thermodynamics. He believed that the Second Law, concerning entropy, went deeper and said more than was generally accepted.

Planck was attracted by the absolute and universal aspects of the black-body problem. Plausible arguments showed that at equilibrium, the curve of radiation intensity versus frequency should not depend on the size or shape of the cavity or on the materials of its walls. The formula should contain only the temperature, the radiation frequency and one or more universal constants which would be the same for all cavities and colours.
Finding this formula would mean discovering a relationship of quite fundamental theoretical interest.
This radiation law, whenever it is found, will be independent of special bodies and substances and will retain its importance for all times and cultures… even for non-terrestrial and non-human ones.

History has proved Planck’s insight to be more profound than even he thought. In 1990, scientists using the COBE satellite measured the background radiation at the edge of the universe (i.e. left over from the Big Bang), and found a perfect fit to his black-body radiation law.

Pre-Atomic Model of Matter
Planck knew the measurements by his friends Heinrich Rubens and Ferdinand Kurlbaum were extremely reliable.
Planck’s oscillators in the walls of the cavity


Planck started by introducing the idea of a collection of electric oscillators in the walls of the cavity, vibrating back and forth under thermal agitation.
(*Note! Nothing was known about atoms.)
Planck assumed that all possible frequencies would be present. He also expected the average frequency to increase at higher temperatures, as heating the walls caused the oscillators to vibrate faster and faster until thermal equilibrium was reached.

The electromagnetic theory could tell everything about the emission, absorption and propagation of the radiation, but nothing about the energy distribution at equilibrium. This was a thermodynamics problem.
Planck made certain assumptions, relating the average energy of the oscillators to their entropy, thereby obtaining a formula for the intensity of the radiation which he hoped would agree with the experimental results.

Planck tried to alter his expression for the entropy of the radiation by generalizing it, and eventually arrived at a new formula for the radiation intensity over the entire frequency range.

The constants C1 and C2 are numbers chosen by Planck to make the equation fit the experiments.
Among those present at the historic seminar was Heinrich Rubens. He went home immediately to compare his measurements with Planck’s formula. Working through the night, he found perfect agreement and told Planck early the next morning.

Planck had found the correct formula for the radiation law. Fine. But could he now use the formula to discover the underlying physics?

Planck’s Predicament
1. …..From the very day I formulated the radiation law, I began to devote myself to the task of investing it with true physical meaning.
2. After trying every possible approach using traditional classical applications of the laws of thermodynamics, I was desperate.
3. I was forced to consider the relation between entropy and probability according to Boltzmann’s ideas. After some of the most intense weeks of my life, the light began to appear.

Boltzmann’s statistical version of the Second law based on probabilities seemed Planck’s only alternative. But he rejected the underlying assumption of Boltzmann’s approach which allows the second law to be violated momentarily during fluctuations.

S = k log W

(Boltzmann’s version of the second law of thermodynamics.)
Not once in any of the forty or so papers that Planck wrote prior to 1900 did he use, or even refer to, Boltzmann’s statistical formulation of the second Law!

Chopping Up the Energy
So, Planck applied three of Boltzmann’s ideas about entropy:
1. His statistical equation to calculate the entropy.
2. His condition that the entropy must be a maximum (i.e. totally disordered) at equilibrium.
3. His counting technique to determine the probability W in the entropy equation.
To calculate the probability of the various possible arrangements, Planck followed Boltzmann’s method of dividing the energy of the oscillators into arbitrarily small but finite chunks. So the total energy was written as E = N e where N is an integer and e an arbitrarily small amount of energy. e would eventually become infinitesimally small as the chunks became infinite in number, consistent with the mathematical procedure.

A Quantum of Energy
1. I found that I had to choose energy units proportional to the oscillator frequency, namely e = hf, in order to obtain the correct form for the total energy. f is the frequency and h is a constant which would eventually be allowed to go to zero.
2. But then a remarkable thing happened. If I allowed the energy chunks to go to zero, as the procedure demanded, the general validity of the derived equation was destroyed. However…
3. I noticed that if I did not require the energy chunks or h to go to zero, I obtained my own exact radiation formula… which I knew was correct.

Eureka! Planck had stumbled across a mathematical method which at last gave some theoretical basis for his experimental radiation law – but only if the energy is discontinuous.
Even though he had no reason whatsoever to propose such a notion, he accepted it provisionally, for he had nothing better. He was thus forced to postulate that the quantity e = hf must be a finite amount and that h is not zero.
Thus, if this is correct, it must be concluded that it is not possible for an oscillator to absorb and emit energy in a continuous range. It must gain and lose energy discontinuously, in small indivisible units of e = hf, which Planck called “energy quanta”.


Now you can see why the classical theory failed in the high-frequency region of the black-body curve. In this region the quanta are so large (e = hf) that only a few vibration modes are excited.
With a decreasing number of modes to excite, the oscillators are suppressed and the radiation curve falls off at the high-frequency end. The ultraviolet catastrophe does not occur.

Planck’s quantum relation thus inhibits the equipartition of energy and not all modes have the same total energy. This is why we don’t get sunburn from a cup of coffee. (Think about it!)

The classical approach of Rayleigh-Jeans works fine at low frequencies, where all the available vibrational modes can be excited. At high frequencies, even though plenty of modes of vibration are possible (recall it’s easier to stuff short waves into a box), not many are excited, because it costs too much energy to make a quantum at a high frequency, since e = hf.
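To make this concrete, here is a small Python sketch using the standard Rayleigh-Jeans and Planck expressions for the spectral energy density (the formulas and the chosen temperature are not taken from this article). The two agree at low frequency, but the classical result keeps growing while Planck’s law is cut off once hf ≫ kT:

```python
# Compare the Rayleigh-Jeans and Planck spectral energy densities u(f, T)
# for black-body radiation. Standard textbook formulas; T is an assumed example.
import math

h = 6.626e-34      # J s, Planck's constant
k = 1.381e-23      # J/K, Boltzmann's constant
c = 2.998e8        # m/s, speed of light

def rayleigh_jeans(f, T):
    # classical equipartition: every mode carries kT, so u grows without limit as f**2
    return 8 * math.pi * f**2 * k * T / c**3

def planck(f, T):
    # quantized oscillators: modes with h f >> k T are "frozen out"
    return (8 * math.pi * h * f**3 / c**3) / math.expm1(h * f / (k * T))

T = 5000.0                       # K, assumed temperature
for f in (1e13, 1e14, 1e15):     # Hz
    print(f"f = {f:.0e} Hz:  RJ = {rayleigh_jeans(f, T):.3e}   Planck = {planck(f, T):.3e}")
```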

During his early morning walk on 14 December 1900, Planck told his son that he may have produced a work as important as that of Newton. Later that same day, he presented his result to the Berlin Physical Society, signalling the birth of quantum physics.

It had taken him less than two months to find an explanation for his own black-body radiation formula. Ironically, the discovery was accidental, caused by an incomplete mathematical procedure. An ignominious start to one of the greatest revolutions in the history of physics !
From this start would come an understanding of why statistical rules must be used for atoms, why atoms don’t glow all the time and why atomic electrons don’t spiral into the nucleus.

In early 1901, the constant h – today called Planck’s constant – appeared in print for the first time. The number is small –
h = 0.000 000 000 000 000 000 000 000 006 626 (in c.g.s. units, erg seconds; equivalently 6.626 × 10-34 joule seconds)
– but it is not zero! If it were, we would never be able to sit in front of a fire. In fact, the whole universe would be different. Be thankful for the little things in life.
Surprisingly, in spite of the important and revolutionary aspects of the black-body formula, it did not draw much attention in the early years of the 20th century. Even more surprisingly, Planck himself was not convinced of its validity.

I was so sceptical of the universality of Boltzmann’s entropy law that I spent years trying to explain my results in a less revolutionary way.
Now for the second experiment which could not be explained by classical physics. It is simpler, yet it inspired an even more profound explanation.

Quantum Mechanics Series – 5: Thermal Equilibrium and Fluctuations

The Thirty Year War (1900−30) – Quantum Physics Versus Classical Physics

There were three critical experiments in the pre-quantum era which could not be explained by a straightforward application of classical physics.
Each involved the interaction of radiation and matter as reported by reliable, experimental scientists.

The measurements were accurate and reproducible, yet paradoxical… the kind of situation a good theoretical physicist would die for.
We will describe each experiment step-by-step, pointing out the crisis engendered and the solution advanced by Max Planck, Albert Einstein and Niels Bohr respectively.

In putting forward their solution, these scientists made the first fundamental contributions to a new understanding of nature. Today the combined work of these three men, culminating in the Bohr model of the atom in 1913, is known as the Old Quantum Theory.

Black-Body Radiation
When an object is heated, it emits radiation consisting of electromagnetic waves, i.e. light with a broad range of frequencies.

Measurements made on the radiation escaping from a small hole in a closed heated oven – which in Germany we call a cavity – show that the intensity of the radiation varies very strongly with the frequency of the radiation.

The dominant frequency shifts to a higher value as the temperature is increased, as shown in the graph drawn from measurements made in the late 19th century.


A black-body is a body that completely absorbs all the electromagnetic radiation falling on it.

Inside a cavity the radiation has nowhere to go and is continuously being absorbed and re-emitted by the walls. Thus, a small opening will give off radiation emitted by the walls, not reflected, and thus is characteristic of the black body.
When the oven is only just warm, radiation is present but we can’t see it because it does not stimulate the eye. As it gets hotter and hotter, the frequencies reach the visible range and the cavity glows red like a heating ring on an electric cooker.

This is how early potters determined the temperature inside their kilns. They would note the colour of the fire in which the pots were heated, and the colour gave them an idea of the temperature. Already in 1792, the famous porcelain maker Josiah Wedgwood had noted that all bodies become red at the same temperature.

In 1896, a friend of Planck’s Wilhelm Wien, and others in the Berlin Reichsanstalt (Bureau of Standards) physics department put together an expensive empty cylinder of porcelain and platinum.
At Berlin’s Technische Hochschule, another of Planck’s close associates, Heinrich Rubens, operated a different oven.
These radiation curves – one of the central problems of theoretical physics in the late 1890s – were shown to be very similar to those calculated by Maxwell for the velocity (i.e. energy) distribution of heated gas molecules in a closed container.

Paradoxical Results
Could this black-body radiation problem be studied in the same way as Maxwell’s ideal gas… electromagnetic waves (instead of gas molecules) bouncing around in equilibrium with the walls of a closed container?
Wien derived a formula, based on some dubious theoretical arguments, which agreed well with published experiments, but only at the high-frequency part of the spectrum.
The English classical physicists Lord Rayleigh (1842−1919) and Sir James Jeans (1877−1946) used the same theoretical assumptions as Maxwell had done with his kinetic theory of gases.
The equation of Rayleigh and Jeans agreed well at low frequencies, but there was a real shock in the high-frequency region: the classical theory predicted an infinite intensity for the ultraviolet region and beyond, as shown in the graph. This was dubbed the ultraviolet catastrophe.
What does this result actually mean, and what went wrong?
The Rayleigh-Jeans result is clearly wrong; otherwise anyone who looked into the cavity would have had their eyeballs burned out.

This ultraviolet catastrophe became a serious paradox for classical physics.
If Rayleigh and Jeans were right, it would be dangerous for us even to sit in front of a fireplace.

If classical physicists had their way, the romantic glow of the embers would soon turn into life-threatening radiation. Something had to be done!

The Ultraviolet Catastrophe
Everyone agreed that Rayleigh and Jeans’ method was sound, so it is instructive to examine what they actually did and why it didn’t work.
1. We applied the statistical physics method to the waves by analogy with Maxwell’s gas particles, using the equipartition of energy, i.e. we assumed that the total energy of radiation is distributed equally among all possible vibration frequencies.
2. But there is one big difference in the case of waves. There is no limit on the number of modes of vibration that can be excited…
3. …because it’s easy to fit more and more waves into the container at higher and higher frequencies (i.e. the wavelengths get smaller and smaller).
4. Consequently, the amount of radiation predicted by the theory is unlimited and should keep getting stronger and stronger as the temperature is raised and the frequency increases.
5. No wonder it was known as the ultraviolet catastrophe.

Quantum Mechanics Series – 4 : The Existence of Atoms

The Existence of Atoms

The Indian philosopher Kanada, before 600 B.C., proposed that all matter consists of small particles, which he called “kan” – particles that came to carry his own name.

The Greek philosopher Democritus (c. 460−370 B.C.) independently proposed the concept of atoms (atomos means “indivisible” in Greek).

The idea was questioned by Aristotle and debated for hundreds of years before the English chemist John Dalton (1766 – 1844) used the atomic concept to predict the chemical properties of elements and compounds in 1806.

But it was not until a century later that a theoretical calculation by Einstein and experiments by the Frenchman Jean Perrin (1870−1942) persuaded the sceptics to accept the existence of atoms as a fact.

However, during the 19th century, even without physical proof of atoms, many theorists used the concept.

 

Averaging Diatomic Molecules

The Scottish physicist J.C. Maxwell, a confirmed atomist, developed his kinetic theory of gases in 1859.

 

This was qualitatively consistent with physical properties of gases, if we accept the notion that heating causes the molecules to move faster and bang into the container walls more frequently.

 

Maxwell’s theory was based on statistical averages, to see if the macroscopic properties (that is, those properties that can be measured in a laboratory) could be predicted from a microscopic model for a collection of gas molecules.

 

Maxwell, who gave the distribution of velocities for the gas particles, made the following assumptions:

1. The molecules are like hard spheres, with their diameters much smaller than the distance between them.

2. The collisions between molecules conserve energy.

3. The molecules move between collisions without interacting, at a constant speed in a straight line.

 

 

 

 

This last assumption was the most unusual and revolutionary, showing a great deal of physical insight by Maxwell.

It would be impossible to try to compute the individual motions of so many particles. But Maxwell’s analysis, based on Newton’s mechanics, showed that temperature is a measure of the microscopic mean squared velocity of the molecules.

The real importance of Maxwell’s theory is the prediction of the probable velocity distribution of the molecules, based on his model. In other words, this gives the range of velocities…how the whole collection deviates from the average.

The postulates of Maxwell’s theory make it possible to calculate the probability that a molecule chosen at random has a particular velocity.

Maxwell velocity distribution curve:

This is the well-known curve which physicists today call the Maxwell Distribution. It gives useful information about the billions and billions of molecules, even though the motion of an individual molecule can never be calculated. This is the use of probabilities when an exact calculation is impossible in practice.
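A minimal sketch of the distribution itself (using the standard Maxwell speed-distribution formula; nitrogen at 300 K is an assumed example, not something taken from the text):

```python
# Maxwell speed distribution f(v) for a gas of molecules of mass m at temperature T.
# Nitrogen (N2) at 300 K is an assumed example.
import math

k = 1.381e-23              # J/K, Boltzmann's constant
m = 28 * 1.661e-27         # kg, approximate mass of an N2 molecule (assumed example)
T = 300.0                  # K

def maxwell(v):
    """Probability density for a molecule to have speed v (in m/s)."""
    a = m / (2 * math.pi * k * T)
    return 4 * math.pi * a**1.5 * v**2 * math.exp(-m * v**2 / (2 * k * T))

v_peak = math.sqrt(2 * k * T / m)      # most probable speed, the peak of the curve
print(f"most probable speed ~ {v_peak:.0f} m/s")
for v in (100, 400, 800, 1600):
    print(f"f({v:4d} m/s) = {maxwell(v):.2e}")
```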

 

 

 

 

Ludwig Boltzmann and Statistical Mechanics

In the 1870s, Ludwig Boltzmann (1844−1906) – inspired by Maxwell’s kinetic theory – made a theoretical pronouncement.

  • He presented a general probability distribution law called the canonical or orthodox distribution which could be applied to any collection of entities which have freedom of movement, are independent of each other and interact randomly.
  • He formalized the theorem of the equipartition of energy.

This means that the energy will be shared equally among all degrees of freedom if the system reaches thermal equilibrium.

  • He gave a new interpretation of the Second Law.

 

When energy in a system is degraded (as Clausius said in 1850), the atoms in the system become more disordered and the entropy increases. But a measure of the disorder can be made. It is the probability of the particular state of the system – defined as the number of ways it can be assembled from its collection of atoms.

More precisely, the entropy is given by :

          S = k log W

where k is a constant (now called Boltzmann’s constant) and W is the probability that a particular arrangement of atoms will occur. This work made Boltzmann the creator of statistical mechanics, a method in which the observed properties of macroscopic bodies are related to the behaviour of their constituent microscopic parts.
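A toy illustration of S = k log W (an assumed two-state example, not from the original text): take N coins as stand-ins for atoms and count the number of ways W of arranging n heads among them.

```python
# Boltzmann entropy S = k ln W for a toy system of N two-state "atoms" (coins).
# W is the number of ways of arranging n heads among N coins. Assumed example.
import math

k = 1.381e-23    # J/K, Boltzmann's constant

def entropy(N, n):
    W = math.comb(N, n)              # number of microscopic arrangements
    return k * math.log(W)

N = 100
for n in (0, 10, 50):                # perfectly ordered -> increasingly disordered
    print(f"n = {n:3d}:  W = {math.comb(N, n):.3e}  S = {entropy(N, n):.3e} J/K")
```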

Quantum Mechanics Series – 3: What is Thermodynamics?


What is Thermodynamics ?

The word means the movement of heat, which always flows from a body of higher temperature to a body of lower temperature, until the temperatures of the two bodies are the same. This is called thermal equilibrium.

Heat is correctly described as a form of vibration…

 

The First Law of Thermodynamics

 

Steam Engines

James Watt (1736–1819), a Scot, had built a working steam engine in the 18th century.

 

 

 

 

Soon after, the son of a Manchester brewer, James Prescott Joule (1818−89), showed that a quantity of heat can be equated to a certain amount of mechanical work.

Then somebody said…. “since heat can be converted into work, it must be a form of energy” (the Greek word energy means “containing work”) But it wasn’t until 1847 that a respectable academic scientist, Hermann von Helmholtz (1821-94), stated…..

Helmholtz

Whenever a certain amount of energy disappears in one place, an equivalent amount must appear elsewhere in the same system.

 

 

 

 

 

 

This is called the law of the conservation of energy.  It remains a foundation of modern physics, unaffected by modern theories.

 

Rudolf Clausius: Two Laws

In 1850, the German physicist Rudolf Clausius (1822-88) published a paper in which he called the energy conservation law The First Law of Thermodynamics. At the same time, he argued that there was a second principle of thermodynamics in which there is always some degradation of the total energy in the system, some non-useful heat in a thermodynamic process.

Clausius introduced a new concept called entropy – defined in terms of the heat transferred from one body to another.

Entropy is a measure of the disorder of a system. The entropy of an isolated system always increases, reaching a maximum at thermal equilibrium, i.e. when all bodies in the system are at the same temperature.

Quantum Mechanics Series – 2: Solvay Conference 1927 – Formulation of Quantum Theory

The Solvay Conference 1927 – Formulation of Quantum Theory

A few years before the outbreak of World War I, the Belgian industrialist Ernest Solvay (1838-1922) sponsored the first of a series of international physics meetings in Brussels. Attendance at these meetings was by special invitation, and participants – usually limited to about 30 – were asked to concentrate on a pre-arranged topic.

The first five meetings, held between 1911 and 1927, chronicled in a most remarkable way the development of 20th century physics. The 1927 gathering was devoted to quantum theory and was attended by no fewer than nine theoretical physicists who had made fundamental contributions to the theory. Each of the nine would eventually be awarded a Nobel Prize for his contribution.

This photograph of the 1927 Solvay Conference is a good starting point for introducing the principal players in the development of the most modern of all physical theories. Future generations will marvel at the compressed time scale and geographical proximity which brought these giants of quantum physics together in 1927.

 

There is hardly any period in the history of science in which so much has been clarified by so few in so short a time.

Look at the sad-eyed Max Planck (1858−1947) in the front row next to Marie Curie (1867−1934). With his hat and cigar, Planck appears drained of vitality, exhausted after years of trying to refute his own revolutionary ideas about matter and radiation.

 

A few years later, in 1905, a young patent clerk in Switzerland named Albert Einstein (1879−1955) generalized Planck’s notion.

 

That’s Einstein in the front row centre, sitting stiffly in his formal attire. He had been brooding for over twenty years about the quantum problem without any real insight since his early 1905 paper. All the while, he continued to contribute to the theory’s development and endorsed original ideas of others with uncanny confidence. His greatest work – the General Theory of Relativity – which had made him an international celebrity, was already a decade behind him.

 

In Brussels, Einstein had debated the bizarre conclusions of the quantum theory with its most respected and determined proponent, the “great Dane” Niels Bohr (1885-1962). Bohr – more than anyone else – would become associated with the struggle to interpret and understand the theory. At the far right of the photo, in the middle row, he is relaxed and confident – the 42-year-old professor at the peak of his powers.

In the back row behind Einstein, Erwin Schrodinger (1887−1961) looks conspicuously casual in his sports jacket and bow tie. To his left but one are the “young Turks”, Wolfgang Pauli (1900−58) and Werner Heisenberg (1901−76) – still in their twenties – and in front of them, Paul Dirac (1902−84), Louis de Broglie (1892−1987), Max Born (1882−1970) and Bohr. These men are today immortalized by their association with the fundamental properties of the microscopic world: the Schrodinger wave equation, the Pauli exclusion principle, the Heisenberg uncertainty relation, the Bohr atom… and so forth.

They were all there – from Planck, the oldest at 69 years, who started it all in 1900 – to Dirac, the youngest at 25 years, who completed the theory in 1928.

The day after this photograph was taken – 30 October 1927 – with the historic exchanges between Bohr and Einstein still buzzing in their minds, the conferees boarded trains at the Brussels Central Station to return to Berlin, Paris, Cambridge, Gottingen, Copenhagen, Vienna and Zurich.

They were taking with them the most bizarre set of ideas ever concocted by scientists. Secretly, most of them probably agreed with Einstein that this madness called the quantum theory was just a step along the way to a more complete theory and would be overthrown for something better, something more consistent with common sense.

 

But how did the quantum theory come about? What experiments compelled these most careful of men to ignore the tenets of classical physics and propose ideas about nature that violated common sense ?

Before we study these experimental paradoxes, we need some background in thermodynamics and statistics which are fundamental to the development of quantum theory.

 

Quantum Mechanics Series – 1: Introduction


Quantum mechanics is one of the most difficult and interesting topics for students at the higher-studies level.

Introducing Quantum Theory…

Just before the turn of the century, physicists were so absolutely certain of their ideas about the nature of matter and radiation that any new concept which contradicted their classical picture would be given little consideration. Scientists believed that they knew almost everything and that not much was left to understand.

     Isaac Newton

      James Clerk Maxwell

 

 

 

 

 

 

 

Not only was the mathematical formalism of Isaac Newton (1642-1727) and James Clerk Maxwell (1831-79) impeccable, but predictions based on their theories had been confirmed by careful, detailed experiments for many years. The Age of Reason had become the age of certainty!

 

Classical Physicists

What is the definition of “classical”?

By classical is meant those late 19th century physicists nourished on an academic diet of Newton’s mechanics and Maxwell’s electromagnetism – the two most successful syntheses of physical phenomena in the history of thought.

Testing theories by observation had been the hallmark of good physics since Galileo (1564-1642). He showed how to devise experiments, make measurements and compare the results with the predictions of mathematical laws.

The interplay of theory and experiment is still the best way to proceed in the world of acceptable science. 

It’s All Proven (and Classical)…

During the 18th and 19th centuries, Newton’s laws of motion had been scrutinized and confirmed by reliable tests.

 

“Fill in the Sixth Decimal Place”

A classical physicist from Glasgow University, the influential Lord Kelvin (1824-1907), spoke of only two dark clouds on the Newtonian horizon.

In June 1894, the American Nobel Laureate Albert Michelson (1852-1931) thought he was paraphrasing Kelvin in a remark which he regretted for the rest of his life.

 

The Fundamental Assumptions of Classical Physics

Classical physicists had built up a whole series of assumptions which focused their thinking and made the acceptance of new ideas very difficult. Here’s a list of what they were sure of about the material world.

(1)     The universe was like a giant machine set in a framework of absolute time and space. Complicated movement could be understood as simple movements of the machine’s inner parts, even if these parts couldn’t be visualized.

(2)     The Newtonian synthesis implied that all motion had a cause. If a body exhibited motion, one could always figure out what was producing the motion. This is simply cause and effect, which nobody really questioned.

(3)     If the state of motion was known at one point – say the present – it could be determined at any other point in the future or even the past. Nothing was uncertain; everything was a consequence of some earlier cause. This was determinism.

(4)     The properties of light are completely described by Maxwell’s electromagnetic wave theory and confirmed by the interference patterns observed in a simple double-slit experiment by Thomas Young in 1802.

(5)     There are two physical models to represent energy in motion : one a particle, represented by an impenetrable sphere like a billiard ball, and the other a wave, like that which rides towards the shore on the surface of the ocean. They are mutually exclusive, i.e. energy must be either one or the other.

(6)     It was possible to measure to any degree of accuracy the properties of a system, like its temperature or speed. Simply reduce the intensity of the observer’s probing or correct for it with a theoretical adjustment. Atomic systems were thought to be no exception.

 

Classical physicists believed all these statements to be absolutely true. But all six assumptions would eventually prove to be in doubt. The first to know this were the group of physicists who met at the Metropole Hotel in Brussels on 24 October 1927.

 

Effect of Temperature on the Rate of a Chemical Reaction

Factors affecting Reaction rate

The rate of a chemical reaction depends on the rate of encounter between the molecules of the reactants, which in turn depends on the following factors.

(1)     Effect of temperature on reaction rate : The rate of a chemical reaction generally increases with increasing temperature (see the Arrhenius sketch after this list).

(2)     Nature of reactants : (i) Reactions involving polar and ionic substances, including proton-transfer reactions, are usually very fast. On the other hand, reactions in which bonds are rearranged or electrons transferred are slow.

(ii)    Oxidation-reduction reactions, which involve the transfer of electrons, are also slow compared with ionic reactions.

(iii)   Substitution reactions are relatively much slower.

(3)     pH of the medium : The rate of a reaction taking place in aqueous solution often depends upon the H+ ion concentration. Some reactions become faster on increasing the H+ ion concentration while others become slower.

(4)     Concentration of reactants : The rate of a chemical reaction is directly proportional to the concentration of the reactants; that is, the rate of reaction decreases as the concentration decreases.

(5)     Surface area of reactant : The larger the surface area of the reactant, the greater the probability of collisions between the surrounding molecules and the surface of the reactant particles, and thus the higher the rate of reaction.

(6)     Presence of catalyst : The function of a catalyst is to lower the activation energy. The greater the decrease in activation energy caused by the catalyst, the higher the reaction rate. In the presence of a catalyst, the reaction follows a path of lower activation energy. Under this condition, a large number of reacting molecules are able to cross the energy barrier and thus the rate of reaction increases. The figure shows how the activation energy is lowered in the presence of a catalyst.

(7)     Effect of sunlight : There are many chemical reactions whose rates are influenced by radiation, particularly by ultraviolet and visible light. Such reactions are called photochemical reactions. For example, photosynthesis, photography, blueprinting, the photochemical synthesis of compounds, etc.

H2 + Cl2  \underrightarrow { \quad Sunlight\quad (h\nu)\quad } 2HCl : The radiant energy initiates the chemical reaction by supplying the necessary activation energy required for the reaction.
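The temperature effect in point (1) and the catalyst effect in point (6) are usually summarized by the Arrhenius equation, k = A e^(−Ea/RT). A minimal sketch (the pre-exponential factor and activation energies below are assumed, purely for illustration):

```python
# Arrhenius equation k = A * exp(-Ea / (R T)). The values of A and Ea are assumed.
import math

R = 8.314            # J K^-1 mol^-1
A = 1.0e11           # s^-1, assumed pre-exponential factor

def rate_constant(Ea, T):
    return A * math.exp(-Ea / (R * T))

Ea_uncat = 75_000.0  # J/mol, assumed activation energy without a catalyst
Ea_cat = 50_000.0    # J/mol, assumed (lower) activation energy with a catalyst

# Raising the temperature by 10 K roughly doubles or triples the rate...
print(rate_constant(Ea_uncat, 308) / rate_constant(Ea_uncat, 298))
# ...while lowering Ea with a catalyst gives a much larger speed-up at the same T.
print(rate_constant(Ea_cat, 298) / rate_constant(Ea_uncat, 298))
```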

 

Rate law, Law of mass action and Rate constant

(1)     Rate law : The actual relationship between the concentration of reacting species and the reaction rate is determined experimentally and is given by the expression called rate law.

For any hypothetical reaction, aA + bB → cC + dD

Rate law expression may be, rate = k[A]^a[B]^b

where a and b are constants, i.e. the powers of the concentrations of the reactants A and B respectively on which the rate of reaction depends.

(i)    Rate of chemical reaction is directly proportional to the concentration of the reactants.

(ii)     The rate law represents the experimentally observed rate of reaction, which depends upon the slowest step of the reaction.

(iii)    Rate law cannot be deduced from the stoichiometric equation of the reaction. It can be found by experiment only.

(iv)    It may not depend upon the concentration of species which do not appear in the equation for the overall reaction.

(2)     Law of mass action : (Guldberg and Waage, 1864) This law relates the rate of reaction to the active mass or molar concentration of the reactants. According to this law, “At a given temperature, the rate of a reaction at a particular instant is proportional to the product of the active masses of the reactants at that instant, raised to powers which are numerically equal to the numbers of their respective molecules in the stoichiometric equation describing the reaction.”

Active mass = Molar concentration of the substance

    \[=\frac{\text{Number of gram moles of the substance}}{\text{Volume in litres}}=\frac{W/m}{V}=\frac{n}{V}\]

where W is the mass of the substance, m is its molecular mass in grams, n is the number of gram moles and V is the volume in litres.

Consider the following general reaction,

m1A1 + m2A2 + m3A3 → Products

Rate of reaction ∝ [A1]^m1 [A2]^m2 [A3]^m3

(3)     Rate constant : Consider a simple reaction, A → B. If CA is the molar concentration or active mass of A at a particular instant, then  \frac { dx }{ dt } ∝ CA  or   \frac { dx }{ dt } = kCA ; where k is a proportionality constant, called the velocity constant, rate constant or specific reaction rate.

At a fixed temperature, if CA = 1, then Rate =  \frac { dx }{ dt } = k

“Rate of a reaction at unit concentration of reactants is called rate constant.”

(i)      The value of the rate constant depends on the nature of the reactants, the temperature and the catalyst.

(It is independent of concentration of the reactants)

(ii)     Unit of rate constant :  Unit of rate constant = \left[ \frac { litre }{ mol } \right] ^{ n-1 } × sec-1  or   \left[ \frac { mol }{ litre } \right] ^{ 1-n } × sec-1

Where n = order of reaction
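A small sketch of these rules (the rate constant and concentrations below are assumed values, used only to show the units working out):

```python
# Units of the rate constant for a reaction of overall order n, following
# (mol/litre)^(1-n) sec^-1, and one assumed rate-law evaluation.
def k_units(n):
    if n == 1:
        return "s^-1"
    return f"(mol/L)^{1 - n} s^-1"    # e.g. L mol^-1 s^-1 for n = 2

for n in (0, 1, 2, 3):
    print(f"order {n}: k has units {k_units(n)}")

# Assumed example: rate = k [A] [B]^2 (overall order 3)
k3 = 4.0e-2                  # L^2 mol^-2 s^-1 (assumed)
conc_A, conc_B = 0.10, 0.20  # mol/L (assumed)
print(f"rate = {k3 * conc_A * conc_B**2:.2e} mol L^-1 s^-1")
```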

Difference between Rate law and Law of mass action

Rate law : It is an experimentally observed law. It depends on the concentration terms on which the rate of reaction actually depends. For the reaction aA + bB → Products, Rate = k[A]^m[B]^n.

Law of mass action : It is a theoretical law, based upon the stoichiometry of the equation. For the reaction aA + bB → Products, Rate = k[A]^a[B]^b.

 

Difference between Rate of reaction and Rate constant

Rate of reaction : It is the speed with which reactants are converted into products. It is measured as the rate of decrease of the concentration of reactants, or the rate of increase of the concentration of products, with time. It depends upon the initial concentration of the reactants.

Rate constant : It is a proportionality constant. It is equal to the rate of reaction when the concentration of each of the reactants is unity. It is independent of the initial concentration of the reactants and has a constant value at a fixed temperature.

 

Acid-Base & Solvents

Acids,  Bases and Solvent Systems.

Hands up all those who would like to see the Bartley Lab adorned with pH 1-14 indicator colours.

 

Acids

An acid (from the Greek oxein then Latin acidus/acére meaning sour) is a chemical substance whose aqueous solutions were characterized by a sour taste, the ability to turn blue litmus red, and the ability to react with bases and certain metals (like calcium) to form salts. Aqueous solutions of acids have a pH smaller than 7. The lower the pH, the higher the acidity and thus the higher the concentration of hydrogen ions in the solution (using the Arrhenius or Brønsted-Lowry definition).

Some notes on acids-bases, pH and the use of logarithms in calculations are available.

There are a number of common definitions for acids, for example the Arrhenius, the Brønsted-Lowry, and the Lewis definitions. The Arrhenius definition defines acids as substances which increase the concentration of hydrogen ions (H+) when dissolved in water. The Brønsted-Lowry definition is an expansion of this and defines an acid as a substance which can act as an H+ donor. By this definition, any compound which can be easily deprotonated can be considered an acid. Examples include alcohols and amines which contain O-H or N-H fragments. A Lewis acid is a substance which can accept a pair of electrons to form a covalent bond. Examples of Lewis acids include all metal cations, and electron-deficient molecules such as boron trifluoride and aluminium trichloride.

Common examples of acids include hydrochloric acid (a solution of hydrogen chloride gas in water, this is the acid found in the stomach that activates digestive enzymes), acetic acid (vinegar is a dilute solution, generally under 5%), sulfuric acid (used in wet-cell car batteries), and tartaric acid (a solid used in baking). As these examples show, acids can be solutions or pure substances, and can be derived from solids, liquids, or gases.

HCl(aq) + NaOH(aq) ⇄ NaCl + H2O
HOAc(aq) + NaOH(aq) ⇄ NaOAc + H2O
H2SO4(aq) + 2NaOH(aq) ⇄ Na2SO4 + 2H2O
HO2CCH(OH)CH(OH)CO2H(aq) + 2NaOH(aq) ⇄ Na2Tartrate + 2H2O

Bases

The “modern” concept of a base in chemistry stems from Guillaume-François Rouelle, who in 1754 suggested that a base was a substance which reacted with acids “by giving it a concrete base or solid form” (as a salt). In addition, bases give aqueous solutions which are slippery to the touch, taste bitter, change the colour of indicators (e.g., turn red litmus paper blue), and promote certain chemical reactions (base catalysis). Examples of bases are the hydroxides of the alkali and alkaline earth metals (NaOH, Ca(OH)2, etc.).

For a substance to be classified as an Arrhenius base, it must produce hydroxide ions in solution. In order to do so, Arrhenius believed the base must contain hydroxide in the formula. This made the Arrhenius model limited, as it did not readily explain the basic properties of aqueous solutions of ammonia (NH3.aq, often written as NH4OH to better fit the Arrhenius model) or its organic derivatives (amines). In the more general Brønsted-Lowry acid-base theory, a base is a substance that can accept hydrogen ions (H+). In the Lewis model, a base is an electron pair donor.

Oxides and Hydroxides

An early classification of substances arose from the differences observed in their solubility in acidic and basic solutions. This led to the classification of oxides and hydroxides as being either acidic or basic. Acidic oxides or hydroxides either reacted with water to produce an acidic solution or were soluble in aqueous base. Basic oxides and hydroxides either reacted with water to produce a basic solution or readily dissolved in aqueous acids. The diagram below shows there is a strong correlation between the acidic or basic character of oxides (ExOy) and the position of the element, E, in the periodic table.

Oxides of metallic elements are generally basic oxides, and oxides of nonmetallic elements acidic oxides. Take for example, the reactions with water of calcium oxide, a metallic oxide, and carbon dioxide, a nonmetallic oxide:

CaO(s) + H2O(l) → Ca(OH)2
CO2(g) + H2O(l) → H2CO3(aq)

Calcium oxide reacts with water to produce a basic solution of calcium hydroxide, whereas carbon dioxide reacts with water to produce a solution of carbonic acid.

 

There is a gradual transition from basic oxides to acidic oxides from the lower left to the upper right in the periodic table.

The basicity of the oxides increases with increasing atomic number down a group:

BeO < MgO < CaO < SrO < BaO

Note as well that acidity increases with increasing oxidation state of the element:

MnO < Mn2O3 < MnO2 < Mn2O7

in keeping with the increase in covalency.

Oxides of intermediate character, called amphoteric oxides, are located along the diagonal line between the two extremes. Amphoteric species are molecules or ions that can react as an acid as well as a base. The word has Greek origins, amphoteroi (άμφότεροι) meaning “both”. Many metals (such as copper, zinc, tin, lead, aluminium, and beryllium) form amphoteric oxides or hydroxides. Amphoterism depends on the oxidation state of the oxide.

For example, zinc oxide (ZnO) reacts with both acids and with bases:

In acid: ZnO + 2H+ → Zn2+ + H2O

In base: ZnO + 2OH− + H2O → [Zn(OH)4]2−

This reactivity can be used to separate different cations, such as zinc(II), which dissolves in base, from manganese(II), which does not dissolve in base.

Aluminium hydroxide is another amphoteric species:

As a base (neutralizing an acid): Al(OH)3 + 3HCl → AlCl3 + 3H2O
As an acid (neutralizing a base): Al(OH)3 + NaOH → Na[Al(OH)4]

Acid-Base theories and concepts

Arrhenius acids and bases 

Although the term proton is often used for H+, this should really be reserved for H (protium) not D (deuterium) or T (tritium). The more general term, hydron covers all isotopes of hydrogen.

The Swedish chemist Svante Arrhenius attributed the properties of acidity to hydrogen ions (H+) in 1884. An Arrhenius acid is a substance that, when added to water, increases the concentration of H+ ions in the water. Note that chemists often write H+(aq) and refer to the hydrogen ion when describing acid-base reactions, but the free hydrogen nucleus does not exist alone in water; it exists in a hydrated form which for simplicity is often written as the hydronium (hydroxonium) ion, H3O+. Thus, an Arrhenius acid can also be described as a substance that increases the concentration of hydronium ions when added to water. This definition stems from the equilibrium dissociation (self-ionization) of water into hydronium and hydroxide (OH−) ions:

H2O(l) + H2O(l) ⇌ H3O+(aq) + OH−(aq)

Kw is defined as [H+][OH−], and the value of Kw varies with temperature, as shown in the table below, where at 25 °C Kw is approximately 1.0 × 10-14, i.e. pKw = 14.

Water temperature | Kw / 10-14 | pKw
0 °C | 0.112 | 14.95
25 °C | 1.023 | 13.99
50 °C | 5.495 | 13.26
75 °C | 19.95 | 12.70
100 °C | 56.23 | 12.25

 

In pure water the majority of molecules are H2O, but the molecules are constantly dissociating and re-associating, and at any time a small number of the molecules (about 1 in 10^7) are hydronium and an equal number are hydroxide. Because the numbers are equal, pure water is neutral (not acidic or basic) and has an electrical conductivity of 5.5 microsiemens per metre (μS m-1). For comparison, sea water’s conductivity is about one million times higher, 5 S m-1.
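A short sketch using the Kw values from the table above: in pure (neutral) water [H3O+] = [OH−] = √Kw, so the pH of neutral water is 7 only near 25 °C.

```python
# pH of neutral water at several temperatures, using the Kw values quoted above.
# In pure water [H3O+] = [OH-] = sqrt(Kw).
import math

Kw = {0: 0.112e-14, 25: 1.023e-14, 50: 5.495e-14, 100: 56.23e-14}

for T, kw in Kw.items():
    h = math.sqrt(kw)                          # mol/L
    print(f"{T:3d} degC: [H3O+] = {h:.2e} M, pH = {-math.log10(h):.2f}")
```

Note that hot water is still neutral ([H3O+] = [OH−]) even though its pH drops below 7.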

An Arrhenius base, on the other hand, is a substance which increases the concentration of hydroxide ions when dissolved in water, hence decreasing the concentration of hydronium ions.

To qualify as an Arrhenius acid, upon the introduction to water, the chemical must either cause, directly or otherwise:

  • an increase in the aqueous hydronium concentration, or
  • a decrease in the aqueous hydroxide concentration.

Conversely, to qualify as an Arrhenius base, upon the introduction to water, the chemical must either cause, directly or otherwise:

  • a decrease in the aqueous hydronium concentration, or
  • an increase in the aqueous hydroxide concentration.

The definition is expressed in terms of an equilibrium expression:

acid + base ⇌ conjugate base + conjugate acid.

With an acid, HA, the equation can be written symbolically as:

HA + B ⇌ A− + HB+

The equilibrium sign, ⇌, is used because the reaction can occur in both forward and backward directions. The acid, HA, can lose a hydron to become its conjugate base, A−. The base, B, can accept a hydron to become its conjugate acid, HB+. Most acid-base reactions are fast, so that the components of the reaction are usually in dynamic equilibrium with each other.

Brønsted-Lowry acids and bases

While the Arrhenius concept is useful for describing many reactions, it has limitations. In 1923, chemists Johannes Nicolaus Brønsted and Thomas Martin Lowry independently recognized that acid-base reactions involve the transfer of a hydron. A Brønsted-Lowry acid (or simply Brønsted acid) is a species that donates a hydron to a Brønsted-Lowry base. The Brønsted-Lowry acid-base theory has several advantages over the Arrhenius theory. Consider the following reactions of acetic acid (CH3COOH):

CH3COOH + H2O ⇌ CH3COO− + H3O+

CH3COOH + NH3 ⇌ CH3COO− + NH4+

Both theories easily describe the first reaction: CH3COOH acts as an Arrhenius acid because it acts as a source of H3O+ when dissolved in water, and it acts as a Brønsted acid by donating a hydron to water. In the second example CH3COOH undergoes the same transformation, in this case donating a hydron to ammonia (NH3), but it cannot be described using the Arrhenius definition of an acid because the reaction does not produce hydronium ions.

Lewis acids and bases

A third concept was proposed in 1923 by Gilbert N. Lewis which includes reactions with acid-base characteristics that do not involve a hydron transfer. A Lewis acid is a species that reacts with a Lewis base to form a Lewis adduct. The Lewis acid accepts a pair of electrons from another species; in other words, it is an electron pair acceptor. Brønsted acid-base reactions involve hydron transfer reactions while Lewis acid-base reactions involve electron pair transfers. All Brønsted acids are Lewis acids, but not all Lewis acids are Brønsted acids.

BF3 + F− ⇌ BF4−

NH3 + H+ ⇌ NH4+ 

In the first example BF3 is a Lewis acid since it accepts an electron pair from the fluoride ion. This reaction cannot be described in terms of the Brønsted theory because there is no hydron transfer. The second reaction can be described using either theory. A hydron is transferred from an unspecified Brønsted acid to ammonia, a Brønsted base; alternatively, ammonia acts as a Lewis base and transfers a lone pair of electrons to form a bond with a hydrogen ion.

Hard and Soft Acids and Bases, Pearson’s HSAB

This theory proposes that soft acids react faster and form stronger bonds with soft bases, whereas hard acids react faster and form stronger bonds with hard bases, all other factors being equal. The classification in the original work was largely based on equilibrium constants for the reaction of two Lewis bases competing for a Lewis acid.

Hard acids and hard bases tend to have the following characteristics:

  • small atomic/ionic radius
  • high oxidation state
  • low polarizability
  • high electronegativity (bases)

Examples of hard acids are: H+, light alkali ions (Li+ through K+ are considered to have small ionic radii), Ti4+, Cr3+, Cr6+, BF3. Examples of hard bases are: OH−, F−, Cl−, NH3, CH3COO−, CO32−. The affinity of hard acids and hard bases for each other is mainly ionic in nature.
Soft acids and soft bases tend to have the following characteristics:

  • large atomic/ionic radius
  • low or zero oxidation state bonding
  • high polarizability
  • low electronegativity

Examples of soft acids are: CH3Hg+, Pt2+, Pd2+, Ag+, Au+, Hg2+, Hg22+, Cd2+, BH3. Examples of soft bases are: H−, R3P, SCN−, I−. The affinity of soft acids and bases for each other is mainly covalent in nature.

HSAB acids and bases

 

This provides a qualitative approach to looking at the reactions of metal ions with various ligands since, from the diagram above, it is expected that whereas Al(III) and Ti(III) would prefer to react with O-species over S-species, the reverse would be predicted for Hg(II).

Lux-Flood acid-base definition

This acid-base theory was a revival of the oxygen theory of acids and bases, proposed by German chemist Hermann Lux in 1939 and further improved by Håkon Flood circa 1947. It is still used in modern geochemistry and for the electrochemistry of molten salts. This definition describes an acid as an oxide ion (O2-) acceptor and a base as an oxide ion donor. For example:

MgO (base) + CO2 (acid) ⇌ MgCO3

CaO (base) + SiO2 (acid) ⇌ CaSiO3

NO3− (base) + S2O72− (acid) ⇌ NO2+ + 2 SO42−

 

Usanovich acid-base definition

Mikhail Usanovich developed a general theory that does not restrict acidity to hydrogen-containing compounds, and his approach, published in 1938, was even more general than the Lewis theory. Usanovich’s theory can be summarized as defining an acid as anything that accepts negative species, anions or electrons or donates positive ones, cations, and a base as the reverse. This definition could even be applied to the concept of redox reactions (oxidation-reduction) as a special case of acid-base reactions.

Some examples of Usanovich acid-base reactions include:

Na2O (base) + SO3 (acid) → 2Na+ + SO42− (species exchanged: anion O2−)
3(NH4)2S (base) + Sb2S5 (acid) → 6NH4+ + 2SbS43− (species exchanged: anion S2−)
2Na (base) + Cl2 (acid) → 2Na+ + 2Cl− (species exchanged: electron)

A comparison of the above definitions of Acids and Bases shows that the Usanovich concept encompasses all of the others but some feel that because of this it is too general to be useful.

Solvated H+ ions

The hydron (a completely free or “naked” hydrogen atomic nucleus) is far too reactive to exist in isolation and readily hydrates in aqueous solution. The simplest hydrated form of the hydrogen cation, the hydronium (hydroxonium) ion H3O+(aq), is a key object of Arrhenius’ definition of acid. Other “simple” hydrated forms include the Zundel cation H5O2+, which is formed from a hydron and two water molecules, and the Eigen cation H9O4+, formed from a hydronium ion and three water molecules. The hydron itself is crucial in the more general Brønsted-Lowry acid-base theory, which extends the concept of acid-base chemistry beyond aqueous solutions. Both of these complexes represent ideal structures in a more general hydrogen-bonded network. A freezing-point depression study determined that the mean hydration ion in cold water is on average approximately H3O+(H2O)6, i.e. each hydronium ion is solvated by about 6 water molecules. Some hydration structures are quite large, for example the H3O+·20H2O “magic number” structure (called magic because of its increased stability with respect to hydration structures involving a comparable number of water molecules).

 

hydronium ion (H3O+) Zundel cation (H5O2+) Eigen cation (H9O4+)

 

extended hydronium ion (H3O+·20H2O) H vibrations; simulation by H. Rzepa

 

In 1806 Theodor Grotthuss proposed a theory of water conductivity. He envisioned the electrolytic reaction as a sort of “bucket line” where each oxygen atom simultaneously passes and receives a single hydrogen atom. It was an astonishing theory to propose at the time, since the water molecule was thought to be OH not H2O and the existence of ions was not fully understood. The theory became known as the Grotthuss mechanism. The transport mechanism is now thought to involve the inter-conversion between the Eigen and Zundel solvation structures, Eigen to Zundel to Eigen (E-Z-E). 

Water

Everything you wanted to know about water and more.

Water covers 71% of the Earth’s surface and is vital for all known forms of life. On Earth, 96.5% of the planet’s water is found in seas and oceans, 1.7% in groundwater, 1.7% in glaciers and the ice caps of Antarctica and Greenland, a small fraction in other large water bodies, and 0.001% in the air as vapour, clouds (formed from solid and liquid water particles suspended in air), and precipitation. 

Only 2.5% of the Earth’s water is freshwater, and 98.8% of that water is in ice and groundwater. Less than 0.3% of all freshwater is in rivers, lakes, and the atmosphere, and an even smaller amount of the Earth’s freshwater (0.003%) is contained within biological bodies and manufactured products.

The major chemical and physical properties of water are:

  • Water is a liquid at standard temperature and pressure. It is tasteless and odourless. The intrinsic colour of water and ice is a very slight blue hue, although both appear colourless in small quantities. Water vapour is essentially invisible as a gas.
  • Water is the only substance occurring naturally in all three phases (solid, liquid, and gas) on the Earth’s surface.
  • Water is transparent in the visible electromagnetic spectrum. Thus aquatic plants can live in water because sunlight can reach them. Infrared light is strongly absorbed by the hydrogen-oxygen or OH bonds.
  • Since the water molecule is not linear and the oxygen atom has a higher electronegativity than hydrogen atoms, the oxygen atom carries a partial negative charge, whereas the hydrogen atoms have partial positive charges. As a result, water is a polar molecule with an electrical dipole moment.
  • Water can form an unusually large number of intermolecular hydrogen bonds (four) for a molecule of its size. This, together with its polarity, leads to strong attractive forces between water molecules, giving rise to water’s high surface tension and capillary forces. Capillary action refers to the tendency of water to move up a narrow tube against the force of gravity, a property relied upon by all vascular plants, such as trees.
  • The boiling point of water (like that of all other liquids) depends on the barometric pressure. For example, at the top of Mount Everest water boils at about 68 °C, compared to 100 °C at sea level at a similar latitude (latitude modifies atmospheric pressure slightly); a rough estimate of this pressure effect is sketched just after this list. Conversely, water deep in the ocean near geothermal vents can reach temperatures of hundreds of degrees and remain liquid.
  • Water has a high specific heat capacity (4181.3 J kg-1 K-1) as well as a high heat of vaporization (40.65 kJ mol-1), both of which result from the extensive hydrogen bonding between its molecules. These two unusual properties allow water to moderate the Earth’s climate by buffering large fluctuations in temperature.
  • Solid ice has a density of 917 kg m-3. The maximum density of liquid water occurs at 3.98 °C where it is 1000 kg m-3.
  • Elements that are more electropositive than hydrogen, such as lithium, sodium, calcium, potassium and caesium, displace hydrogen from water, forming hydroxides. The hydrogen given off is a flammable gas and therefore dangerous, and the reaction of water with the more electropositive of these elements can be violently explosive, which is why such metals are often stored under oil.
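
The pressure dependence of the boiling point can be estimated with the Clausius-Clapeyron relation, using the heat of vaporization quoted above. The sketch below is a minimal illustration rather than a definitive calculation: it assumes a constant ΔHvap and a summit pressure of roughly 34 kPa, neither of which is taken from the text.

```python
# Minimal sketch: estimate water's boiling point at reduced pressure from the
# Clausius-Clapeyron relation, assuming a constant enthalpy of vaporization.
import math

R = 8.314         # gas constant, J mol^-1 K^-1
dH_vap = 40.65e3  # enthalpy of vaporization of water, J mol^-1 (value quoted above)
T1 = 373.15       # normal boiling point of water, K (100 °C at 101.325 kPa)
P1 = 101.325e3    # standard atmospheric pressure, Pa
P2 = 34.0e3       # assumed pressure near the summit of Mount Everest, Pa

# Clausius-Clapeyron: ln(P2/P1) = -(dH_vap/R) * (1/T2 - 1/T1), solved for T2
inv_T2 = 1.0 / T1 - R * math.log(P2 / P1) / dH_vap
T2 = 1.0 / inv_T2

print(f"Estimated boiling point at {P2 / 1e3:.0f} kPa: {T2 - 273.15:.0f} °C")
# Prints roughly 71 °C, in the same range as the ~68 °C figure quoted above;
# the difference reflects the assumed pressure and the constant-dH approximation.
```

With these assumptions the estimate lands within a few degrees of the quoted figure, which is about as close as such a simple treatment can be expected to get.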

Most known pure substances display simple behaviour when they are cooled: they shrink. Liquids contract as they are cooled because their molecules move more slowly and are less able to overcome the attractive intermolecular forces drawing them closer together. Once the freezing temperature is reached, the substances solidify, causing them to contract even more, because crystalline solids are usually tightly packed.

Water, however, has the anomalous property of becoming less dense when it is cooled to its solid form, ice.

When liquid water is cooled, it initially contracts as expected, until a temperature of 3.98 °C is reached (~4 °C). After that, it expands slightly until it reaches the freezing point, and then when it freezes, it expands by approximately 9%.
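
As a quick consistency check, the densities quoted earlier reproduce this expansion figure, since the volume ratio of ice to liquid water is the inverse of the density ratio:

    \[\frac{{{V}_{\text{ice}}}}{{{V}_{\text{water}}}}=\frac{{{\rho }_{\text{water}}}}{{{\rho }_{\text{ice}}}}=\frac{1000\ \text{kg }{{\text{m}}^{-3}}}{917\ \text{kg }{{\text{m}}^{-3}}}\approx 1.09\]

i.e. an increase in volume of roughly 9% on freezing.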

Just above the freezing point, the water molecules begin to locally arrange into ice-like structures with an extended hydrogen bonded network. This creates some “openness” in the liquid water, accounting for the decrease in its density. This is in opposition to the usual tendency for cooling to increase the density. At 3.98 °C these opposing tendencies cancel out, producing the density maximum. 

Since water expands by about 9% on freezing, ice is less dense than liquid water and floats on it, as in icebergs. This is fortunate: in colder climates where bodies of water are susceptible to freezing, if the water turned completely solid during the winter it would kill all the life within it.

The extended structures of liquid water and of solid ice, seen in the models below, explain this variation of density with temperature.

 

[Figures: extended structure of liquid water; extended structure of solid ice]

Solvents

A solvent (from the Latin solvo-, “I loosen, untie, I solve”) is a substance that dissolves a solute (a chemically different liquid, solid or gas), resulting in a solution. A solvent is usually a liquid but can be a solid or a gas. The maximum quantity of solute that can dissolve in a specific volume of solvent varies with temperature.

Although many inorganic reactions take place in aqueous solution, water is not always the most suitable solvent; some reagents react violently or decompose in water (e.g. the alkali metals) and non-polar molecules are often insoluble in water.

Solvents can be broadly classified into two categories: polar and non-polar. Generally, the relative permittivity (εr, formerly called the dielectric constant) of a solvent provides a rough measure of its polarity. It can be thought of as the solvent’s ability to insulate charges from one another, and it acts as a reasonable predictor of the solvent’s ability to dissolve common ionic compounds, such as salts. The strong polarity of water is indicated by its relative permittivity of 88 at 0 °C, compared with 1 (by definition) for a vacuum. Solvents with a relative permittivity of less than 15 are generally considered to be non-polar.

    \[\text{Coulombic potential energy}\propto \frac{{{q}_{1}}{{q}_{2}}}{4\pi {{\varepsilon }_{0}}{{\varepsilon }_{r}}r}\]

A relative permittivity of 88 is a ‘high’ value, and the expression for the Coulombic potential energy shows that the interaction between two point charges (or two ions) in aqueous solution is reduced by a factor of εr compared with that in a vacuum. A dilute aqueous solution of a salt can therefore be treated as containing well-separated, non-interacting ions.
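
To make this screening effect concrete, the following minimal sketch compares the Coulomb potential energy of a Na+/Cl- ion pair in a vacuum and in water; the 0.5 nm separation is an illustrative assumption rather than a value from the text.

```python
# Minimal sketch: Coulomb potential energy of a Na+/Cl- ion pair in vacuum
# versus in water, illustrating how a large relative permittivity screens
# the electrostatic interaction.
import math

e = 1.602e-19     # elementary charge, C
eps0 = 8.854e-12  # vacuum permittivity, F m^-1
r = 0.5e-9        # assumed ion separation, m (illustrative)
eps_r_water = 88  # relative permittivity of water at 0 °C (quoted above)

def coulomb_energy(q1, q2, r, eps_r=1.0):
    """Potential energy of two point charges in a medium of relative permittivity eps_r."""
    return q1 * q2 / (4 * math.pi * eps0 * eps_r * r)

E_vacuum = coulomb_energy(+e, -e, r)
E_water = coulomb_energy(+e, -e, r, eps_r_water)

print(f"vacuum: {E_vacuum:.2e} J, water: {E_water:.2e} J, ratio: {E_vacuum / E_water:.0f}")
# The attraction in water is ~88 times weaker than in vacuum, which is why
# the ions in a dilute salt solution behave as well-separated, essentially
# non-interacting charges.
```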

[Figures: relative permittivity (dielectric constant); NaCl dissolved in water]

 

The polarity, dipole moment, polarizability and hydrogen-bonding ability of a solvent determine what types of compound it is able to dissolve and with what other solvents or liquid compounds it is miscible. Generally, polar solvents dissolve polar compounds best and non-polar solvents dissolve non-polar compounds best: “like dissolves like”. Strongly polar compounds such as sugars (e.g. sucrose) or ionic compounds such as inorganic salts (e.g. table salt) dissolve only in very polar solvents like water, while strongly non-polar compounds such as oils or waxes dissolve only in very non-polar organic solvents like hexane. Similarly, water and hexane (or vinegar and vegetable oil) are not miscible with each other and will quickly separate into two layers even after being shaken well.

Solvents with a relative static permittivity greater than 15 (i.e. polar or polarizable) can be further divided into protic and aprotic. Protic solvents solvate anions (negatively charged solutes) strongly via hydrogen bonding. Water is a protic solvent. Aprotic solvents such as acetone or dichloromethane tend to have large dipole moments (separation of partial positive and partial negative charges within the same molecule) and solvate positively charged species via their negative dipole.

Properties Table for some common solvents

| Solvent | Chemical formula | Boiling point / °C | Relative permittivity | Density / g mL-1 | Dipole moment / D |
|---|---|---|---|---|---|
| Non-polar solvents | | | | | |
| Pentane | C5H12 | 36 | 1.84 | 0.626 | 0.00 |
| Hexane | C6H14 | 69 | 1.88 | 0.655 | 0.00 |
| Benzene | C6H6 | 80 | 2.3 | 0.879 | 0.00 |
| Toluene | C6H5-CH3 | 111 | 2.38 | 0.867 | 0.36 |
| Chloroform | CHCl3 | 61 | 4.81 | 1.498 | 1.04 |
| Dichloromethane | CH2Cl2 | 40 | 9.1 | 1.3266 | 1.60 |
| Polar aprotic solvents | | | | | |
| Ethyl acetate | AcO-Et | 77 | 6.02 | 0.894 | 1.78 |
| Acetone | (CH3)2C=O | 56 | 21 | 0.786 | 2.88 |
| Dimethylformamide (DMF) | (CH3)2NCH(=O) | 153 | 38 | 0.944 | 3.82 |
| Acetonitrile (MeCN) | CH3-C≡N | 82 | 37.5 | 0.786 | 3.92 |
| Nitromethane | CH3-NO2 | 100–103 | 35.87 | 1.1371 | 3.56 |
| Propylene carbonate | C4H6O3 | 240 | 64 | 1.205 | 4.9 |
| Polar protic solvents | | | | | |
| Formic acid | H-C(=O)OH | 101 | 58 | 1.21 | 1.41 |
| n-Butanol | CH3-CH2-CH2-CH2-OH | 118 | 18 | 0.810 | 1.63 |
| n-Propanol | CH3-CH2-CH2-OH | 97 | 20 | 0.803 | 1.68 |
| Ethanol | CH3-CH2-OH | 79 | 24.55 | 0.789 | 1.69 |
| Methanol | CH3-OH | 65 | 33 | 0.791 | 1.70 |
| Acetic acid | CH3-C(=O)OH | 118 | 6.2 | 1.049 | 1.74 |
| Water | H-O-H | 100 | 80 | 1.000 | 1.85 |
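
The groupings used in the table follow the rough rules given above: non-polar if the relative permittivity is 15 or below, and otherwise protic or aprotic according to whether the molecule carries an O-H (or N-H) hydrogen. The sketch below simply restates those rules for a few entries from the table; the protic flags are taken from the table’s groupings.

```python
# Minimal sketch: classify a few solvents from the table using the rough rules
# in the text (polar if relative permittivity > 15; protic if the molecule has
# an O-H or N-H hydrogen). The flags below restate the table's groupings.
solvents = {
    # name: (relative permittivity, has an O-H or N-H hydrogen)
    "hexane":       (1.88,  False),
    "acetone":      (21,    False),
    "acetonitrile": (37.5,  False),
    "ethanol":      (24.55, True),
    "water":        (80,    True),
}

def classify(eps_r, protic_hydrogen):
    if eps_r <= 15:
        return "non-polar"
    return "polar protic" if protic_hydrogen else "polar aprotic"

for name, (eps_r, h) in solvents.items():
    print(f"{name:12s} eps_r = {eps_r:<6} -> {classify(eps_r, h)}")
```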

 

Acid and Base effects in non-aqueous solvents.

 

Levelling and differentiating solvents 

When a strong acid is dissolved in water, it fully dissociates to form the hydronium ion (H3O+). For example:

HA + H2O → A- + H3O+ (where “HA” is a strong acid)

No acid can be stronger than H3O+ in H2O. Strong acids can be said to be “levelled” in water.

The same argument applies to bases. In water, OH- is the strongest base. Thus, even though sodium amide (NaNH2) is an exceptionally strong base (pKa of NH3 ≈ 33), in water it is “levelled” and is only as strong as sodium hydroxide.

In a differentiating solvent, acids dissociate to varying degrees and thus have different strengths. In a levelling solvent, acids become completely dissociated and are thus of the same strength. A weakly basic solvent has less tendency than a strongly basic one to accept a hydron. Similarly a weak acid has less tendency to donate hydrons than a strong acid.

All acids tend to become indistinguishable in strength when dissolved in strongly basic solvents, owing to the greater affinity of strong bases for hydrons (“levelling”).

A strong acid, such as perchloric acid, exhibits more strongly acidic properties than a weak acid, such as acetic acid, when both are dissolved in a weakly basic solvent (“differentiation”).

For acids, strong bases are levelling solvents and weak bases are differentiating solvents.
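
A minimal numerical illustration of levelling versus differentiation: treating each acid HA with the simple equilibrium Ka = [H3O+][A-]/[HA] and ignoring activity effects, any sufficiently strong acid appears essentially fully dissociated in water regardless of its intrinsic Ka, while weak acids are differentiated. The Ka values for the strong acids below are order-of-magnitude assumptions, not values from the text.

```python
# Minimal sketch: fraction of acid dissociated in water at a given analytical
# concentration C, from Ka = x^2 / (C - x), ignoring activity effects.
import math

def fraction_dissociated(Ka, C):
    """Solve x^2 + Ka*x - Ka*C = 0 for x = [H3O+] and return x / C."""
    x = (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * C)) / 2.0
    return x / C

C = 0.1  # mol L^-1, assumed analytical concentration
acids = [
    ("HCl (strong; Ka ~ 1e7, assumed)",    1e7),
    ("HClO4 (strong; Ka ~ 1e10, assumed)", 1e10),
    ("acetic acid (weak; Ka = 1.8e-5)",    1.8e-5),
]
for name, Ka in acids:
    print(f"{name}: {100 * fraction_dissociated(Ka, C):.2f}% dissociated")
# Both strong acids come out as ~100% dissociated (levelled to H3O+),
# while acetic acid is only ~1% dissociated (differentiated).
```

In water the two strong acids are indistinguishable; only in a more weakly basic (differentiating) solvent would their different intrinsic strengths show up.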

Another classification of solvents is on the basis of hydron interaction:

  1. Protophilic solvents: solvents that have a strong tendency to accept hydrons, e.g., water, alcohols, liquid ammonia.
  2. Protogenic solvents: solvents that have a tendency to produce (donate) hydrons, e.g., water, liquid hydrogen chloride, glacial acetic acid.
  3. Amphiprotic solvents: solvents that can act as both protophilic and protogenic, e.g., water, ammonia, ethyl alcohol.
  4. Aprotic solvents: solvents that neither donate nor accept hydrons, e.g., benzene, carbon tetrachloride, carbon disulfide.

For example, HCl acts as an acid in H2O, as a stronger acid in NH3, as a weak acid in CH3COOH, as an essentially neutral species in benzene, and as a weak base in HF.