How many scientists does it take to discover five elements? More than you might think…

My last post chronicled (see what I did there?) a meandering stroll through all 118 elements in the periodic table. As I read through all the pieces of that thread, I kept wanting to find out more about some of the stories. This is the International Year of the Periodic Table, after all — what better time to go exploring?

But, here’s the thing: 118 is a lot. It took ages even just to collect all the (mostly less than) 280-character tweets together. Elemental stories span the whole of human existence and are endlessly fascinating, but telling all of them in any kind of detail would take a whole book (not a small one, either) and would be a project years in the making. So, how about instead having a look at some notable landmarks? A sort of time-lapse version of elemental history and discovery, if you will…

Carbon

The word “carbon” comes from the Latin “carbo”, meaning coal and charcoal.

Let’s begin the story with carbon: fourth most abundant element in the universe and tenth most abundant in the Earth’s crust (give or take). When the Earth first formed, about 4.54 billion years ago, volcanic activity resulted in an atmosphere that was mostly carbon dioxide. The very earliest forms of life evolved to use carbon dioxide through photosynthesis. Carbon-based compounds make up the bulk of all life on this planet today, and carbon is the second most abundant element in the human body (after oxygen).

When we talk about discovering elements, our minds often leap to “who”. But, as we’ll see throughout this journey, that’s never an entirely straightforward question. The word “carbon” comes from the Latin carbo, meaning coal and charcoal. Humans have known about charcoal for many thousands of years — after all, if you can make a fire, it’s not long before you start to wonder if you can do something with this leftover black stuff. We’ll never know who first “discovered” carbon. But we can be sure of one thing: it definitely wasn’t an 18th century European scientist.

Diamond is a form of carbon used by humans for over 6000 years.

Then there are diamonds, although of course it took people a bit longer to understand how diamonds and other forms of carbon were connected. Human use of diamonds may go back further than we imagine, too. There’s evidence that the Chinese used diamonds to grind and polish ceremonial tools as long as 6,000 years ago.

Even the question of who first identified carbon as an element isn’t entirely straightforward. In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was turned into steel by absorbing some substance. In 1772, Lavoisier showed for the first time that diamonds could burn (contrary to a key plot point in a 1998 episode of Columbo).

In 1779, Scheele demonstrated that graphite wasn’t lead, but rather was a form of charcoal that formed aerial acid (today known as carbonic acid) when it was burned and the products dissolved in water. In 1786 Claude Louis Berthollet, Gaspard Monge and C. A. Vandermonde again confirmed that graphite was mostly carbon, and in 1796, Smithson Tennant showed that burning diamond turned limewater milky — the established test for carbon dioxide gas — and argued that diamond and charcoal were chemically identical.

Even that isn’t quite the end of the story: fullerenes were discovered in 1985, and Harry Kroto, Robert Curl, and Richard Smalley were awarded the 1996 Nobel Prize in Chemistry for “the discovery of carbon atoms bound in the form of a ball”.

Type “who discovered carbon” into a search engine and Lavoisier generally appears, but really? He was just one of many, most of whose names we’ll never know.

Zinc

Brass, an alloy of copper and zinc, has been used for thousands of years.

Now for the other end of the alphabet: zinc. It’s another old one, although not quite as old as carbon. Zinc’s history is inextricably linked with copper, because zinc ores have been used to make brass alloys for thousands of years. Bowls made of alloyed tin, copper and zinc have been discovered which date back to at least the 9th century BCE, and ornaments around 2,500 years old have also been found.

It’s also been used in medicine for a very long time. Zinc carbonate pills, thought to have been used to treat eye conditions, have been found on a cargo ship wrecked off the Italian coast around 140 BCE, and zinc is mentioned in Indian and Greek medical texts as early as the 1st century CE. Alchemists burned zinc in air in 13th century India and collected the white, woolly tufts that formed. They called it philosopher’s wool, or nix alba (“white snow”). Today, we know the same thing as zinc oxide.

The name zinc, or something like it, was first documented by Paracelsus in the 16th century, who called it “zincum” or “zinken” in his book, Liber Mineralium II. The name might be derived from the German zinke, meaning “tooth-like” — because crystals of zinc have a jagged, tooth-like appearance. But it could also suggest “tin-like”, since the German word zinn means tin. It might even be from the Persian word سنگ, “seng”, meaning stone.

These days, zinc is often used as a coating on other metals, to prevent corrosion.

P. M. de Respour formally reported that he had extracted metallic zinc from zinc oxide in 1668, although as I mentioned above, in truth it had been extracted centuries before then. In 1738, William Champion patented a process to extract zinc from calamine (a zinc carbonate ore) in a vertical retort smelter, and Anton von Swab also distilled zinc from calamine in 1742.

Despite all that, credit for discovery of zinc usually goes to Andreas Marggraf, who’s generally considered the first to recognise zinc as a metal in its own right, in 1746.

Helium

Evidence of helium was first discovered during a solar eclipse.

Ironically for an element which is (controversially) used to fill balloons, helium’s discovery is easier to pin down. In fact, we can name a specific day: August 18, 1868. The astronomer Jules Janssen was studying the chromosphere of the sun during a total solar eclipse in Guntur, India, and found a bright, yellow line with a wavelength of 587.49 nm.

In case you thought this was going to be simple, though, he didn’t recognise the significance of the line immediately, thinking it was caused by sodium. But then, later the same year, Norman Lockyer also observed a yellow line in the solar spectrum — which he concluded was caused by an element in the Sun unknown on Earth. Lockyer and Edward Frankland named the element from the Greek word for the Sun, ἥλιος (helios).

Janssen and Lockyer may have identified helium, but they didn’t find it on Earth. That discovery was first made by Luigi Palmieri, analysing volcanic material from Mount Vesuvius in 1881. And it wasn’t until 1895 that William Ramsay first isolated helium by treating the mineral cleveite (formula UO2) with acid whilst looking for argon.

Mendeleev’s early versions of the periodic table, such as this one from 1871, did not include any of the noble gases (click for image source).

Interestingly, Mendeleev’s 1869 periodic table had no noble gases as there was very little evidence for them at the time. When Ramsay discovered argon, Mendeleev assumed it wasn’t an element because of its unreactivity, and it was several years before he was convinced that any of what we now call the noble gases should be included. As a result, helium didn’t appear in the periodic table until 1902.

Who shall we say discovered helium? The astronomers, who first identified it in our sun? Or the chemists, who managed to collect actual samples on Earth? Is an element truly “discovered” if you can’t prove you had actual atoms of it — even for a brief moment?

Francium

So far you may have noticed that all of these discoveries have been male-dominated. This is almost certainly not because women were never involved in science, as there are plenty of records suggesting that women often worked in laboratories in various capacities — it’s just that their male counterparts usually reported the work. As a result, the men got the fame, while the women’s stories were, a lot of the time, lost.

Marguerite Perey discovered francium (click for image source).

Of course, the name that jumps to mind at this point is Marie Curie, who famously discovered polonium and radium and had a third element, curium, named in honour of her and her husband’s work. But she’s famous enough. Let’s instead head over to the far left of the periodic table and have a look at francium.

Mendeleev predicted there ought to be an element here, following the trend of the alkali metals. He gave it the placeholder name of eka-caesium, but its existence wasn’t to be confirmed for some seventy years. A number of scientists claimed to have found it, but its discovery is formally recorded as having been made in January 1939 by Marguerite Perey. After all the previous failures, Perey was incredibly meticulous and thorough, carefully eliminating all possibility that the unknown element might be thorium, radium, lead, bismuth, or thallium.

Perey temporarily named the new alkali metal actinium-K (since it’s the result of the alpha decay of actinium-227, ²²⁷Ac), and proposed the official name of catium (with the symbol Cm), since she believed it to be the most electropositive cation of the elements.

But the symbol Cm was assigned to curium, and Irène Joliot-Curie, one of Perey’s supervisors, argued against the name “catium”, feeling it suggested the element was something to do with cats. Perey then suggested francium, after her home country of France, and this was officially adopted in 1949.

A sample of uraninite containing perhaps 100,000 atoms of francium-223 (click for image source).

Francium was the last element to be discovered in nature. Trace amounts occur in uranium minerals, but it’s incredibly scarce. Its most stable isotope has a half life of just 22 minutes, and bulk francium has never been observed. Famously, there’s at most 30 g of francium in the Earth’s crust at any one time.

Of all the elements I’ve mentioned, this is perhaps the most clear-cut case. Perey deservedly takes the credit for discovering francium. But even then, she wouldn’t have been able to prove so conclusively that the element she found wasn’t something else had it not been for all the false starts that came before. And then there are all the other isotopes of francium, isolated by a myriad of scientists in the subsequent years…

Tennessine

All of which brings us to one of the last elements to be discovered: tennessine (which I jokingly suggested ought to be named octarine back in 2016). As I mentioned above, francium was the last element to be discovered in nature: tennessine doesn’t exist on Earth. It has only ever been created in a laboratory, by firing a calcium beam into a target made of berkelium (Bk) and smashing the two elements together in a process called nuclear fusion.

Element 117, tennessine, was named after Tennessee in the USA.

Like tennessine, berkelium isn’t available on Earth and had to be made in a nuclear reactor at Oak Ridge National Laboratory (ORNL) in Tennessee — the reason for the new element’s name. One of the scientists involved, Clarice E. Phelps, is believed to be the first African American to be involved in the discovery of a chemical element in recent history, having worked on the purification of the ²⁴⁹Bk before it was shipped to Russia and used to help discover element 117.

Tennessine’s discovery was officially announced in Dubna in 2010 — the result of a Russian-American collaboration — and the name tennessine was officially adopted in November 2016.

Who discovered it? Well, the lead name on the paper published in Physical Review Letters is Yuri Oganessian (for whom element 118 was named), but have a look at that paper and you’ll see there’s a list of over 30 names, and that doesn’t even include all the other people who worked in the laboratories, making contributions as part of their daily work.

From five to many…

There’s a story behind every element, and it’s almost always one with a varied cast of characters.

As I said at the start, when we talk about discovering elements, our minds often leap to “who” — but they probably shouldn’t. Scientists really can’t work entirely alone: collaboration and communication are vital aspects of science, because without them everyone would have to start from scratch all the time, and humans would never have got beyond “fire, hot”. As Isaac Newton famously said in a letter in 1675: “If I have seen further it is by standing on the shoulders of giants.”



This post was written with the help of Kit Chapman (so, yes: it’s by Kit and Kat!). Kit’s new book, ‘Superheavy: Making and Breaking the Periodic Table’, will be published by Bloomsbury Sigma on 13th June.


Like the Chronicle Flask’s Facebook page for regular updates, or follow @chronicleflask on Twitter. Content is © Kat Day 2019. You may share or link to anything here, but you must reference this site if you do. If you enjoy reading my blog, please consider buying me a coffee through Ko-fi using the button below.

What is Water? The Element that Became a Compound

November 2018 marks the 235th anniversary of the day when Antoine Lavoisier proved water to be a compound, rather than an element.

I’m a few days late at the time of writing, but November 12th 2018 was the 235th anniversary of an important discovery. It was the day, in 1783, that Antoine Lavoisier formally declared water to be a compound, not an element.

235 years seems like an awfully long time, probably so long ago that no one knew anything very much. Practically still eye of newt, tongue of bat and leeches for everyone, right? Well, not quite. In fact, there was some nifty science and engineering going on at the time. It was the year that Jean-François Pilâtre de Rozier and François Laurent made the first untethered hot air balloon flight, for example. And chemistry was moving on swiftly: lots of elements had been isolated, including oxygen (1771, by Carl Wilhelm Scheele) and hydrogen (officially by Henry Cavendish in 1766, although others had observed it before he did).

Cavendish had reported that hydrogen produced water when it reacted with oxygen (known then as inflammable air and dephlogisticated air, respectively), and others had carried out similar experiments. However, at the time most chemists favoured phlogiston theory (hence the names) and tried to interpret and explain their results accordingly. Phlogiston theory was the idea that anything which burned contained a fire-like element called phlogiston, which was then “lost” when the substance burned and became “dephlogisticated”.

Cavendish, in particular, explained the fact that inflammable air (hydrogen) left droplets of “dew” behind when it burned in “common air” (the stuff in the room) in terms of phlogiston, by suggesting that water was present in each of the two airs before ignition.

Antoine-Laurent Lavoisier proved that water was a compound. (Line engraving by Louis Jean Desire Delaistre, after a design by Julien Leopold Boilly.)

Lavoisier was very much against phlogiston theory. He carried out experiments in closed vessels with enormous precision, going to great lengths to prove that many substances actually became heavier when they burned and not, as phlogiston theory would have it, lighter. In fact, it’s Lavoisier we have to thank for the names “hydrogen” and “oxygen”. Hydrogen is Greek for “water-former”, whilst oxygen means “acid former”.

When, in June 1783, Lavoisier found out about Cavendish’s experiment he immediately reacted oxygen with hydrogen to produce “water in a very pure state” and prove that the mass of the water which formed was equal to the combined masses of the hydrogen and oxygen he started with.

He then went on to decompose water into oxygen and hydrogen by heating a mixture of water and iron filings. The oxygen that formed combined with the iron to form iron oxide, and he collected the hydrogen gas over mercury. Thanks to his careful measurements, Lavoisier was able to demonstrate that the increased mass of the iron filings plus the mass of the collected gas was, again, equal to the mass of the water he had started with.
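
In modern notation (which Lavoisier didn’t have, of course), the two experiments can be summarised roughly like this; the iron oxide usually quoted for iron and steam is magnetite, Fe₃O₄:

2H₂ + O₂ → 2H₂O (hydrogen burned in oxygen gives pure water)
3Fe + 4H₂O → Fe₃O₄ + 4H₂ (water passed over hot iron gives iron oxide and hydrogen)

The masses balance in both directions: 4 g of hydrogen combines with 32 g of oxygen to give exactly 36 g of water, which is precisely the sort of careful book-keeping Lavoisier relied on.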

Water is a compound of hydrogen and oxygen, with the formula H2O.

There were still arguments, of course (there always are), but phlogiston theory was essentially doomed. Water was a compound, made of two elements, and the process of combustion was nothing more mysterious than elements combining in different ways.

As an aside, Scottish chemist Elizabeth Fulhame deserves a mention at this point. Just a few years after Lavoisier she went on to demonstrate through experiment that many oxidation reactions occur only in the presence of water, but the water is regenerated at the end of the reaction. She is credited today as the chemist who invented the concept of catalysis. (Which is a pretty important concept in chemistry, and yet her name never seems to come up…)

Anyway, proving water’s composition becomes a lot simpler when you have a ready supply of electricity. The first scientist to formally demonstrate this was William Nicholson, in 1800. He discovered that when leads from a battery are placed in water, the water breaks up to form hydrogen and oxygen bubbles, which can be collected separately at the submerged ends of the wires. This is the process we now know as electrolysis.

You can easily carry out the electrolysis of water at home.

In fact, this is a really easy (and safe, I promise!) experiment to do yourself, at home. I did it myself, using an empty TicTac box, two drawing pins, a 9V battery and a bit of baking soda (sodium hydrogencarbonate) dissolved in water – you need this because water on its own is a poor conductor.

The drawing pins are pushed through the bottom of the plastic box, the box is filled with the solution, and then it’s balanced on the terminals of the battery. I’ve used some small test tubes here to collect the gases, but you’ll be able to see the bubbles without them.

Bubbles start to appear immediately. I left mine for about an hour and a half, at which point the test tube on the negative terminal (the cathode) was completely full of gas, which produced a very satisfying squeaky pop when I placed it over a flame.

The positive electrode (the anode) ended up completely covered in what I’m pretty sure is a precipitate of iron hydroxide (the drawing pins presumably being plated steel), which meant that very little oxygen was produced after the first couple of minutes. This is why in proper electrolysis experiments inert graphite or, even better, platinum, electrodes are used. If you do that, you’ll get a 1:2 ratio by volume of oxygen to hydrogen, thus proving water’s formula (H2O) as well.
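
For anyone who wants the equations behind the squeaky pop and that 1:2 ratio, here’s a simplified picture of what happens at each electrode (the exact species involved depend on the electrolyte, but the overall result is the same):

2H₂O + 2e⁻ → H₂ + 2OH⁻ (at the negative electrode, the cathode)
2H₂O → O₂ + 4H⁺ + 4e⁻ (at the positive electrode, the anode)
2H₂O → 2H₂ + O₂ (overall)

Two molecules of hydrogen form for every molecule of oxygen, which is why the tube over the cathode fills up twice as fast.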

So there we have it: water is a compound, and not an element. And if you’d like to amuse everyone around the Christmas dinner table, you can prove it with a 9V battery and some drawing pins. Just don’t nick the battery out of your little brother’s favourite toy, okay? (Or, if you do, don’t tell him it was my idea.)


Like the Chronicle Flask’s Facebook page for regular updates, or follow @chronicleflask on Twitter. Content is © Kat Day 2018. You may share or link to anything here, but you must reference this site if you do.

If you enjoy reading my blog, please consider buying me a coffee through Ko-fi using the button below.

Marvellous Mushroom Science

Glistening ink caps produce a dark, inky substance.

Yesterday I had the fantastic experience of a “fungi forage” with Dave Winnard from Discover the Wild, organised by Incredible Edible Oxford. There are few nicer things than wandering around beautiful Oxfordshire park- and woodland on a sunny October day, but Dave is also an incredibly knowledgeable guide. I’ve always thought mushrooms and fungi were interesting – living organisms that are neither plants nor animals and which we rely on for everything from antibiotics to soy sauce – but I had lots to learn.

Did you know, for example, that fungi form some of the largest living organisms on our planet? And that without them most of our green plants wouldn’t have evolved and probably wouldn’t be here today?

And from a practical point of view, what about the fact that people once used certain fungi to light fires? I’ve always imagined fungi as being quite wet things with a high water content (unless they’re deliberately dried, of course), but some are naturally very dry. Ötzi, the mummified man thought to have lived between 3400 and 3100 BCE, was found with two types of fungus on him: birch fungus, which has antiparasitic properties, and a type of tinder fungus which can be ignited with a single spark and will smoulder for days.

Coprine causes unpleasant symptoms, including nausea and vomiting, when consumed with alcohol.

Then, of course, there’s all the interesting chemistry. Early on in the day, we came across some glistening ink caps. The gills of these disintegrate to produce a black, inky liquid which contains a form of melanin and can be used as ink. And there’s more to this story: as I’ve already mentioned, fungi are not plants and they can’t photosynthesise, but it seems that some fungi do use melanin to harness gamma rays as energy for growth. Extra mushrooms for the Hulk’s breakfast, then?

Moving away from pigments for a moment, a species related to the glistening ink cap, the common ink cap, contains a chemical called coprine. This causes lots of unpleasant symptoms if it’s consumed with alcohol, much like disulfiram, the drug used to treat alcoholism. For this reason, one of this mushroom’s other names is tippler’s bane. The coprine in the mushrooms effectively causes an instant hangover by blocking the breakdown of acetaldehyde (also known as ethanal), the toxic intermediate formed as alcohol is metabolised, so it builds up in the body. Definitely don’t pair that mushroom omelette with a nice bottle of red. Worse, you’ll need to stay off the booze for a while afterwards: apparently the effects can linger for a full three days.
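
In outline, the body normally deals with alcohol in two enzyme-driven steps, and it’s the second step that coprine (or rather, the compound it breaks down into) blocks:

ethanol (CH₃CH₂OH) → ethanal (CH₃CHO) → ethanoate (CH₃COO⁻)

The first step is carried out by alcohol dehydrogenase and the second by aldehyde dehydrogenase; with the second enzyme out of action, ethanal accumulates, and ethanal is largely to blame for the flushing, nausea and pounding head.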

Yellow stainer mushrooms look like field mushrooms, but are poisonous.

We also came across some yellow stainer mushrooms. These look a lot like field mushrooms, but be careful – they aren’t edible. They cause nasty gastric symptoms and are reportedly responsible for most cases of mushroom poisoning in this country, although some people seem to be able to eat them without ill effect. They had a slightly chemical scent that reminded me of “new trainer” smell – sort of rubbery and plasticky. It’s often described as phenolic, but I have to say I didn’t detect that myself – although yellow stainers have been shown to contain phenol and this could account for their poisonous nature. Anyway, it was an aroma that wouldn’t be entirely unpleasant if I were opening a new shoebox, but it wasn’t something I’d really want to eat. Apparently the smell gets stronger as you cook them, so don’t ignore what your nose is telling you if you think you have a nice pan of field mushrooms.

4,4′-Dimethoxyazobenzene is an azo dye.

The real giveaway with yellow stainers, though, is their tendency to turn yellow when bruised or scratched, hence the name. This, it seems, is due to 4,4′-dimethoxyazobenzene. The name might not be familiar, but A-level Chemistry students will recognise the structure: it’s an azo-dye. Quite apart from being a very useful word in Scrabble, azo compounds are well-known for their characteristic orange/yellow colours. It’s not really clear whether it forms in the mushroom due to some sort of oxidation reaction, or whether it’s in the cells anyway but only becomes visible when the cells are damaged. Either way, it’s something to look out for if you spot a patch of what look like field mushrooms.

The blushing wood mushroom.

We also came across several species which are safe to eat. One I might look out for in future is the blushing wood mushroom. As is often the way with fungi, the name is literal rather than merely poetic. These mushrooms have a light brown cap, beige gills, and a pale stem, but they turn bright red when cut or scratched due to the formation of an ortho-quinone. It’s quite a dramatic colour-change, and makes them pretty easy to identify. Apparently they’re normally uncommon here, but we found quite a lot of them, which might be something to do with this year’s unusually hot and dry summer.

Red ortho-quinone causes blushing wood mushrooms to literally blush.

I tried to find out the reasons for these colour-changes. In the plant and animal kingdoms pigments are usually there for good reason: camouflage, signalling and communication or, as with chlorophyll, as a way of making other substances. Fruits, for example, often turn bright red as they ripen because it makes them stand out from the green foliage and encourages animals to eat them so that the seeds can be spread. Likewise, they’re green when they’re unripe because it makes them less obvious and less appealing. But what’s the advantage for the mushroom to change colour once it’s already damaged? Perhaps there isn’t one, and it’s just an accident of their biology, but if so it seems strange that it’s a feature of several species. I couldn’t find the answer; if any mycologists are reading this and know, get in touch!

Velvet shank mushrooms.

Other edible species we met were fairy ring champignons, field blewits and jelly ear fungus – which literally looks like a sort of transparent ear. I’ll definitely be looking out for all of these in the future, but it’s important to watch out for dangerous lookalikes. Funeral bell mushrooms, for example, look like the velvet shank mushrooms we found but, once again, the name is quite literal – funeral bells contain amatoxins and eating them can cause kidney and liver failure. As Dave was keen to remind us: never eat anything you can’t confidently name!


Like the Chronicle Flask’s Facebook page for regular updates, or follow @chronicleflask on Twitter. Content is © Kat Day 2018. You may share or link to anything here, but you must reference this site if you do.

If you enjoy reading my blog, please consider buying me a coffee through Ko-fi using the button below.

A tale of chemistry, biochemistry, physics and astronomy – and shiny, silver balls

A new school term has started here, and for me this year that’s meant more chemistry experiments – hurrah!

Okay, actually round-bottomed flasks

The other day it was time for the famous Tollens’ reaction. For those that don’t know, this involves a mixture of silver nitrate, sodium hydroxide and ammonia (which has to be freshly made every time as it doesn’t keep). Combine this concoction with an aldehyde in a glass container and warm it up a bit and it forms a beautiful silver layer on the glass. Check out my lovely silver balls!

This reaction is handy for chemists because the silver mirror only appears with aldehydes and not with other, similar molecules (such as ketones). It works because aldehydes are readily oxidised or, looking at it the other way round, the silver ions (Ag+) are readily reduced by the aldehyde to form silver metal (Ag) – check out this Compound Interest graphic for a bit more detail.
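
For the curious (or for A level students revising), the simplified equation usually quoted for the silver mirror is:

RCHO + 2[Ag(NH₃)₂]⁺ + 3OH⁻ → RCOO⁻ + 2Ag + 4NH₃ + 2H₂O

where RCHO is the aldehyde: it’s oxidised to a carboxylate ion, while the diamminesilver(I) ions are reduced to metallic silver, which deposits on the glass.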

But this is not just the story of an interesting little experiment for chemists. No, this is a story of chemistry, biochemistry, physics, astronomy, and artisan glass bauble producers. Ready? Let’s get started!

Bernhard Tollens (click for link to image source)

The reaction is named after Bernhard Tollens, a German chemist who was born in the mid-19th century. It’s one of those odd situations where everyone – well, everyone who’s studied A level Chemistry anyway – knows the name, but hardly anyone seems to have any idea who the person was.

Tollens went to school in Hamburg, Germany, and his science teacher was Karl Möbius. No, not the Möbius strip inventor (that was August Möbius): Karl Möbius was a zoologist and a pioneer in the field of ecology. He must have inspired the young Tollens to pursue a scientific career, because after he graduated Tollens first completed an apprenticeship at a pharmacy before going on to study chemistry at Friedrich Wöhler’s laboratory in Göttingen. If Wöhler’s name seems familiar it’s because he was the co-discoverer of beryllium and silicon – without which the electronics I’m using to write this article probably wouldn’t exist.

After he obtained his PhD Tollens worked at a bronze factory, but it wasn’t long before he left to begin working with none other than Emil Erlenmeyer – yes, he of the Erlenmeyer flask, otherwise known as… the conical flask. (I’ve finally managed to get around to mentioning the piece of glassware from which this blog takes its name!)

It seems though that Tollens had itchy feet, as he didn’t stay with Erlenmeyer for long, either. He worked in Paris and Portugal before eventually returning to Göttingen in 1872 to work on carbohydrates, going on to discover the structures of several sugars.

Table sugar is sucrose, which doesn’t produce a silver mirror with Tollens’ reagent

As readers of this blog will know, the term “sugar” often gets horribly misused by, well, almost everyone. It’s a broad term which very generally refers to carbon-based molecules containing groups of O-H and C=O atoms. Most significant to this story are the sugars called monosaccharides and disaccharides. The two most famous monosaccharides are fructose, or “fruit sugar”, and glucose. On the other hand sucrose, or “table sugar”, is a disaccharide.

All of the monosaccharides will produce a positive result with Tollens’ reagent (even when their structures don’t appear to contain an aldehyde group – this gets a bit complicated but check out this link if you’re interested). However, sucrose does not. Which means that Tollens’ reagent is a quick and easy test that can be used to distinguish between glucose and sucrose.

Laboratory Dewar flask with silver mirror surface

And it’s not just useful for identifying sugars. Tollens’ reagent, or a variant of it, can also be used to create a high-quality mirror surface. Until the 1900s, if you wanted to make a mirror you had to apply a thin foil of an alloy – called “tain” – to the back of a piece of glass. It’s difficult to get a really good finish with this method, especially if you’re trying to create a mirror on anything other than a perfectly flat surface. If you wanted a mirrored flask, say to reduce heat radiation, this was tricky. Plus it required quite a lot of silver, which was expensive and made the finished item quite heavy.

Which was why the German chemist Justus von Liebig (yep, the one behind the Liebig condenser) developed a process for depositing a thin layer of pure silver on glass in 1835. After some tweaking and refining this was perfected into a method which bears a lot of resemblance to the Tollens’ reaction: a diamminesilver(I) solution is mixed with glucose and sprayed onto the surface of the glass, where the silver ions are reduced to elemental silver. This process ticked a lot of boxes: not only did it produce a high-quality finish, but it also used such a tiny quantity of silver that it was really cheap.

And it turned out to be useful for more than just laboratory glassware. The German astronomer Carl August von Steinheil and the French physicist Léon Foucault soon began to use it to make telescope mirrors: for the first time astronomers had cheap, lightweight mirrors that reflected far more light than their old mirrors had ever done.

People also noticed how pretty the effect was: German artisans began to make Christmas tree decorations by pouring silver nitrate into glass spheres, followed by ammonia and finally a glucose solution – producing beautiful silver baubles which were exported all over the world, including to Britain.

These days, silvering is done by vacuum deposition, which produces an even more perfect surface, but you just can’t beat the magic of watching the inside of a test tube or a flask turning into a beautiful, shiny mirror.

Speaking of which, according to @MaChemGuy on Twitter, this is the perfect, foolproof, silver mirror method:
° Place 5 cm³ 0.1 mol dm⁻³ AgNO₃(aq) in a test tube.
° Add concentrated NH₃ dropwise until the precipitate dissolves. (About 3 drops.)
° Add a spatula of glucose and dissolve.
° Plunge test-tube into freshly boiled water.

Silver nitrate stains the skin – wear gloves!

One word of warning: be careful with the silver nitrate and wear gloves. Otherwise, like me, you might end up with brown stains on your hands that are still there three days later…


Like the Chronicle Flask’s Facebook page for regular updates, or follow @chronicleflask on Twitter. All content is © Kat Day 2018. You may share or link to anything here, but you must reference this site if you do.

If you enjoy reading my blog, please consider buying me a coffee through Ko-fi using the button below.

Carbon dioxide: the good, the bad, and the future

Carbon dioxide is a small molecule with the structure O=C=O

Carbon dioxide has been in and out of the news this summer for one reason or another, but why? Is this stuff helpful, or heinous?

It’s certainly a significant part of our history. Let’s take that history to its literal limits and start at the very beginning. To quote the great Terry Pratchett: “In the beginning, there was nothing, which exploded.”

(Probably.) This happened around 13.8 billion years ago. Afterwards, stuff flew around for a while (forgive me, cosmologists). Then, about 4.5 billion years ago, the Earth formed out of debris that had collected around our Sun. Temperatures on this early Earth were extremely hot, there was a lot of volcanic activity, and there might have been some liquid water. The atmosphere was mostly hydrogen and helium.

The early Earth was bashed about by other space stuff, and one big collision almost certainly resulted in the formation of the Moon. A lot of other debris vaporised on impact releasing gases, and substances trapped within the Earth started to escape from its crust. The result was Earth’s so-called second atmosphere.

An artist’s concept of the early Earth. Image credit: NASA. (Click image for more.)

This is where carbon dioxide enters stage left… er… stage under? Anyway, it was there, right at this early point, along with water vapour, nitrogen, and smaller amounts of other gases. (Note, no oxygen, that is, O2 – significant amounts of that didn’t turn up for another 1.7 billion years, or 2.8 billion years ago.) In fact, carbon dioxide wasn’t just there, it made up most of Earth’s atmosphere, probably not so different from Mars’s atmosphere today.

The point being that carbon dioxide is not a new phenomenon. It is, in fact, the very definition of an old phenomenon. It’s been around, well, pretty much forever. And so has the greenhouse effect. The early Earth was hot. Really hot. Possibly 200 °C or so, because these atmospheric gases trapped the Sun’s heat. Over time, lots and lots of time, the carbon dioxide levels reduced as it became trapped in carbonate rocks, dissolved in the oceans and was utilised by lifeforms for photosynthesis.

Fast-forward a few billion years to the beginning of the twentieth century and atmospheric carbon dioxide levels were about 300 ppm (0.03%), tiny compared to oxygen (about 20%) and nitrogen (about 78%).

Chemists and carbon dioxide

Flemish chemist Jan Baptist van Helmont carried out an experiment which eventually led to the discovery of carbon dioxide gas.

Let’s pause there for a moment and have a little look at some human endeavours. In about 1640 Flemish chemist Jan Baptist van Helmont discovered that if he burned charcoal in a closed vessel, the mass of the resulting ash was much less than that of the original charcoal. He had no way of knowing, then, that he had formed and collected carbon dioxide gas, but he speculated that some of the charcoal had been transmuted into spiritus sylvestris, or “wild spirit”.

In 1754 Scottish chemist Joseph Black noticed that heating calcium carbonate, aka limestone, produced a gas which was heavier than air and which could “not sustain fire or animal life”. He called it “fixed air”, and he’s often credited with carbon dioxide’s discovery, although arguably van Helmont got there first. Black was also the first person to come up with the “limewater test“, where carbon dioxide is bubbled through a solution of calcium hydroxide. He used the test to demonstrate that carbon dioxide was produced by respiration, an experiment still carried out in schools more than 250 years later to show that the air we breathe out contains more carbon dioxide than the air we breathe in.
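
The limewater test itself is simple chemistry: the carbon dioxide reacts with dissolved calcium hydroxide to form calcium carbonate, which is insoluble in water, and it’s this suspended white solid that makes the limewater turn milky:

CO₂ + Ca(OH)₂ → CaCO₃ + H₂O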

In 1772 that most famous of English chemists, Joseph Priestley, experimented with dripping sulfuric acid (or vitriolic acid, as he knew it) on chalk to produce a gas which could be dissolved in water. Priestley is often credited with the invention of soda water as a result (more on this in a bit), although physician Dr William Brownrigg probably discovered carbonated water earlier – but he never published his work.

In the late 1700s carbon dioxide became more widely known as “carbonic acid gas”, as seen in this article dated 1853. In 1823 Humphry Davy and Michael Faraday managed to produce liquefied carbon dioxide at high pressures. Adrien-Jean-Pierre Thilorier was the first to describe solid carbon dioxide, in 1835. The name carbon dioxide was first used around 1869, when the term “dioxide” came into use.

A diagram from “Impregnating Water with Fixed Air”, printed for J. Johnson, No. 72, in St. Pauls Church-Yard, 1772.

Back to Priestley for a moment. In the late 1700s, a glass of volcanic spring water was a common treatment for digestive problems and general ailments. But what if you didn’t happen to live near a volcanic spring? Joseph Black, you’ll remember, had established that CO2 was produced by living organisms, so it occurred to Priestley that perhaps he could hang a vessel of water over a fermentation vat at a brewery and collect the gas that way.

But it wasn’t very efficient. As Priestley himself said, “the surface of the fixed air is exposed to the common air, and is considerably mixed with it, [and] water will not imbibe so much of it by the process above described.”

It was then that he tried his experiment with vitriolic acid, which allowed for much greater control over the carbonation process. Priestley proposed that the resulting “water impregnated with fixed air” might have a number of medical applications. In particular, perhaps because the water had an acidic taste in a similar way that lemon-infused water does, he thought it might be an effective treatment for scurvy. Legend has it that he gave the method to Captain Cook for his second voyage to the Pacific for this reason. It wouldn’t have helped of course, but it does mean that Cook and his crew were some of the first people to produce carbonated water for the express purpose of drinking a fizzy drink.

Refreshing fizz

You will have noticed that, despite all his work, there is no fizzy drink brand named Priestley (at least, not that I know of).

Joseph Priestley is credited with developing the first method for making carbonated water.

But there is one called Schweppes. That’s because a German watchmaker named Johann Jacob Schweppe spotted Priestley’s paper and worked out a simpler, more efficient process, using sodium bicarbonate and tartaric acid. He went on to found the Schweppes Company in Geneva in 1783.

Today, carbonated drinks are made a little differently. You may have heard about carbon dioxide shortages this summer in the U.K. These arose because, these days, carbon dioxide is actually collected as a by-product of other processes: several bits of quite simple chemistry that add up to a really elegant sequence.

From fertiliser to fizzy drinks

It all begins, or more accurately ends, with ammonia fertiliser. As any GCSE science student who’s been even half paying attention can tell you, ammonia is made by reacting hydrogen with nitrogen during the Haber process. Nitrogen is easy to get hold of – as I’ve already said it makes up nearly 80% of our atmosphere – but hydrogen has to be made from hydrocarbons. Usually natural gas, or methane.

This involves another well-known process, called steam reforming, in which steam is reacted with methane at high temperatures in the presence of a nickel catalyst. This produces carbon monoxide, a highly toxic gas. But no problem! React that carbon monoxide with more water in the presence of a slightly different catalyst and you get even more hydrogen. And some carbon dioxide.
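
Written out, the sequence looks something like this (conditions and catalysts vary from plant to plant, so treat these as the textbook versions):

CH₄ + H₂O → CO + 3H₂ (steam reforming)
CO + H₂O → CO₂ + H₂ (the water-gas shift reaction)
N₂ + 3H₂ → 2NH₃ (the Haber process)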

Fear not, nothing is wasted here! The CO2 is captured and liquefied for all sorts of food-related and industrial uses, not least of which is fizzy drinks. This works well for all concerned because steam reforming produces large amounts of pure carbon dioxide. If you’re going to add it to food and drinks after all, you wouldn’t want a product contaminated with other gases.

Carbon dioxide is a by-product of fertiliser manufacture.

We ended up with a problem this summer in the U.K. because ammonia production plants operate on a schedule which is linked to the planting season. Farmers don’t usually apply fertiliser in the summer – when they’re either harvesting or about to harvest crops – so many ammonia plants shut down for maintenance in April, May, and June. This naturally leads to reduction in the amount of available carbon dioxide, but it’s not normally a problem because the downtime is relatively short and enough is produced the rest of year to keep manufacturers supplied.

This year, though, natural-gas prices were higher, while the price of ammonia stayed roughly the same. This meant that ammonia plants were in no great hurry to reopen, and that meant many didn’t start supplying carbon dioxide in July, just when a huge heatwave hit the UK, coinciding with the World Cup football (which tends to generate a big demand for fizzy pop, for some reason).

Which brings us back to our atmosphere…

Carbon dioxide calamity?

Isn’t there, you may be thinking, too much carbon dioxide in our atmosphere? In fact, that heatwave you just mentioned, wasn’t that a global warming thing? Can’t we just… extract carbon dioxide from our air and solve everyone’s problems? Well, yes and no. Remember earlier when I said that at the beginning of the twentieth century atmospheric carbon dioxide levels were about 300 ppm (0.03%)?

Over the last hundred years atmospheric carbon dioxide levels have increased from 0.03% to 0.04%

Today, a little over 100 years later, levels are about 0.04%. This is a significant increase in a relatively short period of time, but it’s still only a tiny fraction of our atmosphere (an important tiny fraction nonetheless – we’ll get to that in a minute).
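
To put that in the units scientists usually use: 0.04% is 0.04 in every 100, or 400 in every 1,000,000, i.e. roughly 400 parts per million (ppm), up from about 300 ppm a century ago.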

It is possible to distill gases from our air by cooling air down until it liquefies and then separating the different components by their boiling points. For example, nitrogen, N2, boils at a chilly -196 °C whereas oxygen, O2, boils at a mere -183 °C.

But there’s a problem: CO2 doesn’t have a liquid state at standard pressures. It forms a solid, which sublimes directly into a gas. For this reason carbon dioxide is usually removed from cryogenic distillation mixtures, because it would freeze solid and plug up the equipment. There are other ways to extract carbon dioxide from air but although they have important applications (keep reading) they’re not practical ways to produce large volumes of the gas for the food and drink industries.

Back to the environment for a moment: why is that teeny 0.04% causing us such headaches? How can a mere 400 CO2 molecules bouncing around with a million other molecules cause such huge problems?

For that, I need to take a little diversion to talk about infrared radiation, or IR.

Infrared radiation was first discovered by the astronomer William Herschel in 1800. He was trying to observe sun spots when he noticed that his red filter seemed to get particularly hot. In what I’ve always thought was a rather amazing intuitive leap, he then passed sunlight through a prism to split it, held a thermometer just beyond the red light that he could see with his eyes, and discovered that the thermometer showed a higher temperature than when placed in the visible spectrum.

He concluded that there must be an invisible form of light beyond the visible spectrum, and indeed there is: infrared light. It turns out that slightly more than half of the total energy from the Sun arrives on Earth in the form of infrared radiation.

What has this got to do with carbon dioxide? It turns out that carbon dioxide, or rather the double bonds O=C=O, absorb a lot of infrared radiation. By contrast, oxygen and nitrogen, which make up well over 90% of Earth’s atmosphere, don’t absorb infrared.

CO2 molecules also re-emit IR but, having bounced around a bit, not necessarily in the same direction and – and this is the reason that tiny amounts of carbon dioxide cause not so tiny problems – they transfer energy to other molecules in the atmosphere in the process. Think of each CO2 molecule as a drunkard stumbling through a pub, knocking over people’s pints and causing a huge bar brawl. A single disruptive individual can, indirectly, cause a lot of others to find themselves bruised and bleeding and wondering what the hell just happened.

Like carbon dioxide, water vapour also absorbs infrared, but it has a relatively short lifetime in our atmosphere.

Water vapour becomes important here too, because while O2 and N2 don’t absorb infrared, water vapour does. Water vapour has a relatively short lifetime in our atmosphere (about ten days compared to a decade for carbon dioxide) so its overall warming effect is less. Except that once carbon dioxide is thrown into the mix it transfers extra heat to the water, keeping it vapour (rather than, say, precipitating as rain) for longer and pushing up the temperature of the system even more.

Basically, carbon dioxide molecules trap heat near the planet’s surface. This is why carbon dioxide is described as a greenhouse gas and increasing levels are causing global warming. There are people who are still arguing this isn’t the case, but truly, they’ve got the wrong end of the (hockey) stick.

It’s not even a new concept. Over 100 years ago, in 1912, a short piece was published in the Rodney and Otamatea Times which said: “The furnaces of the world are now burning about 2,000,000,000 tons of coal a year. When this is burned, uniting with oxygen, it adds about 7,000,000,000 tons of carbon dioxide to the atmosphere yearly. This tends to make the air a more effective blanket for the earth and to raise its temperature.”
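
Those 1912 figures stand up to a quick back-of-the-envelope check. Treating coal as essentially pure carbon (a simplification, but not a bad one), each carbon atom (relative atomic mass 12) picks up two oxygen atoms when it burns to form carbon dioxide (relative molecular mass 44), so the mass of gas produced is roughly 44 ÷ 12 ≈ 3.7 times the mass of coal burned:

C + O₂ → CO₂
2,000,000,000 tons of coal × 3.7 ≈ 7,300,000,000 tons of CO₂

...which is pretty much the 7,000,000,000 tons the newspaper quoted.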

This summer has seen record high temperatures and some scientists have been warning of a “Hothouse Earth” scenario.

This 1912 piece suggested we might start to see effects in “centuries”. In fact, we’re seeing the results now. As I mentioned earlier, this summer has seen record high temperatures and some scientists have been warning of a “Hothouse Earth” scenario, where rising temperatures cause serious disruptions to ecosystems, society, and economies. The authors stressed it’s not inevitable, but preventing it will require a collective effort. They even published a companion document which included several possible solutions which, oddly enough, garnered rather fewer column inches than the “we’re all going to die” angle.

Don’t despair, DO something…

But I’m going to mention it, because it brings us back to CO2. There’s too much of it in our atmosphere. How can we deal with that? It’s simple really: first, stop adding more, i.e. stop burning fossil fuels. We have other technologies for producing energy. The reason we’re still stuck on fossil fuels at this stage is politics and money, and even the most obese of the fat cats are starting to realise that money isn’t much use if you don’t have a habitable planet. Well, most of them. (There’s probably no hope for some people, but we can at least hope that their damage-doing days are limited.)

There are some other, perhaps less obvious, sources of carbon dioxide and other greenhouse gases that might also be reduced, such as livestock, cement for building materials and general waste.

Forests trap carbon dioxide in land carbon sinks. More biodiverse systems generally store more carbon.

And then, we’re back to taking the CO2 out of the atmosphere. How? Halting deforestation would allow more CO2 to be trapped in so-called land carbon sinks. Likewise, good agricultural soil management helps to trap carbon underground. More biodiverse systems generally store more carbon, so if we could try to stop wiping out land and coastal systems, that would be groovy too. Finally, there’s the technological solution: carbon capture and storage, or CCS.

This, in essence, involves removing CO2 from the atmosphere and storing it in geological formations. The same thing the Earth has done for millennia, but more quickly. It can also be linked to bio-energy production in a process known as BECCS. It sounds like the perfect solution, but right now it’s energy intensive and expensive, and there are concerns that BECCS projects could end up competing with agriculture and damaging conservation efforts.

A new answer from an ancient substance?

Forming magnesite, or magnesium carbonate, may be one way to trap carbon dioxide.

Some brand new research might offer yet another solution. It’s another carbon-capture technology which involves magnesium carbonate, or magnesite (MgCO3). Magnesite forms slowly on the Earth’s surface, over hundreds of thousands of years, trapping carbon dioxide in its structure as it does.
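
One very simplified way to write the overall trapping step, for magnesium ions dissolved in water, is:

Mg²⁺ + CO₂ + H₂O → MgCO₃ + 2H⁺

That’s just an illustration of the net result (a carbon atom locked away in a solid carbonate), rather than the detailed pathway the researchers actually studied.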

It can easily be made quickly at high temperatures, but of course if you have to heat things up, you need energy, which might end up putting as much CO2 back in as you’re managing to take out. Recently a team of researchers at Trent University in Canada have found a way to form magnesite quickly at room temperature using polystyrene microspheres.

This isn’t something which would make much difference if, say, you covered the roof of everyone’s house with the microspheres, but it could be used in fuel-burning power generators (which could be burning renewables or even waste materials) to effectively scrub the carbon dioxide from their emissions. That technology on its own would make a huge difference.

And so here we are. Carbon dioxide is one of the oldest substances there is, as “natural” as they come. From breathing to fizzy drinks to our climate, it’s entwined in every aspect of our everyday existence. It is both friend and foe. Will we work out ways to save ourselves from too much of it in our atmosphere? Personally, I’m optimistic, so long as we support scientists and engineers rather than fight them…


Like the Chronicle Flask’s Facebook page for regular updates, or follow @chronicleflask on Twitter. All content is © Kat Day 2018. You may share or link to anything here, but you must reference this site if you do.

If you enjoy reading my blog, please consider buying me a coffee (I promise to use a reusable cup) through Ko-fi using the button below.

No need for slime panic: it’s not going to poison anyone

This is one of my favourite photos, so I’m using it again.

The school summer holidays are fast approaching and, for some reason, this always seems to get people talking about slime. Whether it’s because it’s a fun end-of-term activity, or it’s an easy bit of science for kids to do at home, or a bit of both, the summer months seem to love slimy stories. In fact, I wrote a piece about it myself in August 2017.

Which (hoho) brings me to the consumer group Which? because, on 17th July this year, they posted an article with the headline: “Children’s toy slime on sale with up to four times EU safety limit of potentially unsafe chemical” and the sub-heading: “Eight out of 11 popular children’s slimes we tested failed safety testing.”

The article is illustrated with lots of pots of colourful commercial slime with equally colourful names like Jupiter Juice. It says that, “exposure to excessive levels of boron could cause irritation, diarrhoea, vomiting and cramps in the short term,” and goes on to talk about possible risks of birth defects and developmental delays. Yikes. Apparently the retailer Amazon has removed several slime toys from sale since Which? got on the case.

The piece was, as you might expect, picked up by practically every news outlet there is, and within hours the internet was full of headlines warning of the dire consequences of handling multicoloured gloopy stuff.

Before I go any further, here’s a quick reminder: most slime is made by taking polyvinyl alcohol (PVA – the white glue stuff) and adding a borax solution, aka sodium tetraborate, which contains the element boron. The sodium tetraborate forms cross-links between the PVA polymer chains, and as a result you get viscous, slimy slime in place of runny, gluey stuff. Check out this lovely graphic created by @compoundchem for c&en’s Periodic Graphics:

The Chemistry of Slime from cen.acs.org (click image for link), created by Andy Brunning of @compoundchem

And so, back to the Which? article. Is the alarm justified? Should you ban your child from ever going near slime ever again?

Nah. Followers will remember that back in August last year, after I posted my own slime piece, I had a chat with boron-specialist David Schubert. He said at the time: “Borax has been repeated[ly] shown to be safe for skin contact. Absorption through intact skin is lower than the B consumed in a healthy diet” (B is the chemical symbol for the element boron). And then he directed me to a research paper backing up his comments.

Borax is a fine white powder. Mixed with water, it can be used to make slime.

This, by the way, is all referring to the chemical borax – which you might use if you’re making slime. In pre-made slime the borax has chemically bonded with the PVA, and that very probably makes it even safer – because it’s then even more difficult for any boron to be absorbed through skin.

Of course, and this really falls under the category of “things no one should have to say,” don’t eat slime. Don’t let your kids eat slime. Although even if they did, the risks are really small. As David said when we asked this time: “Borates have low acute toxicity. Consumption of the amount of borax present in a handful of slime would make one sick to their stomach and possibly cause vomiting, but no other harm would result. The only way [they] could harm themselves is by eating that amount daily.”

It is true that borax comes with a “reproductive hazard” warning label. Which? pointed out in their article that there is EU guidance on safe boron levels, and the permitted level in children’s toys has been set at 300 mg/kg for liquids and sticky substances (Edited 18th July, see * in Notes section below).

EU safety limits are always very cautious – an additional factor of at least 100 is usually incorporated. In other words, for example, if 1 g/kg exposure of a substance is considered safe, the EU limit is likely to be set at 0.01 g/kg – so as to make sure that even someone who’s really going to town with a thing would be unlikely to suffer negative consequences as a result.

The boron limit is particularly cautious and is based on animal studies (and it has been challenged). The chemists I spoke to told me it’s not representative of the actual hazards. Boron chemist Beth Bosley pointed out that while it is true that boric acid exposure has been shown to cause fetal abnormalities when it’s fed to pregnant rats, this finding hasn’t been reproduced in humans. Workers handling large quantities of borate in China and Turkey have been studied and no reproductive effects have been seen.

Rat studies, she said, aren’t wholly comparable because rats are unable to vomit, which is significant because it means a rat can be fed a large quantity of a boron-containing substance and it’ll stay in their system. Whereas a human who accidentally ingested a similar dose would almost certainly throw up. Plus, again, this is all based on consuming substances such as borax, not slime where the boron is tied up in polymer chains. There really is no way anyone could conceivably eat enough slime to absorb these sorts of amounts.

These arguments aside, we all let our children handle things that might be harmful if they ate them. Swallowing a whole tube of toothpaste would probably give your child an upset stomach, and it could even be dangerous if they did it on a regular basis, but we haven’t banned toothpaste “just in case”. We keep it out of reach when they’re not supposed to be brushing their teeth, and we teach them not to do silly things like eating an entire tube of Oral-B. Same basic principle applies to slime, even if it does turn out to contain more boron than the EU guidelines permit.

In conclusion: pots of pre-made slime are safe, certainly from a borax/boron point of view, so long as you don’t eat them. The tiny amounts of boron that might be absorbed through skin are smaller than the amounts you’d get from eating nuts and pulses, and not at all hazardous.

Making slime at home can also be safe, if you follow some sensible guidelines like, say, these ones:

Stay safe with slime by following this guidance

Slime on, my chemistry-loving friends!


Notes:
* When I looked for boron safety limits the first time, the only number I could find was the rather higher 1200 mg/kg. So I asked Twitter if anyone could direct me to the value Which? were using. I was sent a couple of links, one of which contained a lot of technical documentation, but I think the most useful is probably a “guide to international toy safety” pamphlet which includes a “Soluble Element Migration Requirements” table. In the row for boron, under “Category II: Liquid or sticky materials”, the value is indeed given as 300 mg/kg.

BUT, there is also “Category I: Dry, brittle, powder like or pliable materials”, and the value there is the much higher 1,200 mg/kg. Which raises the question: does slime count as “pliable” or “sticky”? It suggests to me that, say, a modelling clay product (pliable) would have the 4x higher limit. But surely the risk of exposure would be essentially the same? If 1,200 mg/kg is okay for modelling clay, I can’t see why it shouldn’t be for slime. In the Which? testing, only the Jupiter Juice product exceeded the Category I limit, and then not by that much (1,400 mg/kg).
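For anyone who likes to see the comparison laid out, here’s a quick Python check of that one reported figure against both category limits (the values are the ones quoted above from the Which? article; nothing else is implied):

```python
# Checking the one reported measurement against both migration limits quoted above.
# All values in mg/kg; figures as described in the text, not independently verified.
limits_mg_per_kg = {
    "Category I (dry, brittle, powder like or pliable)": 1200,
    "Category II (liquid or sticky)": 300,
}
jupiter_juice_mg_per_kg = 1400  # the only product reported to exceed even the Category I limit

for category, limit in limits_mg_per_kg.items():
    verdict = "exceeds" if jupiter_juice_mg_per_kg > limit else "is within"
    print(f"Jupiter Juice at {jupiter_juice_mg_per_kg} mg/kg {verdict} "
          f"the {limit} mg/kg {category} limit")
```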

Also (the notes are going to end up being longer than the post if I’m not careful), these values are migration limits, not limits on the amount allowed in the substance in total. Can anyone show that more than 300 mg/kg is able to migrate from the slime to the person handling it? Very unlikely. But again, don’t eat slime.

This is not an invitation to try and prove me wrong.

I suppose it’s possible that someone could sell slime that’s contaminated with some other toxic thing. But that could happen with anything. The general advice to “wash your/their hands and don’t eat it” will take you a long way.



Spectacular Strawberry Science!

Garden strawberries

Yay! It’s June! Do you know what that means, Chronicle Flask readers? Football? What do you mean, football? Who cares about that? (I jest – check out this excellent post from Compound Interest).

No, I mean it’s strawberry season in the U.K.! That means there will be much strawberry eating, because the supermarkets are full of very reasonably-priced punnets. There will also be strawberry picking, as we tramp along rows selecting the very juiciest fruits (and eating… well, just a few – it’s part of the fun, right?).

Is there any nicer fruit than these little bundles of red deliciousness? Surely not. (Although I do also appreciate a ripe blackberry.)

And as if their lovely taste weren’t enough, there’s loads of brilliant strawberry science, too!

This is mainly (well, sort of, mostly, some of the time) a chemistry blog, but the botany and history aspects of strawberries are really interesting too. The woodland strawberry (Fragaria vesca) was the first to be cultivated in the early 17th century, although strawberries have of course been around a lot longer than that. The word strawberry is thought to come from ‘streabariye’ – a term used by the Benedictine monk Aelfric in CE 995.

Woodland strawberries

Woodland strawberries, though, are small and round: very different from the large, tapering fruits we tend to see in shops today (their botanical name is Fragaria × ananassa – the ‘ananassa’ bit meaning pineapple, referring to their sweet scent and flavour).

The strawberries we’re most familiar with were actually bred from two other varieties. That means that modern strawberries are, technically, a genetically modified organism. But no need to worry: practically every plant we eat today is.

Of course, almost everyone’s heard that strawberries are not, strictly, a berry. It’s true; technically strawberries are what’s known as an “aggregate accessory” fruit, which means that they’re formed from the receptacle (the thick bit of the stem where flowers emerge) that holds the ovaries, rather than from the ovaries themselves. But it gets weirder. Those things on the outside that look like seeds? Not seeds. No, each one is actually an ovary, with a seed inside it. Basically strawberries are plant genitalia. There’s something to share with Grandma over a nice cup of tea and a scone.

Anyway, that’s enough botany. Bring on the chemistry! Let’s start with the bright red colour. As with most fruits, that colour comes from anthocyanins – water-soluble molecules which are odourless, moderately astringent, and brightly-coloured. They’re formed from the reaction of similar-sounding molecules called anthocyanidins with sugars. The main anthocyanin in strawberries is callistephin, otherwise known as pelargonidin-3-O-glucoside. It’s also found in the skin of certain grapes.

Anthocyanins are fun for chemists because they change colour with pH. It’s these molecules which are behind the famous red-cabbage indicator. Which means, yes, you can make strawberry indicator! I had a go myself – the results are below…

Strawberry juice acts as an indicator: pinky-purplish in an alkaline solution, bright orange in an acid.

As you can see, the strawberry juice is pinky-purplish in the alkaline solution (sodium hydrogen carbonate, aka baking soda, about pH 9), and bright orange in the acid (vinegar, aka acetic acid, about pH 3). Next time you find a couple of mushy strawberries that don’t look so tasty, don’t throw them away – try some kitchen chemistry instead!
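For the curious, here’s that observation written out as a (very) simple bit of Python – a cartoon based only on my two test solutions, not a proper model of anthocyanin chemistry:

```python
# A toy lookup based only on the two observations above: bright orange in vinegar
# (about pH 3) and pinky-purple in baking soda solution (about pH 9).
# Real anthocyanin colour changes are a continuum, so treat this as a cartoon.
def strawberry_indicator_colour(pH: float) -> str:
    return "bright orange" if pH < 7 else "pinky-purple"

for test_pH in (3, 9):
    print(f"pH {test_pH}: {strawberry_indicator_colour(test_pH)}")
```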

Pelargonidin-3-O-glucoside (callistephin) is the anthocyanin which gives strawberries their red colour. This is the form found at acidic pHs.

The reason we see this colour-changing behaviour is that the anthocyanin pigment gains an -OH group at alkaline pHs, and loses it at acidic pHs (as in the diagram here).

This small change is enough to alter the wavelengths of light absorbed by the compound, so we see different colours. The more green light that’s absorbed, the more pink/purple the solution appears. The more blue light that’s absorbed, the more orange/yellow we see.

Interestingly, anthocyanins behave slightly differently to most other pH indicators, which usually acquire a proton (H+) at low pH, and lose one at high pH.

Moving on from colour, what about the famous strawberry smell and flavour? That comes from furaneol, which is sometimes called strawberry furanone or, less romantically, DMHF. It’s the same compound which gives pineapples their scent (hence that whole Latin ananassa thing I mentioned earlier). The concentration of furaneol increases as the strawberry ripens, which is why riper strawberries smell stronger.

Along with menthol and vanillin, furaneol is one of the most widely-used compounds in the flavour industry. Pure furaneol is added to strawberry-scented beauty products to give them their scent, but only in small amounts – at high concentrations it has a strong caramel-like odour which, I’m told, can actually smell quite unpleasant.

As strawberries ripen their sugar content increases, they get redder, and they produce more scent

As strawberries ripen their sugar content (a mixture of fructose, glucose and sucrose) also changes, increasing from about 5% to 9% by weight. This change is driven by auxin hormones such as indole-3-acetic acid. At the same time, acidity – largely from citric acid – decreases.

Those who’ve been paying attention might be putting a few things together at this point: as the strawberry ripens, it becomes less acidic, which helps to shift its colour from more green-yellow-orange towards those delicious-looking purplish-reds. It’s also producing more furaneol, making it smell yummy, and its sugar content is increasing, making it lovely and sweet. Why is all this happening? Because the strawberry wants (as much as a plant can want) to be eaten, but only once it’s ripe – because that’s how its seeds get dispersed. Ripening is all about making the fruit more appealing – redder, sweeter, and nicer-smelling – to things that will eat it. Nature’s clever, eh?

There we have it: some spectacular strawberry science! As a final note, as soon as I started writing this I (naturally) found lots of other blogs about strawberries and summer berries in general. They’re all fascinating. If you want to read more, check out…

