Carbon dioxide: the good, the bad, and the future

Carbon dioxide is a small molecule with the structure O=C=O

Carbon dioxide has been in and out of the news this summer for one reason or another, but why? Is this stuff helpful, or heinous?

It’s certainly a significant part of our history. Let’s take that history to its literal limits and start at the very beginning. To quote the great Terry Pratchett: “In the beginning, there was nothing, which exploded.”

(Probably.) This happened around 13.8 billion years ago. Afterwards, stuff flew around for a while (forgive me, cosmologists). Then, about 4.5 billion years ago, the Earth formed out of debris that had collected around our Sun. Temperatures on this early Earth were extremely hot, there was a lot of volcanic activity, and there might have been some liquid water. The atmosphere was mostly hydrogen and helium.

The early Earth was bashed about by other space stuff, and one big collision almost certainly resulted in the formation of the Moon. A lot of other debris vaporised on impact, releasing gases, and substances trapped within the Earth started to escape from its crust. The result was Earth’s so-called second atmosphere.

An artist’s concept of the early Earth. Image credit: NASA. (Click image for more.)

This is where carbon dioxide enters stage left… er… stage under? Anyway, it was there, right at this early point, along with water vapour, nitrogen, and smaller amounts of other gases. (Note, no oxygen, that is, O2. Significant amounts of that didn’t turn up for another 1.7 billion years, or 2.8 billion years ago.) In fact, carbon dioxide wasn’t just there, it made up most of Earth’s atmosphere, probably not so different from Mars’s atmosphere today.

The point being that carbon dioxide is not a new phenomenon. It is, in fact, the very definition of an old phenomenon. It’s been around, well, pretty much forever. And so has the greenhouse effect. The early Earth was hot. Really hot. Possibly 200 °C or so, because these atmospheric gases trapped the Sun’s heat. Over time, lots and lots of time, the carbon dioxide levels reduced as it became trapped in carbonate rocks, dissolved in the oceans, and was utilised by lifeforms for photosynthesis.

Fast-forward a few billion years to the beginning of the twentieth century and atmospheric carbon dioxide levels were about 300 ppm (0.03%), tiny compared to oxygen (about 20%) and nitrogen (about 78%).

Chemists and carbon dioxide

Flemish chemist Jan Baptist van Helmont discovered that if he burned charcoal in a closed vessel, the mass of the resulting ash was much less than that of the original charcoal.

Let’s pause there for a moment and have a little look at some human endeavours. In about 1640 Flemish chemist Jan Baptist van Helmont discovered that if he burned charcoal in a closed vessel, the mass of the resulting ash was much less than that of the original charcoal. He had no way of knowing, then, that he had formed and collected carbon dioxide gas, but he speculated that some of the charcoal had been transmuted into spiritus sylvestris, or “wild spirit”.

In 1754 Scottish chemist Joseph Black noticed that heating calcium carbonate, aka limestone, produced a gas which was heavier than air and which could “not sustain fire or animal life”. He called it “fixed air”, and he’s often credited with carbon dioxide’s discovery, although arguably van Helmont got there first. Black was also the first person to come up with the “limewater test”, where carbon dioxide is bubbled through a solution of calcium hydroxide. He used the test to demonstrate that carbon dioxide was produced by respiration, an experiment still carried out in schools more than 250 years later to show that the air we breathe out contains more carbon dioxide than the air we breathe in.

In 1772 that most famous of English chemists, Joseph Priestley, experimented with dripping sulfuric acid (or vitriolic acid, as he knew it) on chalk to produce a gas which could be dissolved in water. Priestley is often credited with the invention of soda water as a result (more on this in a bit), although physician Dr William Brownrigg probably discovered carbonated water earlier – but he never published his work.

In the late 1700s carbon dioxide became more widely known as “carbonic acid gas”, as seen in this article dated 1853. In 1823 Humphry Davy and Michael Faraday managed to produce liquefied carbon dioxide at high pressures. Adrien-Jean-Pierre Thilorier was the first to describe solid carbon dioxide, in 1835. The name carbon dioxide was first used around 1869, when the term “dioxide” came into use.

A diagram from Priestley’s letter: “Impregnating Water with Fixed Air”. Printed for J. Johnson, No. 72, in St. Pauls Church-Yard, 1772. (Click image for paper)


Back to Priestley for a moment. In the late 1700s, a glass of volcanic spring water was a common treatment for digestive problems and general ailments. But what if you didn’t happen to live near a volcanic spring? Joseph Black, you’ll remember, had established that CO2 was produced by living organisms, so it occurred to Priestley that perhaps he could hang a vessel of water over a fermentation vat at a brewery and collect the gas that way.

But it wasn’t very efficient. As Priestley himself said, “the surface of the fixed air is exposed to the common air, and is considerably mixed with it, [and] water will not imbibe so much of it by the process above described.”

It was then that he tried his experiment with vitriolic acid, which allowed for much greater control over the carbonation process. Priestley proposed that the resulting “water impregnated with fixed air” might have a number of medical applications. In particular, perhaps because the water had an acidic taste in a similar way to lemon-infused water, he thought it might be an effective treatment for scurvy. Legend has it that he gave the method to Captain Cook for his second voyage to the Pacific for this reason. It wouldn’t have helped of course, but it does mean that Cook and his crew were some of the first people to produce carbonated water for the express purpose of drinking a fizzy drink.

Refreshing fizz

You will have noticed that, despite all his work, there is no fizzy drink brand named Priestley (at least, not that I know of).

Joseph Priestley is credited with developing the first method for making carbonated water.

But there is one called Schweppes. That’s because a German watchmaker named Johann Jacob Schweppe spotted Priestley’s paper and worked out a simpler, more efficient process, using sodium bicarbonate and tartaric acid. He went on to found the Schweppes Company in Geneva in 1783.

Today, carbonated drinks are made a little differently. You may have heard about carbon dioxide shortages this summer in the U.K. These arose because, these days, carbon dioxide is actually collected as a by-product of other processes – at the end of several bits of quite simple chemistry that add up to a really elegant sequence.

From fertiliser to fizzy drinks

It all begins, or more accurately ends, with ammonia fertiliser. As any GCSE science student who’s been even half paying attention can tell you, ammonia is made by reacting hydrogen with nitrogen in the Haber process. Nitrogen is easy to get hold of – as I’ve already said, it makes up nearly 80% of our atmosphere – but hydrogen has to be made from hydrocarbons, usually natural gas (methane).

This involves another well-known process, called steam reforming, in which steam is reacted with methane at high temperatures in the presence of a nickel catalyst. This produces carbon monoxide, a highly toxic gas. But no problem! React that carbon monoxide with more water in the presence of a slightly different catalyst and you get even more hydrogen. And some carbon dioxide.
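If you fancy checking the overall mass balance yourself, here’s a quick back-of-the-envelope sketch in Python. The molar masses are rounded, and complete conversion of the methane is my simplifying assumption – real plants are messier:

```python
# Overall chemistry described above:
#   steam reforming:  CH4 + H2O  -> CO  + 3 H2
#   water-gas shift:  CO  + H2O  -> CO2 + H2
#   net:              CH4 + 2 H2O -> CO2 + 4 H2
# Approximate molar masses in g/mol.
M_CH4, M_CO2, M_H2 = 16.04, 44.01, 2.016

def products_per_kg_methane():
    """kg of CO2 and H2 produced per kg of CH4, assuming complete conversion."""
    mol_ch4 = 1000 / M_CH4               # moles of CH4 in 1 kg
    kg_co2 = mol_ch4 * M_CO2 / 1000      # one CO2 per CH4
    kg_h2 = mol_ch4 * 4 * M_H2 / 1000    # four H2 per CH4
    return kg_co2, kg_h2

kg_co2, kg_h2 = products_per_kg_methane()
print(f"roughly {kg_co2:.2f} kg CO2 and {kg_h2:.2f} kg H2 per kg of methane")
```

So every kilogram of methane that goes in yields getting on for three kilograms of carbon dioxide – which is why there’s plenty of the stuff to capture.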

Fear not, nothing is wasted here! The CO2 is captured and liquefied for all sorts of food-related and industrial uses, not least fizzy drinks. This works well for all concerned because steam reforming produces large amounts of pure carbon dioxide – if you’re going to add it to food and drinks, after all, you wouldn’t want a product contaminated with other gases.

Carbon dioxide is a by-product of fertiliser manufacture.

We ended up with a problem this summer in the U.K. because ammonia production plants operate on a schedule which is linked to the planting season. Farmers don’t usually apply fertiliser in the summer – when they’re either harvesting or about to harvest crops – so many ammonia plants shut down for maintenance in April, May, and June. This naturally leads to a reduction in the amount of available carbon dioxide, but it’s not normally a problem because the downtime is relatively short and enough is produced during the rest of the year to keep manufacturers supplied.

This year, though, natural-gas prices were higher, while the price of ammonia stayed roughly the same. This meant that ammonia plants were in no great hurry to reopen, and that meant many didn’t start supplying carbon dioxide in July, just when a huge heatwave hit the UK, coinciding with the World Cup football (which tends to generate a big demand for fizzy pop, for some reason).

Which brings us back to our atmosphere…

Carbon dioxide calamity?

Isn’t there, you may be thinking, too much carbon dioxide in our atmosphere? In fact, that heatwave you just mentioned, wasn’t that a global warming thing? Can’t we just… extract carbon dioxide from our air and solve everyone’s problems? Well, yes and no. Remember earlier when I said that at the beginning of the twentieth century atmospheric carbon dioxide levels were about 300 ppm (0.03%)?

Over the last hundred years atmospheric carbon dioxide levels have increased from 0.03% to 0.04%

Today, a little over 100 years later, levels are about 0.04%. This is a significant increase in a relatively short period of time, but it’s still only a tiny fraction of our atmosphere (an important tiny fraction nonetheless – we’ll get to that in a minute).
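For anyone who likes to see the arithmetic laid out, the conversion is simple – here’s a tiny Python sketch using the figures from the text (1% = 10,000 ppm):

```python
# Converting atmospheric CO2 concentrations between percent and parts per million.
def percent_to_ppm(percent):
    return percent * 10_000  # 1% = 10,000 ppm

then_ppm = percent_to_ppm(0.03)   # early twentieth century
now_ppm = percent_to_ppm(0.04)    # today
rise = (now_ppm - then_ppm) / then_ppm * 100

print(f"{then_ppm:.0f} ppm then, {now_ppm:.0f} ppm now: a {rise:.0f}% rise")
```

A one-third increase in a single century – small in absolute terms, but very much not small in effect.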

It is possible to distil gases from our air by cooling the air down until it liquefies and then separating the different components by their boiling points. For example, nitrogen, N2, boils at a chilly −196 °C, whereas oxygen, O2, boils at a slightly less chilly −183 °C.

But there’s a problem: CO2 doesn’t have a liquid state at standard pressures. It forms a solid, which sublimes directly into a gas. For this reason carbon dioxide is usually removed from cryogenic distillation mixtures, because it would freeze solid and plug up the equipment. There are other ways to extract carbon dioxide from air but although they have important applications (keep reading) they’re not practical ways to produce large volumes of the gas for the food and drink industries.

Back to the environment for a moment: why is that teeny 0.04% causing us such headaches? How can a mere 400 CO2 molecules bouncing around with a million other molecules cause such huge problems?

For that, I need to take a little diversion to talk about infrared radiation, or IR.

Infrared radiation was first discovered by the astronomer William Herschel in 1800. He was trying to observe sun spots when he noticed that his red filter seemed to get particularly hot. In what I’ve always thought was a rather amazing intuitive leap, he then passed sunlight through a prism to split it, held a thermometer just beyond the red light that he could see with his eyes, and discovered that the thermometer showed a higher temperature than when placed in the visible spectrum.

He concluded that there must be an invisible form of light beyond the visible spectrum, and indeed there is: infrared light. It turns out that slightly more than half of the total energy from the Sun arrives on Earth in the form of infrared radiation.

What has this got to do with carbon dioxide? It turns out that carbon dioxide, or rather the double bonds O=C=O, absorb a lot of infrared radiation. By contrast, oxygen and nitrogen, which make up well over 90% of Earth’s atmosphere, don’t absorb infrared.

CO2 molecules also re-emit IR but, having bounced around a bit, not necessarily in the same direction and – and this is the reason that tiny amounts of carbon dioxide cause not so tiny problems – they transfer energy to other molecules in the atmosphere in the process. Think of each CO2 molecule as a drunkard stumbling through a pub, knocking over people’s pints and causing a huge bar brawl. A single disruptive individual can, indirectly, cause a lot of others to find themselves bruised and bleeding and wondering what the hell just happened.

Like carbon dioxide, water vapour also absorbs infrared, but it has a relatively short lifetime in our atmosphere.

Water vapour becomes important here too, because while O2 and N2 don’t absorb infrared, water vapour does. Water vapour has a relatively short lifetime in our atmosphere (about ten days, compared to a decade or more for carbon dioxide) so its overall warming effect is less. Except that once carbon dioxide is thrown into the mix it transfers extra heat to the water, keeping it vapour (rather than, say, precipitating as rain) for longer and pushing up the temperature of the system even more.

Basically, carbon dioxide molecules trap heat near the planet’s surface. This is why carbon dioxide is described as a greenhouse gas and increasing levels are causing global warming. There are people who are still arguing this isn’t the case, but truly, they’ve got the wrong end of the (hockey) stick.

It’s not even a new concept. Over 100 years ago, in 1912, a short piece was published in the Rodney and Otamatea Times which said: “The furnaces of the world are now burning about 2,000,000,000 tons of coal a year. When this is burned, uniting with oxygen, it adds about 7,000,000,000 tons of carbon dioxide to the atmosphere yearly. This tends to make the air a more effective blanket for the earth and to raise its temperature.”
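The 1912 arithmetic checks out, too. Burning carbon gives C + O2 → CO2, so each ton of carbon becomes roughly 44/12 ≈ 3.7 tons of carbon dioxide. Treating coal as pure carbon is my simplification, but as a rough check the numbers hold up:

```python
# Sanity-checking the 1912 figures: 2 billion tons of coal -> ~7 billion tons CO2.
coal_tons = 2_000_000_000
co2_per_carbon = 44.01 / 12.01   # molar-mass ratio of CO2 to C

co2_tons = coal_tons * co2_per_carbon
print(f"about {co2_tons:.1e} tons of CO2 per year")  # close to the quoted 7,000,000,000
```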

This summer has seen record high temperatures and some scientists have been warning of a “Hothouse Earth” scenario.

This 1912 piece suggested we might start to see effects in “centuries”. In fact, we’re seeing the results now. As I mentioned earlier, this summer has seen record high temperatures and some scientists have been warning of a “Hothouse Earth” scenario, where rising temperatures cause serious disruptions to ecosystems, society, and economies. The authors stressed it’s not inevitable, but preventing it will require a collective effort. They even published a companion document which included several possible solutions which, oddly enough, garnered rather fewer column inches than the “we’re all going to die” angle.

Don’t despair, DO something…

But I’m going to mention it, because it brings us back to CO2. There’s too much of it in our atmosphere. How can we deal with that? It’s simple really: first, stop adding more, i.e. stop burning fossil fuels. We have other technologies for producing energy. The reason we’re still stuck on fossil fuels at this stage is politics and money, and even the most obese of the fat cats are starting to realise that money isn’t much use if you don’t have a habitable planet. Well, most of them. (There’s probably no hope for some people, but we can at least hope that their damage-doing days are limited.)

There are some other, perhaps less obvious, sources of carbon dioxide and other greenhouse gases that might also be reduced, such as livestock, cement for building materials and general waste.

Forests trap carbon dioxide in land carbon sinks. More biodiverse systems generally store more carbon.

And then, we’re back to taking the CO2 out of the atmosphere. How? Halting deforestation would allow more CO2 to be trapped in so-called land carbon sinks. Likewise, good agricultural soil management helps to trap carbon underground. More biodiverse systems generally store more carbon, so if we could try to stop wiping out land and coastal systems, that would be groovy too. Finally, there’s the technological solution: carbon capture and storage, or CCS.

This, in essence, involves removing CO2 from the atmosphere and storing it in geological formations. The same thing the Earth has done for millennia, but more quickly. It can also be linked to bio-energy production in a process known as BECCS. It sounds like the perfect solution, but right now it’s energy intensive and expensive, and there are concerns that BECCS projects could end up competing with agriculture and damaging conservation efforts.

A new answer from an ancient substance?

Forming magnesite, or magnesium carbonate, may be one way to trap carbon dioxide.

Some brand new research might offer yet another solution. It’s another carbon-capture technology which involves magnesium carbonate, or magnesite (MgCO3). Magnesite forms slowly on the Earth’s surface, over hundreds of thousands of years, trapping carbon dioxide in its structure as it does.

It can easily be made quickly at high temperatures, but of course if you have to heat things up, you need energy, which might end up putting as much CO2 back in as you’re managing to take out. Recently a team of researchers at Trent University in Canada have found a way to form magnesite quickly at room temperature using polystyrene microspheres.

This isn’t something which would make much difference if, say, you covered the roof of everyone’s house with the microspheres, but it could be used in fuel-burning power generators (which could be burning renewables or even waste materials) to effectively scrub the carbon dioxide from their emissions. That technology on its own would make a huge difference.

And so here we are. Carbon dioxide is one of the oldest substances there is, as “natural” as they come. From breathing to fizzy drinks to our climate, it’s entwined in every aspect of our everyday existence. It is both friend and foe. Will we work out ways to save ourselves from too much of it in our atmosphere? Personally, I’m optimistic, so long as we support scientists and engineers rather than fight them…


Like the Chronicle Flask’s Facebook page for regular updates, or follow @chronicleflask on Twitter. All content is © Kat Day 2018. You may share or link to anything here, but you must reference this site if you do.

If you enjoy reading my blog, please consider buying me a coffee (I promise to use a reusable cup) through Ko-fi using the button below.
Buy Me a Coffee at ko-fi.com


No need for slime panic: it’s not going to poison anyone

This is one of my favourite photos, so I’m using it again.

The school summer holidays are fast approaching and, for some reason, this always seems to get people talking about slime. Whether it’s because it’s a fun end-of-term activity, or it’s an easy bit of science for kids to do at home, or a bit of both, the summer months seem to love slimy stories. In fact, I wrote a piece about it myself in August 2017.

Which (hoho) brings me to the consumer group Which? because, on 17th July this year, they posted an article with the headline: “Children’s toy slime on sale with up to four times EU safety limit of potentially unsafe chemical” and the sub-heading: “Eight out of 11 popular children’s slimes we tested failed safety testing.”

The article is illustrated with lots of pots of colourful commercial slime with equally colourful names like Jupiter Juice. It says that, “exposure to excessive levels of boron could cause irritation, diarrhoea, vomiting and cramps in the short term,” and goes on to talk about possible risks of birth defects and developmental delays. Yikes. Apparently the retailer Amazon has removed several slime toys from sale since Which? got on the case.

The piece was, as you might expect, picked up by practically every news outlet there is, and within hours the internet was full of headlines warning of the dire consequences of handling multicoloured gloopy stuff.

Before I go any further, here’s a quick reminder: most slime is made by taking polyvinyl alcohol (PVA – the white glue stuff) and adding a borax solution, aka sodium tetraborate, which contains the element boron. The sodium tetraborate forms cross-links between the PVA polymer chains, and as a result you get viscous, slimy slime in place of runny, gluey stuff. Check out this lovely graphic created by @compoundchem for c&en’s Periodic Graphics:

The Chemistry of Slime from cen.acs.org (click image for link), created by Andy Brunning of @compoundchem

And so, back to the Which? article. Is the alarm justified? Should you ban your child from ever going near slime ever again?

Nah. Followers will remember that back in August last year, after I posted my own slime piece, I had a chat with boron-specialist David Schubert. He said at the time: “Borax has been repeated[ly] shown to be safe for skin contact. Absorption through intact skin is lower than the B consumed in a healthy diet” (B is the chemical symbol for the element boron). And then he directed me to a research paper backing up his comments.

Borax is a fine white powder. Mixed with water, it can be used to make slime.

This, by the way, is all referring to the chemical borax – which you might use if you’re making slime. In pre-made slime the borax has chemically bonded with the PVA, and that very probably makes it even safer – because it’s then even more difficult for any boron to be absorbed through skin.

Of course, and this really falls under the category of “things no one should have to say,” don’t eat slime. Don’t let your kids eat slime. Although even if they did, the risks are really small. As David said when we asked this time: “Borates have low acute toxicity. Consumption of the amount of borax present in a handful of slime would make one sick to their stomach and possibly cause vomiting, but no other harm would result. The only way [they] could harm themselves is by eating that amount daily.”

It is true that borax comes with a “reproductive hazard” warning label. Which? pointed out in their article that there is EU guidance on safe boron levels, and the permitted level in children’s toys has been set at 300 mg/kg for liquids and sticky substances (Edited 18th July, see * in Notes section below).

EU safety limits are always very cautious – an additional factor of at least 100 is usually incorporated. For example, if 1 g/kg exposure of a substance is considered safe, the EU limit is likely to be set at 0.01 g/kg – so as to make sure that even someone who’s really going to town with a thing would be unlikely to suffer negative consequences as a result.
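In code form, the idea is almost embarrassingly simple (a trivial Python sketch with made-up numbers, purely to illustrate the safety-factor arithmetic):

```python
# How a regulatory limit is derived from a "no observed effect" exposure level
# using an uncertainty (safety) factor. Illustrative numbers only.
def regulatory_limit(no_effect_level, safety_factor=100):
    return no_effect_level / safety_factor

# If 1 g/kg showed no ill effect, the limit might be set around 0.01 g/kg:
print(regulatory_limit(1.0, 100))
```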

The boron limit is particularly cautious and is based on animal studies (and it has been challenged). The chemists I spoke to told me it’s not representative of the actual hazards. Boron chemist Beth Bosley pointed out that while it is true that boric acid exposure has been shown to cause fetal abnormalities when it’s fed to pregnant rats, this finding hasn’t been reproduced in humans. Workers handling large quantities of borate in China and Turkey have been studied and no reproductive effects have been seen.

Rat studies, she said, aren’t wholly comparable because rats are unable to vomit, which is significant because it means a rat can be fed a large quantity of a boron-containing substance and it’ll stay in their system. Whereas a human who accidentally ingested a similar dose would almost certainly throw up. Plus, again, this is all based on consuming substances such as borax, not slime where the boron is tied up in polymer chains. There really is no way anyone could conceivably eat enough slime to absorb these sorts of amounts.

These arguments aside, we all let our children handle things that might be harmful if they ate them. Swallowing a whole tube of toothpaste would probably give your child an upset stomach, and it could even be dangerous if they did it on a regular basis, but we haven’t banned toothpaste “just in case”. We keep it out of reach when they’re not supposed to be brushing their teeth, and we teach them not to do silly things like eating an entire tube of Oral-B. Same basic principle applies to slime, even if it does turn out to contain more boron than the EU guidelines permit.

In conclusion: pots of pre-made slime are safe, certainly from a borax/boron point of view, so long as you don’t eat them. The tiny amounts of boron that might be absorbed through skin are smaller than the amounts you’d get from eating nuts and pulses, and not at all hazardous.

Making slime at home can also be safe, if you follow some sensible guidelines like, say, these ones:

Stay safe with slime by following this guidance

Slime on, my chemistry-loving friends!


Notes:
* When I looked for boron safety limits the first time, the only number I could find was the rather higher 1200 mg/kg. So I asked Twitter if anyone could direct me to the value Which? were using. I was sent a couple of links, one of which contained a lot of technical documentation, but I think the most useful is probably a “guide to international toy safety” pamphlet which includes a “Soluble Element Migration Requirements” table. In the row for boron, under “Category II: Liquid or sticky materials”, the value is indeed given as 300 mg/kg.

BUT, there is also “Category I: Dry, brittle, powder like or pliable materials” and the value there is the much higher 1,200 mg/kg. Which begs the question: does slime count as “pliable” or “sticky”? It suggests to me that, say, a modelling clay product (pliable) would have the 4x higher limit. But surely the risk of exposure would be essentially the same? If 1,200 mg/kg is okay for modelling clay, I can’t see why it shouldn’t be for slime. In the Which? testing, only the Jupiter Juice product exceeded the Category I limit, and then not by that much (1,400 mg/kg).

Also (the notes are going to end up being longer than the post if I’m not careful), these values are migration limits, not limits on the amount allowed in the substance in total. Can anyone show that more than 300 mg/kg is able to migrate from the slime to the person handling it? Very unlikely. But again, don’t eat slime.

This is not an invitation to try and prove me wrong.

I suppose it’s possible that someone could sell slime that’s contaminated with some other toxic thing. But that could happen with anything. The general advice to “wash your/their hands and don’t eat it” will take you a long way.



If you enjoy reading my blog, please consider buying me a coffee (I’ll probably blow it on a really big bottle of PVA glue) through Ko-fi using the button below.

Spectacular Strawberry Science!

Garden strawberries

Yay! It’s June! Do you know what that means, Chronicle Flask readers? Football? What do you mean, football? Who cares about that? (I jest – check out this excellent post from Compound Interest).

No, I mean it’s strawberry season in the U.K.! That means there will be much strawberry eating, because the supermarkets are full of very reasonably-priced punnets. There will also be strawberry picking, as we tramp along rows selecting the very juiciest fruits (and eating… well, just a few – it’s part of the fun, right?).

Is there any nicer fruit than these little bundles of red deliciousness? Surely not. (Although I do also appreciate a ripe blackberry.)

And as if their lovely taste weren’t enough, there’s loads of brilliant strawberry science, too!

This is mainly (well, sort of, mostly, some of the time) a chemistry blog, but the botany and history aspects of strawberries are really interesting too. The woodland strawberry (Fragaria vesca) was the first to be cultivated in the early 17th century, although strawberries have of course been around a lot longer than that. The word strawberry is thought to come from ‘streabariye’ – a term used by the Benedictine monk Aelfric in CE 995.

Woodland strawberries

Woodland strawberries, though, are small and round: very different from the large, tapering fruits we tend to see in shops today (their botanical name is Fragaria × ananassa – the ‘ananassa’ bit meaning pineapple, referring to their sweet scent and flavour).

The strawberries we’re most familiar with were actually bred from two other varieties. That means that modern strawberries are, technically, a genetically modified organism. But no need to worry: practically every plant we eat today is.

Of course, almost everyone’s heard that strawberries are not, strictly, a berry. It’s true; technically strawberries are what’s known as an “aggregate accessory” fruit, which means that they’re formed from the receptacle (the thick bit of the stem where flowers emerge) that holds the ovaries, rather than from the ovaries themselves. But it gets weirder. Those things on the outside that look like seeds? Not seeds. No, each one is actually an ovary, with a seed inside it. Basically strawberries are plant genitalia. There’s something to share with Grandma over a nice cup of tea and a scone.

Anyway, that’s enough botany. Bring on the chemistry! Let’s start with the bright red colour. As with most fruits, that colour comes from anthocyanins – water-soluble molecules which are odourless, moderately astringent, and brightly-coloured. They’re formed from the reaction of similar-sounding molecules called anthocyanidins with sugars. The main anthocyanin in strawberries is callistephin, otherwise known as pelargonidin-3-O-glucoside. It’s also found in the skin of certain grapes.

Anthocyanins are fun for chemists because they change colour with pH. It’s these molecules which are behind the famous red-cabbage indicator. Which means, yes, you can make strawberry indicator! I had a go myself, the results are below…

Strawberry juice acts as an indicator: pinky-purplish in an alkaline solution, bright orange in an acid.

As you can see, the strawberry juice is pinky-purplish in the alkaline solution (sodium hydrogen carbonate, aka baking soda, about pH 9), and bright orange in the acid (vinegar, aka acetic acid, about pH 3). Next time you find a couple of mushy strawberries that don’t look so tasty, don’t throw them away – try some kitchen chemistry instead!

Pelargonidin-3-O-glucoside is the anthocyanin which gives strawberries their red colour. This is the form found at acidic pHs.

The reason we see this colour-changing behaviour is that the anthocyanin pigment gains an -OH group at alkaline pHs, and loses it at acidic pHs (as in the diagram here).

This small change is enough to alter the wavelengths of light absorbed by the compound, so we see different colours. The more green light that’s absorbed, the more pink/purple the solution appears. The more blue light that’s absorbed, the more orange/yellow we see.

Interestingly, anthocyanins behave slightly differently to most other pH indicators, which usually acquire a proton (H+) at low pH, and lose one at high pH.

Moving on from colour, what about the famous strawberry smell and flavour? That comes from furaneol, which is sometimes called strawberry furanone or, less romantically, DMHF. It’s the same compound which gives pineapples their scent (hence that whole Latin ananassa thing I mentioned earlier). The concentration of furaneol increases as the strawberry ripens, which is why they smell stronger.

Along with menthol and vanillin, furaneol is one of the most widely-used compounds in the flavour industry. Pure furaneol is added to strawberry-scented beauty products to give them their scent, but only in small amounts – at high concentrations it has a strong caramel-like odour which, I’m told, can actually smell quite unpleasant.

As strawberries ripen their sugar content increases, they get redder, and they produce more scent

As strawberries ripen their sugar content (a mixture of fructose, glucose and sucrose) also changes, increasing from about 5% to 9% by weight. This change is driven by auxin hormones such as indole-3-acetic acid. At the same time, acidity – largely from citric acid – decreases.

Those who’ve been paying attention might be putting a few things together at this point: as the strawberry ripens, it becomes less acidic, which helps to shift its colour from more green-yellow-orange towards those delicious-looking purpleish-reds. It’s also producing more furaneol, making it smell yummy, and its sugar content is increasing, making it lovely and sweet. Why is all this happening? Because the strawberry wants (as much as a plant can want) to be eaten, but only once it’s ripe – because that’s how its seeds get dispersed. Ripening is all about making the fruit more appealing – redder, sweeter, and nicer-smelling – to things that will eat it. Nature’s clever, eh?

There we have it: some spectacular strawberry science! As a final note, as soon as I started writing this I (naturally) found lots of other blogs about strawberries and summer berries in general. They’re all fascinating. If you want to read more, check out…


Like the Chronicle Flask’s Facebook page for regular updates, or follow @chronicleflask on Twitter. All content is © Kat Day 2018. You may share or link to anything here, but you must reference this site if you do.

If you enjoy reading my blog, please consider buying me a coffee (I might spend it on an extra punnet of strawberries, mind you) through Ko-fi using the button below.

Where did our love of dairy come from?

The popularity of the soya latte seems to be on the rise.

A little while ago botanist James Wong tweeted about the myriad types of plant ‘milk’ that are increasingly being offered in coffee shops, none of which are truly milk (in the biological sense).

This generated a huge response, probably rather larger than he was expecting from an off-hand tweet. Now, I’m not going to get into the ethics of milk production because it’s beyond the scope of this blog (and let’s keep it out of the comments? — kthxbye) but I do want to consider one fairly long thread of responses which ran the gamut from ‘humans are the only species to drink the milk of another animal’ (actually, no) to ‘there’s no benefit to dairy’ (bear with me) and ending with, in essence, ‘dairy is slowly killing us‘ (complicated, but essentially there’s very little evidence of any harm).

Humans have been consuming dairy products for thousands of years.

But wait. If dairy is so terrible for humans, and if there are no advantages to it, why do we consume it at all? Dairy is not a new thing. Humans have been consuming foods made from one type of animal milk or another for 10,000 years, give or take. That’s really quite a long time. More to the point (I don’t want to be accused of appealing to antiquity, after all), keeping animals and milking them is quite resource intensive. You have to feed them, look after them and ensure they don’t wander off or get eaten by predators, not to mention actually milk them on a daily basis. All that takes time, energy and probably currency of some sort. Why would anyone bother, if dairy were truly detrimental to our well-being?

In fact, some cultures don’t bother. The ability to digest lactose (the main sugar in milk) beyond infancy is quite low in some parts of the world, specifically Asia and most of Africa. In those areas dairy is, or at least has been historically, not a significant part of people’s diet.

But it is in European diets. Particularly northern European diets. Northern Europeans are, generally, extremely tolerant of lactose into adulthood and beyond.

Which is interesting because it suggests, if you weren’t suspicious already, that there IS some advantage to consuming dairy. The ability to digest lactose seems to be a genetic trait. And it seems it’s something to do, really quite specifically, with your geographic location.

Which brings us to vitamin D. This vitamin, which is more accurately described as a hormone, is a crucial nutrient for humans. It increases absorption of calcium, magnesium and phosphate, which are all necessary for healthy bones (not to mention lots of other processes in the body). It’s well-known that a lack of vitamin D leads to weakened bones, and specifically causes rickets in children. More recently we’ve come to understand that vitamin D also supports our immune system; deficiency has been meaningfully linked to increased risk of certain viral infections.

What’s the connection between vitamin D and geographic location? Well, humans can make vitamin D in their skin, but we need a bit of help. In particular, and this is where the chemistry comes in, we need ultraviolet light. Specifically, UVB – light with wavelengths between 280 nm and 315 nm. When our skin is exposed to UVB, a substance called 7-dehydrocholesterol (7-DHC to its friends) is converted into previtamin D3, which is then changed by our body heat to vitamin D3, or cholecalciferol – which is the really good stuff. (There’s another form, vitamin D2, but this is slightly less biologically active.) At this point the liver and kidneys take over and activate the cholecalciferol via the magic of enzymes.

We make vitamin D in our skin when we’re exposed to UVB light.

How much UVB you’re exposed to depends on where you live. If you live anywhere near the equator, no problem. You get UVB all year round. Possibly too much, in fact – it’s also linked with skin cancers. But if you live in northerly latitudes (or very southerly), you might have a problem. In the summer months, a few minutes in the sun without sunscreen (literally a few minutes, not hours!) will produce more than enough vitamin D. But people living in the UK, for example, get no UVB exposure for 6 months of the year. Icelanders go without for 7, and inhabitants of Tromsø, in Norway, have to get by for a full 8 months. Since we can only store vitamin D in our bodies for something like 2-4 months (I’ve struggled to find a consistent number for this, but everyone seems to agree it’s in this ballpark), that potentially means several months with no vitamin D at all, which could lead to deficiency.
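Putting those figures together, here’s a minimal sketch. The month counts are the ones quoted above; the three-month body store of vitamin D is an assumption (the middle of that 2-4 month ballpark):

```python
# Months per year with no vitamin-D-producing UVB (figures from the text)
months_without_uvb = {"UK": 6, "Iceland": 7, "Tromsø, Norway": 8}

VITAMIN_D_STORE_MONTHS = 3  # assumed: middle of the 2-4 month range

for place, months in months_without_uvb.items():
    gap = max(0, months - VITAMIN_D_STORE_MONTHS)
    print(f"{place}: roughly {gap} months a year with stores run dry")
```

Even on this generous reading, that’s a few months a year with nothing coming in and nothing left in reserve.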

In the winter northern Europeans don’t receive enough UVB light from the sun to produce vitamin D in their skin.

In the winter, northern Europeans simply can’t make vitamin D3 in their skin (and for anyone thinking about sunbeds, that’s a bad idea for several reasons). In 2018, this is easily fixed – you just take a supplement. For example, Public Health England recommends that Brits take a daily dose of 10 mcg (400 IU) of vitamin D in autumn and winter, i.e. between about October and March. It’s worth pointing out at this point that a lot of supplements you can buy contain much more than this, and more isn’t necessarily better. Vitamin D is fat-soluble and so it will build up in the body, potentially reaching toxic levels if you really overdo things. Check your labels.
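A quick note on units, since supplement labels mix them freely: for vitamin D, 1 µg is equivalent to 40 IU (international units), which is how 10 mcg becomes 400 IU. A small conversion helper:

```python
IU_PER_MICROGRAM = 40  # for vitamin D: 1 microgram = 40 IU

def vit_d_mcg_to_iu(micrograms):
    """Convert a vitamin D dose in micrograms to international units."""
    return micrograms * IU_PER_MICROGRAM

def vit_d_iu_to_mcg(iu):
    """Convert a vitamin D dose in international units to micrograms."""
    return iu / IU_PER_MICROGRAM

print(vit_d_mcg_to_iu(10))  # 400 -- the recommended autumn/winter dose
```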

Oily fish is an excellent source of vitamin D.

But what about a few thousand years ago, before you could just pop to the supermarket and buy a bottle of small tablets? What did northern Europeans do then? The answer is simple: they had to get vitamin D from their food. Even if it’s not particularly well-absorbed, it’s better than nothing.

Of course it helps if you have access to lots of foods which are sources of vitamin D. Which would be… fatty fish (tuna, mackerel, salmon, etc.) – suddenly that northern European love of herring makes so much more sense – red meat, certain types of liver, egg yolks and, yep, dairy products. Dairy products, in truth, contain relatively low levels of vitamin D (cheese and butter are better than plain milk), but every little helps. Plus, they’re also a good source of calcium, which works alongside vitamin D and is, of course, really important for good bone health.

A side note for vegans and vegetarians: most dietary sources of vitamin D come from animals. Certain mushrooms grown under UV can be a good source of vitamin D2, but unless you’re super-careful a plant-based diet won’t provide enough of this nutrient. So if you live somewhere in the north, or you don’t, or can’t, expose your skin to the sun very often, you need a supplement (vegan supplements are available).

Fair skin likely emerged because it allows for better vitamin D production when UVB levels are lower.

One thing I haven’t mentioned of course is skin-colour. Northern Europeans are generally fair-skinned, and this is vitamin D-related, too. The paler your skin, the better UVB penetrates it. Fair-skinned people living in the north had an advantage over those with darker skin in the winter, spring and autumn months: they could produce more vitamin D. In fact, this was probably a significant factor in the evolution of fair skin (although, as Ed Yong explains in this excellent article, that’s complicated).

In summary, consuming dairy does have advantages, at least historically. There’s a good reason Europeans love their cheeses. But these days, if you want to eat a vegan or vegetarian diet for any reason (once again, let’s not get into those reasons in comments, kay?) you really should take a vitamin D supplement. In fact, Public Health England recommends that everyone in the UK take a vitamin D supplement in the autumn and winter, but only a small amount – check your dose.

By the way, if you spot any ‘diary’s let me know. I really had to battle to keep them from sneaking in…


If you enjoy reading my blog, please consider buying me a coffee through Ko-fi using the button below. Black, though – I like dairy, just not in my coffee!


Chemical du jour: how bad is BPA, really?

BPA is an additive in many plastics

When I was writing my summary of 2017 I said that there would, very probably, be some sort of food health scare at the start of 2018. It’s the natural order of things: first we eat and drink the calorie requirement of a small blue whale over Christmas and New Year, and then, lo, we must be made to suffer the guilt in January. By Easter, of course, it’s all forgotten and we can cheerfully stuff ourselves with chocolate eggs.

Last year it was crispy potatoes, and the year before that it was something ridiculous about sugar in ketchup causing cancer (it’s the same sugar that’s in everything, why ketchup? Why?). This year, though, it seems that the nasty chemical of the day is not something that’s in our food so much as around it.

Because this year the villain of the piece appears to be BPA, otherwise known as Bisphenol A or, to give it its IUPAC name, 4,4′-(propane-2,2-diyl)diphenol.

BPA is an additive in plastics. At the end of last year an excellent documentary aired on the BBC called Blue Planet II, all about our planet’s oceans. It featured amazing, jaw-dropping footage of wildlife. It also featured some extremely shocking images of plastic waste, and the harm it causes.

Plastic waste is a serious problem

Plastic waste, particularly plastic waste which is improperly disposed of and consequently ends up in the wrong place, is indisputably something that needs to be addressed. But this highlighting of the plastic waste problem had an unintended consequence: where was the story going to go? Everyone is writing about how plastic is bad, went (I imagine) editorial meetings in offices around the country – find me a story showing that plastic is even WORSE than we thought!

Really, it was inevitable that a ‘not only is plastic bad for the environment, but it’s bad for you, too!’ theme was going to emerge. It started, sort of, with a headline in The Sun newspaper: “Shopping receipts could ‘increase your cancer risk’ – as 93% contain dangerous chemicals also linked to infertility”. Shopping receipts are, of course, not made of plastic – but the article’s sub-heading stated that “BPA is used to make plastics”, so the implication was clear enough.

Then the rather confusing: “Plastic chemical linked to male infertility in majority of teenagers, study suggests” appeared in The Telegraph (more on this in a bit), and the whole thing exploded. Search for BPA in Google News now and there is everything from “5 Ways to Reduce Your Exposure to Toxic BPA” to “gender-bending chemicals found in plastic and linked to breast and prostate cancer are found in 86% of teenagers”.

Yikes. It’s all quite scary. It’s true that right now you can’t really avoid plastic. Look around you and it’s likely that you’ll immediately see lots of plastic objects, and that’s before you even try to consider all the everyday things which have plastic coatings that aren’t immediately obvious. If you have young children, you’re probably drowning in plastic toys, cups, plates and bottles. We’re pretty much touching plastic continually throughout our day. How concerned should we be?

As the Hitchhiker’s Guide to the Galaxy says, Don’t Panic. Plastic (like planet Earth in the Guide) can probably be summed up as mostly harmless, at least from a BPA point of view if not an environmental one.

BPA is a rather pleasingly symmetrical molecule with two phenol groups. (A big model of this would make a wonderfully ironic pair of sunglasses, wouldn’t it?) It was first synthesized by the Russian chemist Alexander Dianin in the late 19th century. It’s made by reacting acetone – which is where the “A” in the name comes from – with two phenol molecules. It’s actually a very simple reaction, although the product does need to be carefully purified, since large amounts of phenol are used to ensure a good yield.

It’s been used commercially since the fifties, and millions of tonnes of BPA are now produced worldwide each year. BPA is used to make plastics which are clear and tough – two characteristics which are often valued, especially for things like waterproof coatings, bottles and food containers.

The concern is that BPA is an endocrine disruptor, meaning that it interferes with hormone systems. In particular, it’s a known xenoestrogen, in other words it mimics the female hormone estrogen. Animal studies have suggested possible links to certain cancers, infertility, neurological problems and other diseases. A lot of the work is fairly small-scale and, as I’ve mentioned, focused on animal studies (rather than looking directly at effects in humans). Where humans have been studied it’s usually been populations that are exposed to especially high BPA levels (epoxy resin painters, for example). Still, it builds up into quite a damning picture.

BPA has been banned from baby bottles in many countries, including the USA and Europe

Of course, we don’t normally eat plastic, but BPA can leach from the plastic into the food or drink that’s in the plastic, and much more so if the plastic is heated. Because of these concerns, BPA has been banned from baby bottles (which tend to be heated, both for sterilisation and to warm the milk) in several countries, including the whole of Europe, for some years now. “BPA free” labels are a fairly common sight on baby products these days. BPA might also get onto our skin from, for example, those thermal paper receipts The Sun article mentioned, and then into our mouths when we eat. Our bodies break down and excrete the chemical fairly quickly, in as little as 6 hours, but because it’s so common in our environment most of us are continually meeting new sources of it.

How much are we getting, though? This is a critical question, because as I’m forever saying, the dose makes the poison. Arsenic is a deadly poison at high levels, but most of us – were we to undergo some sort of very sensitive test – would probably find we have traces of it in our systems, because it’s a naturally-occurring element. It’s nothing to worry about, unless for some reason the levels become too high.

When it comes to BPA, different countries have different guidelines. The European Food Safety Authority recommended in January 2015 that the TDI (tolerable daily intake) should be reduced from 50 to 4 µg/kg body weight/day (there are plans for a new assessment in 2018, so it might change again). For a 75 kg adult, that translates to about 0.0003 g per day. A US Food and Drug Administration (FDA) document from 2014 suggests a NOAEL (no-observed-adverse-effect-level) of 5 mg/kg bw/day, which translates to 0.375 g per day for the same 75 kg adult. NOAEL values are usually much higher than TDIs, so these two figures aren’t as incompatible as they might appear. Tolerable daily intake values tend to have a lot of additional “just in case” tossed into them – being rather more guidance than science.
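The arithmetic behind those per-day figures is just the guideline dose multiplied by body weight, plus a unit conversion; a quick sketch using the numbers above:

```python
BODY_WEIGHT_KG = 75

# EFSA tolerable daily intake: 4 micrograms per kg body weight per day
tdi_ug_per_day = 4 * BODY_WEIGHT_KG          # 300 micrograms
tdi_g_per_day = tdi_ug_per_day / 1_000_000   # 0.0003 g

# FDA NOAEL: 5 milligrams per kg body weight per day
noael_mg_per_day = 5 * BODY_WEIGHT_KG        # 375 milligrams
noael_g_per_day = noael_mg_per_day / 1000    # 0.375 g

print(tdi_g_per_day, noael_g_per_day)  # 0.0003 0.375
```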

The European Food Standards Authority published a detailed review of the evidence in 2015 (click for a summary)

So, how much BPA are we exposed to? I’m going to stick to Europe, because that’s where I’m based (for now…), and trying to look at all the different countries is horribly complicated. Besides, EFSA produced a really helpful executive summary of their findings in 2015, which makes it much easier to find the pertinent information.

The key points are these: most of our exposure comes from food. Infants, children and adolescents have the highest dietary exposures to BPA, probably because they eat and drink more per kilogram of body weight. The estimated average was 0.375 µg/kg bw per day.  For adult women the estimated average was 0.132 µg/kg bw per day, and for men it was 0.126 µg/kg bw per day.

When it came to thermal paper and other non-dietary exposure (mostly from dust, toys and cosmetics), the numbers were smaller, but the panel admitted there was a fair bit of uncertainty here. The total exposure from all sources was somewhere in the region of 1 µg/kg bw per day for all the age groups, with adolescents and young children edging more toward values of 1.5 µg/kg bw per day (this will be important in a minute).

Note that all of these numbers are significantly less than the, conservative, tolerable daily intake value of 4 µg/kg bw per day recommended by EFSA.
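To put the estimates side by side with the TDI, here’s a rough comparison. The figures are rounded versions of the EFSA estimates quoted above (the “adults” value in particular is an approximation of “somewhere in the region of 1”):

```python
TDI = 4.0  # EFSA tolerable daily intake, micrograms/kg bw per day

# Estimated total exposure (diet plus other sources), rounded figures
total_exposure = {
    "young children": 1.5,
    "adolescents": 1.5,
    "adults": 1.0,
}

for group, dose in total_exposure.items():
    print(f"{group}: about {dose / TDI:.0%} of the TDI")
```

Even the highest estimate sits well under half of the (already conservative) tolerable daily intake.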

Here’s the important bit: the panel concluded that there is “no health concern for BPA at the estimated levels of exposure” as far as diet goes. They also said that this applied “to prenatally exposed children” (in other words, one less thing for pregnant women to worry about).

When it came to total exposure, i.e. diet and exposure from other sources such as thermal paper they concluded that “the health concern for BPA is low at the estimated levels of exposure”.

The factsheet that was published alongside the full document summarises the results as follows: “BPA poses no health risk to consumers because current exposure to the chemical is too low to cause harm.”

Like I said: Don’t Panic.

What about those frankly quite terrifying headlines? Well, firstly The Sun article was based on some work conducted on a grand total of 208 receipts collected in Southeast Michigan in the USA from only 39 unique business locations. That’s a pretty small sample and not, I’d suggest, perhaps terribly relevant to the readership of a British newspaper. Worse, the actual levels of BPA weren’t measured in the large majority of samples – they only tested to see if it was there, not how much was there. There was nothing conclusive at all to suggest that the levels in the receipts might be enough to “increase your cancer risk”. All in all, it was pretty meaningless. We already knew there was BPA in thermal receipt paper – no one was hiding that information (it’s literally in the second paragraph of the Wikipedia page on BPA).

The Telegraph article, and the many others it appeared to spawn, also weren’t based on especially rigorous work and, worse, totally misrepresented the findings in any case. Firstly, let’s consider that headline: “Plastic chemical linked to male infertility in majority of teenagers, study suggests”. What does that mean? Are they suggesting that teenagers are displaying infertility? No, of course not. They didn’t want to put “BPA” in the headline because that, apparently, would be too confusing for their readers. So instead they’ve replaced “BPA” with “plastic chemical linked to male infertility”, which is so much more straightforward, isn’t it?

And they don’t mean it’s linked to infertility in the majority of teenagers, they mean it’s linked to infertility and it’s in the majority of teenagers’ bodies. I do appreciate that journalists rarely write headlines – this isn’t a criticism of the poor writer who turned in perfectly good copy – but that is confusing and misleading headline-writing of the highest order. Ugh.

Plus, as I commented back there, that wasn’t even the conclusion of the study, which was actually an experiment carried out by students under the supervision of a local university. The key finding was not that, horror, teenagers have BPA in their bodies. The researchers assumed that almost all of the teenagers would have BPA in their bodies – as the EFSA report showed, most people do. No, the conclusion was actually that the teenagers – 94 of them – had been unable to significantly reduce their levels of BPA by changing their diet and lifestyle, although the paper admits the conditions weren’t well-controlled. Basically, they asked a group of 17-19 year-olds to avoid plastic, and worked on the basis that their account of doing so was accurate.

And how much did the teenagers have in their samples? The average was 1.22 ng/ml, in urine samples (ng = nanogram). Now, even if we assume that these levels apply to all human tissue (which they almost certainly don’t) and that therefore the students had roughly 1.22 ng per gram of body weight, that only translates to, very approximately, 1.22 micrograms (µg) per kilogram of body weight.
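The unit-juggling is easier to see written out. A sketch, making the same deliberately crude assumptions as above (urine concentration standing in for the whole body, and a density of roughly 1 g/ml):

```python
ng_per_ml = 1.22   # average BPA concentration in the urine samples

# At ~1 g/ml, 1 ng/ml is roughly 1 ng per gram of body weight
ng_per_g = ng_per_ml

# ng -> micrograms divides by 1000; g -> kg multiplies by 1000,
# so the two conversion factors cancel out exactly
ug_per_kg = ng_per_g * 1000 / 1000

print(ug_per_kg)  # 1.22 micrograms per kilogram of body weight
```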

Wait a second… what did EFSA say again…. ah yes, they estimated total exposures of 1.449 µg/kg bw per day for adolescents.

Sooooo basically a very similar value, then? And the EFSA, after looking at multiple studies in painstaking detail, concluded that “BPA poses no health risk to consumers”.

Is this grounds for multiple hysterical, fear-mongering headlines? I really don’t think it is.

It is interesting that the teenagers were unable to reduce their BPA levels. Because it’s broken down and excreted quite quickly by the body, you might expect that reducing exposure would have a bigger effect – but really all we can say here is that this needs to be repeated with far more tightly-controlled conditions. Who knows what the students did, and didn’t, actually handle and eat. Perhaps their school environment contains high levels of BPA in dust for some reason (new buildings or equipment, maybe?), and so it was virtually impossible to avoid. Who knows.

In summary, despite the scary headlines there really is no need to worry too much about BPA from plastics or receipts. It may be worth avoiding heating plastic, since we know that increases the amount of BPA that makes its way into food – although it’s important to stress that there’s no evidence that microwaving plastic containers causes levels to be above safe limits. Still, if you wanted to be cautious you could choose to put food into a ceramic or glass bowl, covered with a plate rather than clingfilm. It’ll save you money on your clingfilm bills anyway, and it means less plastic waste, which is no bad thing.

Roll on Easter…




All comments are moderated. Abusive comments will be deleted, as will any comments referring to posts on this site which have had comments disabled.

Puking pumpkins: more hydrogen peroxide

It was Halloween yesterday and, unusually for the UK, it fell in school term time. As it turned out, I was teaching chemistry to a group of 12-13 year olds on that day, which was too good an opportunity to miss.

Time for the puking pumpkin!

A side note: there’s loads of great chemistry here, and the pumpkin isn’t essential – you could easily do this same experiment during a less pumpkin-prolific month with something else. Puking watermelon, anyone?

Carve a large mouth, draw the eyes and nose with marker pen.

First things first, prepare your pumpkin! Choose a large one – you need room to put a conical flask inside and put the pumpkin’s “lid” securely back in place.

Carve the mouth in any shape you like, but make it generous. Draw the eyes and nose (and any other decoration) in waterproof marker – unless you want your pumpkin to “puke” out of its nose and eyes as well!

Rest the pumpkin on something wipe-clean (it might leak from the bottom) and put a deep tray in front of it.

To make the “puke” you will need:

  • 35% hydrogen peroxide (corrosive)
  • a stock solution of KI, potassium iodide (low hazard)
  • washing up liquid

The puking pumpkin!

You can also add food colouring or dye, but be aware that the reaction can completely change or even destroy the colours you started with. If colour matters to you, test it first.

Method:

  1. Place about 50 ml (use more if it’s not so fresh) of the hydrogen peroxide into the conical flask, add a few drops of washing up liquid (and dye, if you’re using it).
  2. Add some KI solution and quickly put the pumpkin’s lid back in place.
  3. Enjoy the show!

Check out some video of all this here.

What’s happening? Hydrogen peroxide readily decomposes into oxygen and water, but at room temperature this reaction is slow. KI catalyses the reaction, i.e. speeds it up. (There are other catalysts you could also try if you want to experiment; potassium permanganate for example.) The washing up liquid traps the oxygen gas in foam to produce the “puke”.

The word and symbol equations are:

hydrogen peroxide → water + oxygen
2H2O2 → 2H2O + O2
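For the curious: how much oxygen can 50 ml of fresh 35% peroxide actually release? A back-of-the-envelope sketch; the density (roughly 1.13 g/ml for a 35% w/w solution) and the ~24 l/mol molar gas volume at room temperature are assumed round figures, not measurements:

```python
# 2 H2O2 -> 2 H2O + O2
volume_ml = 50
density_g_per_ml = 1.13     # assumed, for a ~35% w/w solution
mass_fraction = 0.35
M_H2O2 = 34.01              # molar mass of hydrogen peroxide, g/mol
MOLAR_VOLUME_L = 24.0       # approximate molar gas volume at room temp

mol_h2o2 = volume_ml * density_g_per_ml * mass_fraction / M_H2O2
mol_o2 = mol_h2o2 / 2       # 2:1 stoichiometry: half a mole of O2 per mole
litres_o2 = mol_o2 * MOLAR_VOLUME_L

print(f"~{litres_o2:.1f} litres of O2")  # roughly 7 litres
```

Which is why the foam just keeps coming: a few tens of millilitres of liquid can release litres of gas.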

There are several teaching points here:

  • Evidence for chemical change.
  • Compounds vs. elements.
  • Breaking the chemical bonds in a compound to form an element and another compound.
  • Balanced equations / conservation of mass.
  • The idea that when it comes to chemical processes, it’s not just whether a reaction happens that matters, but also how fast it happens…
  • … which of course leads to catalysis. A-level students can look at the relevant equations (see below).

Once the pumpkin has finished puking, demonstrate the test for oxygen gas.

Some health and safety points: the hydrogen peroxide is corrosive so avoid skin contact. Safety goggles are essential, gloves are a Good Idea(™). The reaction is exothermic and steam is produced. A heavy pumpkin lid will almost certainly stay in place but still, stand well back. 

But we’re not done, oh no! What you have at the end of this reaction is essentially a pumpkin full of oxygen gas. Time to crack out the splints and demonstrate/remind your students of the test for oxygen. It’s endlessly fun to put a glowing splint into the pumpkin’s mouth and watch it catch fire, and you’ll be able to do it several times.

And we’re still not done! Once the pumpkin has completely finished “puking”, open it up (carefully) and look inside. Check out that colour! Why is it bluish-black in there?

The inside of the pumpkin is blue-black: iodine is produced which complexes with starch.

It turns out that you also get some iodine produced, and there’s starch in pumpkins. It’s the classic, blue-black starch complex.

Finally, give the outside of the pumpkin a good wipe, take it home, carve out the eyes and nose and pop it outside for the trick or treaters – it’s completely safe to use.

Brace yourselves, more equations coming…

The KI catalyses the reaction because the iodide ions provide an alternative, lower-energy pathway for the decomposition reaction. The iodide reacts with the hydrogen peroxide to form hypoiodite ions (OI⁻). These react with more hydrogen peroxide to form water, oxygen and more iodide ions – so the iodide is regenerated, and hence is acting as a catalyst.

H2O2 + I⁻ → H2O + OI⁻
H2O2 + OI⁻ → H2O + O2 + I⁻

The iodine I mentioned comes about because some of the iodide is oxidised to iodine by the hydrogen peroxide. At this point we have both iodine and iodide ions – these combine to form triiodide, and this forms the familiar blue-black complex.

Phew. That’s enough tricky chemistry for one year. Enjoy your chocolate!

Trick or treat!




Hydrogen peroxide: another deadly alternative?

I’m sure most people have heard of hydrogen peroxide. It’s used as a disinfectant and, even if you’ve never used it for that, you probably at least know that it’s used to bleach hair. It’s where the phrase “peroxide blonde” comes from, after all. Hydrogen peroxide, and its formula, is so famous that there’s an old chemistry joke about it:

(I have no idea who to credit for the original drawing – if it’s you, leave me a message.)

To save you squinting at the text, it goes like this:
Two men walk into a bar. The first man says, “I’ll have some H2O.”
The second man says, “I’ll have some H2O, too.”
The barman brings the drinks. The second man dies horribly.

Now I think about it, it’s not a terribly funny joke.

Hydrogen peroxide has an extra oxygen atom in the middle.

Never mind. You get the idea. H2O2 (“H2O, too”) is the formula for hydrogen peroxide. Very similar to water’s formula, except with an extra oxygen atom in the middle. In fact, naturopaths – purveyors of alternative therapies – often refer to hydrogen peroxide as “water with extra oxygen”. But this is really misleading because, to torture a metaphor, that extra oxygen makes hydrogen peroxide the piranha to water’s goldfish.

Water, as we know, is pretty innocuous. You should try not to inhale it, obviously, or drink more than about six litres in one go, but otherwise, it’s pretty harmless. Hydrogen peroxide, on the other hand, not so much. The molecule breaks apart easily, releasing oxygen. That makes it a strong oxidising agent. It works as a disinfectant because it basically blasts cells to pieces. It bleaches hair because it breaks down pigments in the hair shaft. And, as medical students will tell you, it’s also really good at cleaning up blood stains – because it oxidises the iron in haemoglobin to Fe3+, which is a pale yellow colour*.

Dilute hydrogen peroxide is readily available.

In its dilute form, hydrogen peroxide is a mild antiseptic. Three percent and even slightly more concentrated solutions are still readily available in high-street pharmacies. However, even these very dilute solutions can cause skin and eye irritation, and prolonged skin contact is not recommended. The trouble is, while it does destroy microbes, it also destroys healthy cells. There’s been a move away from using hydrogen peroxide for this reason, although it is still a popular “home” remedy.

More concentrated** solutions are potentially very dangerous, causing severe skin burns. Hydrogen peroxide is also well-known for its tendency to react violently with other chemicals, meaning that it must be stored, and handled, very carefully.

All of which makes the idea of injecting it into someone’s veins particularly horrific.

But this is exactly what some naturopaths are recommending, and even doing. The idea seems to have arisen because hydrogen peroxide is known to damage cancer cells. But so will a lot of other dangerous substances – it doesn’t mean it’s a good idea to inject them. Hydrogen peroxide is produced by certain immune cells in the body, but only in a very controlled and contained way. This is definitely a case where more isn’t necessarily better.

The use of intravenous hydrogen peroxide appears to have begun in America, but it may be spreading to the UK. The website yestolife.org.uk, which claims to empower people with cancer to “make informed decisions”, states “The most common form of hydrogen peroxide therapy used by doctors calls for small amounts of 30% reagent grade hydrogen peroxide added to purified water and administered as an intravenous drip.”

30% hydrogen peroxide is really hazardous stuff. It’s terrifying that this is being recommended to vulnerable patients.

Other sites recommend inhaling or swallowing hydrogen peroxide solutions, both of which are also potentially extremely dangerous.


In 2004 a woman called Katherine Bibeau died after receiving intravenous hydrogen peroxide treatment from James Shortt, a man from South Carolina who called himself a “longevity physician”. According to the autopsy report she died from systemic shock and DIC – disseminated intravascular coagulation, the formation of blood clots in blood vessels throughout the body. When her body arrived at the morgue, she was covered in purple-black bruises.

Do I need to state the obvious? If anyone suggests injecting this stuff, run. Run very fast, in the other direction. Likewise if they suggest drinking it. It’s a really stupid idea, one that could quite literally kill you.


* As anyone who’s ever studied chemistry anywhere in my vicinity will tell you, “iron three is yellow, like wee.”


** The concentration of hydrogen peroxide is usually described in one of two ways: percentage and “vol”. Percentage works as you might expect, but vol is a little different. It came about for practical, historical reasons. As Prof. Poliakoff comments in this video, hydrogen peroxide is prone to going “flat” – leave it in the bottle for long enough and it gradually decomposes until what you actually have is a bottle of ordinary water. Particularly in the days before refrigeration (keeping it cold slows down the decomposition) a bottle might be labelled 20%, but actually contain considerably less hydrogen peroxide.

What to do? The answer was quite simple: take, say, 1 ml of hydrogen peroxide, add something which causes it to decompose really, really fast (lots of things will do this: potassium permanganate, potassium iodide, yeast, even liver) and measure the volume of oxygen given off. If your 1 ml of hydrogen peroxide produces 10 ml of oxygen, it’s 10 vol. If it produces 20, it’s 20 vol. And so on. Simple. 3% hydrogen peroxide, for the record, is about 10 vol***. Do not mix up these numbers.


*** Naturally, there are mole calculations to go with this. Of course there are. For A-level Chemists, here’s the maths (everyone else can tune out; I’m adding this little footnote because I found this information strangely hard to find):

Hydrogen peroxide decomposes as shown in this equation:
2H2O2 → 2H2O + O2

Let’s imagine we decompose 1 ml of hydrogen peroxide and obtain 10 mls of oxygen.

Assuming one mole of gas occupies 24 dm3 (litres), or 24000 mls, at room temperature and pressure, 10 mls of oxygen is 10 / 24000 = 0.0004167 moles. But, according to the equation, we need two molecules of hydrogen peroxide to make one molecule of oxygen, so we need to multiply this number by two, giving us 0.0008333 moles of hydrogen peroxide.

To get the concentration of the hydrogen peroxide in the more familiar (to chemists, anyway) mol dm-3, just divide that number of moles by the volume of hydrogen peroxide. In other words:

0.0008333 mols / 0.001 dm3 = 0.833 mol dm-3

If you really want to convert this into a percentage by mass (you can see why people stick with “vol” now, right?), then:

0.833 mol (in one litre of solution) x 34 g mol-1 (the molecular mass of H2O2)
= 28.32 g (in roughly 1000 g of solution, since dilute solutions have a density close to 1 g per ml)

Finally, (28.32 / 1000) x 100 = 2.8% or, rounding up, 3%

In summary (phew):
10 vol hydrogen peroxide = 0.83 mol dm-3 = 3%
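For anyone who'd rather let a computer do the arithmetic, here's a minimal sketch of the same conversion in Python. The function names and constants are my own; the 24000 ml molar gas volume and the molar mass of 34 g mol-1 come from the working above.

```python
# Convert hydrogen peroxide "vol" strength to mol dm-3 and approximate % by
# mass, following the footnote's working. Assumes room temperature and
# pressure (one mole of gas occupies 24 dm3) and a solution density of
# about 1 g per ml.

MOLAR_GAS_VOLUME_ML = 24000   # ml occupied by one mole of gas at RTP
H2O2_MOLAR_MASS = 34.0        # g per mole of H2O2

def vol_to_molarity(vol):
    """'vol' = ml of O2 released per ml of H2O2 solution."""
    moles_o2 = vol / MOLAR_GAS_VOLUME_ML      # moles of O2 from 1 ml of solution
    moles_h2o2 = 2 * moles_o2                 # 2H2O2 -> 2H2O + O2
    return moles_h2o2 / 0.001                 # 1 ml of solution = 0.001 dm3

def vol_to_percent(vol):
    grams_per_litre = vol_to_molarity(vol) * H2O2_MOLAR_MASS
    return grams_per_litre / 1000 * 100       # % by mass, density ~1 g per ml

print(round(vol_to_molarity(10), 2))  # ~0.83 mol dm-3
print(round(vol_to_percent(10), 1))   # ~2.8 %
```

Running it for 10 vol reproduces the figures in the summary: about 0.83 mol dm-3, or roughly 3%.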


Like the Chronicle Flask’s Facebook page for regular updates, or follow @chronicleflask on Twitter. All content is © Kat Day 2017. You may share or link to anything here, but you must reference this site if you do.

