Long time no post; what’s new? We finally finished a healthy-dessert cookbook without sugar, dairy or wheat. Writing the book is fun, but the other 90% (all the nitty-gritty of releasing a book) takes a bit longer. We’re sharing some stories from behind the scenes at the JJCookbooks blog.
I’ve been asked how to define/find good food. We must navigate a minefield of cheap and dangerous fakes.
Three simple guidelines:
- If it is a single-ingredient, unprocessed item (whole melon, not already sliced) – go for it!
- If it claims health benefits/vitamins on the packaging – be skeptical!
- If grandma wouldn’t recognize the ingredients (long chemical names) – avoid!
Some examples:
- Anything from the fruit/veggie section is generally good (but items on EWG’s “Dirty Dozen” list are full of pesticides and should be avoided unless organic).
- Bread made from “healthy whole grains fortified with vitamins” sounds good, but synthetic vitamins are often counterproductive and downright harmful. Berries and green vegetables don’t need a marketing department, because everyone already agrees they are good!
- Dried mango is fine (though high in sugar), but “sodium [meta]bisulfite” isn’t. It’s important to avoid preservatives, artificial coloring and flavor ‘enhancers’.
- Fruit: organic! berries/apples/oranges; kiwi/passionfruit/banana; occasional pineapple/watermelon.
- Nuts/seeds: walnut/pecan/cashew/almond; sunflower seed/pistachio/pine nut (all need soaking)
- Beans: Lentil/chickpea (all need soaking)
- Non-starchy vegetables: almost anything! Celery/capsicum should be organic. Try to maximize colors!
- Starch: sweet potatoes should replace potatoes; cooking bananas (plantains) also OK.
- Meat: antibiotic- and hormone-free, e.g. from The Barbie Girls. We cannot do beef; lamb is generally better raised than chicken/pork.
- Sugar: avoid, replace with honey/maple syrup.
- Grains: avoid gluten (wheat and others); some oats OK.
- Rice is occasionally OK; brown rice (which keeps the bran) has more nutrients but also more arsenic; white rice is pretty much ‘empty calories’ without benefit.
- Meat: farmed salmon and cheap chicken/pork very bad. Better to eat 5x more expensive meat 1/5th as often.
- Dairy: quite bad _as currently produced_. Goat milk is better, cheese+yogurt sometimes OK.
- Oil: cheapo vegetable oil very bad, but used in almost all commercial food.
In short, 99.9% of ready-made food violates these guidelines, hence we make everything from scratch. Vegetables and sweet potatoes always have a place, though!
Google Scholar reports 303,000 hits for ‘GPU’! Are we really seeing enough benefits to justify the precious R&D time sacrificed on that altar?
Taking a step back, the mainstream architectures include FPGAs (programmable hardware), GPUs (simplified and specialized co-processors) and general-purpose CPUs with SIMD. It is important to choose wisely because switching between them is expensive.
FPGAs are probably the fastest option for integer operations: algorithms can be cast into specially designed hardware that provides exactly the operations required. Against the objection that the FPGA fabric is less efficient than truly hard-wired circuits or signal processors, we note that some FPGAs actually do contain fixed-function signal-processing hardware or even full CPUs. However, despite the existence of C-to-Hardware-Design-Language translators/compilers, development is still slow: implementing and debugging a new feature involves a comparatively lengthy synthesis process, and the tools remain sub-optimal. For R&D work, flexibility and rapid prototyping are more important; casting algorithms into hardware diverts development resources, whereas fast develop/debug/test cycles and automated self-tests are more helpful. As a consequence, an initial software version should be developed anyway.
The more difficult question is whether to grow that into a GPU solution, or remain on the CPU. The boundaries are somewhat blurred because the two architectures continue to converge: GPUs are nearly full-featured processors and have begun integrating caches, whereas CPUs are moving closer to GPU-scale parallelism (SIMD, multiple cores, hyper-threading).
However, one point in favor of current CPUs is that they come with far greater amounts of memory. Some algorithms require multiple gigabytes, which exceeds the capacity of most GPUs (though what memory they do have is very fast). Although AMD and Nvidia have recently introduced high-memory GPUs, these are niche offerings; deployment will not come cheap (about $3000) and is likely to involve vendor lock-in.
These well-known points aside, the more interesting topic concerns the programming model of GPUs vs CPUs. Although the underlying hardware is increasingly similar, GPUs expose a simplified “SIMT” model that hides the ‘ugly’ details of SIMD. Programs are written to process a single element at a time and the compiler and hardware magically coalesce these into vector operations. By contrast, CPUs require programmers to deal with and be aware of the vector size. Thus, the argument goes, GPUs are “easier” to program.
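To make the contrast concrete, here is a minimal sketch of the two programming models in plain C++ (the vector operation is simulated with an inner loop, and `kLanes` is an illustrative stand-in for the hardware SIMD width, not a real intrinsic):

```cpp
#include <cassert>
#include <cstddef>

// SIMT style: write scalar code for ONE element; the GPU compiler and
// hardware fan this out across many threads behind the scenes.
inline float saxpy_one(float a, float x, float y) { return a * x + y; }

// Explicit-SIMD style: the programmer chooses the vector width and
// strides the loop by it. kLanes models the hardware width (4 for SSE
// floats, 8 for AVX); here it is a plain constant so the sketch
// compiles anywhere.
constexpr std::size_t kLanes = 4;

void saxpy_simd(float a, const float* x, const float* y,
                float* out, std::size_t n) {
  std::size_t i = 0;
  for (; i + kLanes <= n; i += kLanes) {
    // In real code this inner loop would be a single vector instruction.
    for (std::size_t l = 0; l < kLanes; ++l)
      out[i + l] = a * x[i + l] + y[i + l];
  }
  for (; i < n; ++i)  // scalar remainder loop for the leftover elements
    out[i] = a * x[i] + y[i];
}
```

The SIMT version is undeniably shorter to write, which is the basis of the “easier” argument examined below.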
The Emperor’s New Clothes: Performance portability
At conferences since 2008, I have seen speedups of 10-100x proudly reported. However, the amount of ‘tuning’ required to adapt code to a particular GPU is quite astonishing to this veteran of assembly-language programming. Minutiae such as memory-bank organization, shared-multiprocessor occupancy, register vs. shared memory, and work-group size are enthusiastically discussed, even though they change every six months.
“Performance portability” (the hope that a program will run similarly fast on a different GPU) remains an open research problem, with some disappointing results so far:
“portable performance of three OpenCL programs is poor, generally achieving a low percentage of peak performance (7.5%–40% of peak GFLOPS and 1.4%–40.8% of peak bandwidth)” [Improving Performance Portability in OpenCL Programs]
There is some hope that auto-tuning (automatically adapting certain parameters to the GPU at runtime and seeing what works best) can help. In a way, this is already an admission of defeat because the entire approach is based on tweaking values and merely observing what happens. However, for non-toy problems it becomes difficult to expose parameters beyond trivial ones such as thread count:
“vulnerability of auto-tuning: as optimizations are architecture-specific, they may be easily overseen or considered “not relevant” when developing on different architectures. As such, performance of a kernel must be verified on all potential target architectures. We conclude that, while being technically possible, obtaining performance portability remains time-consuming.” [An Experimental Study on Performance Portability of OpenCL Kernels]
Chained to the treadmill
This sounds quite unproductive. Rather than develop better-quality algorithms, we have chained ourselves to the treadmill of tweaking the code for every new generation of GPU. By contrast, SIMD code written for the SSSE3 instruction set of 2006 remains just as valid eight years later. Some instructions are now slightly faster, but I am not aware of any regressions. When 64-bit reached the mainstream, a recompile gave access to more registers. When AVX was introduced, an optional recompile gave access to its more efficient instruction encodings. Even the much-maligned dependency on SIMD width often only requires changing a constant (integrated into vector class wrappers around the SIMD intrinsics).
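The vector-class wrappers mentioned above can be sketched as follows. This is an illustrative toy, not a real wrapper: the lanes are simulated with a plain array, whereas an actual wrapper would hold an `__m128`/`__m256` and call the corresponding intrinsics inside the operators; retargeting then amounts to changing `kWidth` and swapping those intrinsics.

```cpp
#include <cassert>
#include <cstddef>

// Width of the wrapped vector: 4 for SSE floats; change to 8 for AVX.
// User code written against Vec does not otherwise change.
constexpr std::size_t kWidth = 4;

struct Vec {
  float lane[kWidth];

  friend Vec operator+(Vec a, Vec b) {
    Vec r;  // a real wrapper would emit a single _mm_add_ps here
    for (std::size_t i = 0; i < kWidth; ++i) r.lane[i] = a.lane[i] + b.lane[i];
    return r;
  }
  friend Vec operator*(Vec a, Vec b) {
    Vec r;
    for (std::size_t i = 0; i < kWidth; ++i) r.lane[i] = a.lane[i] * b.lane[i];
    return r;
  }
};
```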
By contrast, CUDA has seen 16 releases since 2007, with major changes to the underlying hardware, programming model and performance characteristics. With so many toolchain updates and tweaks for new hardware, it would seem that SIMD is actually less troublesome in practice.
More subtle flaws
Moreover, the major ‘convenience’ of SIMT – hiding the underlying hardware’s SIMD width – is actually a hindrance. Although cross-lane operations are often questionable (inner products should generally not be computed horizontally within a SIMD register), they should not be banished from the programming model. For example, a novel SIMD entropy coder reaches 2 GB/s throughput (faster than GPUs) by packing registers horizontally.
Another lesser-known limitation of GPU hardware stems from its graphics pedigree. Because most graphics values were 32-bit floats, the hardware does not provide more lanes/arithmetic units for smaller 8- or 16-bit pixels. Even the memory banks are tailored specifically to 32-bit values; 8-bit writes encounter more bank conflicts.
Perhaps the biggest flaw with GPGPU is that it is an all-or-nothing proposition. The relatively slow (far slower than CPU memory bandwidth) transfers between host and device over PCIe effectively require ALL of the processing to be done on the GPU, even if an algorithm is not suitable for it (perhaps due to heavy branching or memory use).
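A back-of-the-envelope estimate makes the PCIe point concrete. The numbers below are illustrative assumptions (a 1-second CPU baseline, a 20x-faster kernel, 8 GB shipped over a ~8 GB/s PCIe link), not measurements:

```cpp
#include <cassert>

// Effective speedup once host<->device transfer time is included:
// a GPU kernel that is 20x faster in isolation can end up SLOWER than
// the CPU if the data must first cross PCIe.
double effective_speedup(double cpu_seconds, double gpu_seconds,
                         double transfer_bytes, double pcie_bytes_per_second) {
  const double transfer_seconds = transfer_bytes / pcie_bytes_per_second;
  return cpu_seconds / (gpu_seconds + transfer_seconds);
}
```

With the assumed numbers, shipping 8 GB at 8 GB/s adds a full second of transfer, so the “20x” kernel yields an overall speedup below 1x; this is why GPGPU tends toward moving ALL processing onto the device.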
With a background in hardware architectures and experience in OpenCL as well as SIMD, it is difficult to understand the continued enthusiasm for GPUs. There is always a trade-off between brevity/elegance and performance. Tuning kernels, or even devising an auto-tuning approach, has real costs in terms of development time. In my experience, the cost of developing, testing, debugging and maintaining a large set of GPU kernels is shockingly high, especially if separate CUDA and OpenCL paths are needed (due to persistently lower OpenCL performance on Nvidia hardware). It is surprising that GPUs are still viewed as a silver bullet, rather than as a tool with strengths and weaknesses. When considering the sum of all investments in development time and hardware cost, the benefits of GPUs begin to fade.
By contrast, it is faster and easier to design and develop a system that begins with software prototypes, and incrementally speeds up time-critical portions via SIMD and task-based parallelization (ideally by merely adding high-level source code annotations such as OpenMP or Cilk). This solution remains flexible and does not require an all-or-nothing investment. In my experience, 10-100x speedups are usually also achievable in software, with much less volatility across hardware generations.
For real-time (video editing) systems that really must have more bandwidth than even a dual-CPU system can provide, perhaps GPUs are still the better tradeoff. There is anecdotal evidence that the costs of FPGA development are even higher (though this may change as FPGA tools improve). However, I really hope scientists and engineers can escape the temptation of quick publications from adapting yesterday’s techniques to today’s hot new GPU. There are some hard questions that need asking:
- Is the real-world application already viable without ‘flashy’ speedups?
- Is the approach limited to toy problem sizes due to GPU memory?
- Will it be invalidated by the next hardware generation anyway?
- Is the GPU already over-taxed by other parts of the application?
- Will sending data over PCIe erode the speed gains?
- Does the total development and hardware cost eclipse any benefit to the bottom line?
If so, perhaps other worthy avenues of research can be explored.
Christmas season means lots of baking! Time to share two quick lessons I have learned over the past few months.
Muffins speed up baking time considerably – 15-20 minutes versus 60-90 minutes for a cake. They solidify faster than cakes because the volume/surface ratio is lower (more dough is closer to an edge). The results can still be nice and moist.
Potential problem: how to remove the muffins and wash the molds? I’ve found food-grade silicone molds to be very effective – we can push the bottom and the muffins pop right out. They’re also super-easy to clean by pouring on some boiling water.
A broader question: is it safe? The rated temperature range is -40 to 200 degC; I usually bake at 160-170 degC. There is some concern about volatile organic compounds (organosiloxanes) migrating from the molds into food. This can be minimized by ‘tempering’ or ‘curing’ the molds before use: simply put them (empty) into a 200 degC circulating oven for 2-4 hours.
Note that the molds also absorb some fats into their internal structure (cleaning doesn’t help), and these will eventually go rancid. Better to use stable saturated fats (coconut oil, ghee) that don’t oxidize so quickly. When the molds start to smell, it’s time to replace them.
Grain-free? Works fine!
Skipping wheat and grains makes a noticeable difference for me, but that need not hold back the baking. As a general template, I am having considerable success with nut flour (almond/cashew), ghee/coconut oil, and a main filler: banana/carrots/zucchini/beets/corn. Interestingly, the vegetables don’t taste like vegetables. Beets make for excellent red velvet chocolate muffins, with the hidden bonus of vegetable goodness (phytonutrients).
Are you interested in software code signing? If so, here’s the twisting tale of my recent interaction with Symantec, in the hope that it is useful to someone.
Extending a valid certificate with the same credit card and a still-valid passport? Someone might be impersonating me, so a notary is required (again) to confirm my identity. Somewhat understandable.
The notarization (seal/stamp) is to be scanned and sent by email (upon hearing this, the notary was shocked – anyone could forge it). The whole process is thus a farce known as “security theater”. So be it.
The notary’s commission expired the same day I came. Bad luck, but I made sure to send off the document that same day.
Symantec objects: when using a German passport for identification, the notary’s address must be in Germany. The case of an expat living abroad is only allowed if they use secondary ID documents issued by the current country of residence (though their instructions make no mention of this).
Symantec reconsiders: they can accept a Singaporean notary if I cancel the order and create a new one.
Symantec finds fault with the now-expired notary commission. Although it proved my existence on that day, the expired stamp is now considered worthless. Another notarization is needed. As a compromise, they offer to absorb the cost of the second notarization into the purchase price. This seems fair!
I re-notarize the document. Some days later, Symantec sends word that the order is “complicated” and their senior team is investigating.
A few days later, I am contacted by a Symantec employee who wishes to confirm my place of residence and whether all the data are correct.
Soon after, the certificate is issued and works!
However, the agreement has changed [Darth Vader style]. There will not be a further rebate because the discounted price of $173 is already much lower than their usual price.
If such Kafkaesque bureaucracy and flip-flopping is a regular occurrence, I can actually understand why they might want to charge > $400 per year to issue a certificate. After all, this odyssey involved no fewer than five Symantec employees.
However, Comodo seems to be able to do it for much less (around $80 per year). Under German law, there is a case to be made for Symantec’s full price being illegal price gouging, because its cost is “noticeably disproportionate to the services rendered”.
That very interesting point aside, perhaps there are more hassle-free alternatives (this undertaking cost several hours). If you’ve dealt with any other vendors for kernel-mode code signing certificates, I’d love to hear your story via email. If dealing with Symantec in future, beware of the country-of-residence issue. Hope that helps!
After casting a wide net, I found many C++ UIs, but they are generally bulky and often want to grab the main loop (big no-no).
- Large: Qt, WxWidgets, cinder, CEGUI, ultimatepp, clutter, IUP, JUCE.
- Medium: Agar, Fox, FLTK, GWEN (seemed the best of these).
- Small but not terribly attractive: GLUI2, GLV, Turska, STB imgui, simgui.
Capturing the main loop is unacceptable. In the case of Qt, two QGraphicsScenes will each want to vsync, which halves the framerate. Instead, I have a very nice loop that never(!) blocks the main thread, wakes up immediately after a Windows message is received, and waits for vsync with <1% CPU use. (This is hidden behind the SDL interface for portability.)
The approach of AntTweakBar – providing controls that modify YOUR variables, rather than caching everything inside some deep class hierarchy – looked nice. Unfortunately the code is dense, surprisingly large, and has had some scary bugs.
Further research pointed towards IMGUIs – conceptually very simple, easy to use, straightforward to integrate. Great! Most of the ones I found stumble in terms of text rendering (important for UIs) and/or were not portable or easy to integrate into an SDL/OpenGL app. The best one was Nvidia’s now discontinued imgui from their SDK.
I started with that and completely rewrote it with proper text rendering, much simpler and cleaner code, and support for vertex arrays so it will hopefully also run on GL ES 2.
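The core IMGUI idea is simple enough to sketch in a few lines. This is a hypothetical toy API, not the actual headers linked below: widgets are plain function calls re-evaluated every frame against the current input state, with no retained widget tree.

```cpp
#include <cassert>

// Current frame's input snapshot (the real library gets this from SDL).
struct Input {
  int mouse_x, mouse_y;
  bool mouse_down;
};

struct Rect {
  int x, y, w, h;
  bool contains(int px, int py) const {
    return px >= x && px < x + w && py >= y && py < y + h;
  }
};

// Immediate-mode button: called once per frame; draws itself (omitted)
// and returns true when pressed. A full IMGUI also tracks hot/active
// widget ids so that press-drag-release behaves correctly.
bool button(const Input& in, const Rect& r /*, label, draw list, ... */) {
  return in.mouse_down && r.contains(in.mouse_x, in.mouse_y);
}
```

Caller code is then just `if (button(input, rect)) do_thing();` inside the frame loop – no callbacks, no observer registration, no state to keep in sync.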
Here’s a screenshot showing various controls (checkbox, slider, list box, label, combo box, panel, radio button, pushbutton, line edit):
The best part: it’s all contained within a 340 KB executable with no external dependencies (apart from the usual kernel/user/gdi/opengl). The whole thing is only ~9 KLOC and builds in 2 seconds. This is the kind of simplicity I like – such a joy to develop 😀
Let’s share in the fun. The headers are available here: ui [6 KB]
I’d love to hear your thoughts on the interface, whether anything important is missing, and welcome any discussion here or via email.
I am grateful that mosquitoes are not much of a problem in downtown Singapore – perhaps they cannot fly high enough. Their vertical flight range is reported to exceed 21 stories above ground [http://www.nea.gov.sg/cms/sei/ehi1slides.pdf], but I have not seen any at 34 or above.
When traveling, though, it’s a different story! Despite repellent, we end up with all these itchy bumps. Perhaps we could view them as opportunities for mindfulness, but it would be much easier if they would just go away. Is that actually possible?
Long ago, I learned (through a slip of the soldering iron) that heat seems to make the bites disappear as if by magic. To understand why, let’s take a step back.
Blood from injuries generally clots, so the mosquito’s ‘needle’ would soon become clogged. To prevent that, they inject various proteins with several nasty effects:
– suppressing T-, B-cell and cytokines (immune response)
– interfering with platelets (blood clotting)
– even increasing mortality from West Nile Virus
Yikes! The itching aside, it sure seems useful to undo these effects. We know from basic cooking that (e.g. fish) protein denatures at fairly low temperatures.
Indeed, someone recently asked on reddit: “Can putting a hot spoon on a mesquito bite denature the protien to lessen the allergic reaction” [sic]? The replies there seem unnecessarily negative:
“immunoglobulins .. start to denature at around 60C”
Perhaps the ill-posed question led the responder astray, but we are not interested in denaturing the IgG. It suffices to attack the mosquito saliva itself, not our desirable immune reaction to it.
“it would take between less than one to 25 hours (depending on the temperature) to fully denature the antibodies”
In addition to focusing on the wrong protein, this view is overly pessimistic because we do not need to fully and irreversibly denature the protein. Perhaps it is enough to unfold them, which happens more quickly and at lower temperatures.
“alboserpin (the anticoagulant in mosquito saliva that our bodies react to).. are only sensitive to denaturation at temperatures above 60C”
Maybe so, but it actually comprises only 1% of the proteins in mosquito saliva. [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3151045]
“Its remarkable the disinformation regarding this out there; if you think critically about heat-mediated mosquito protein denaturation being the mechanism for itch reduction, it just doesn’t make any sense.”
Sounds like ill-informed and closed-minded dismissal of a valid observation. It is more understandable if coming from a school that claims the only salvation lies in anti-histamines and other drugs. “It is hard to fill a cup which is already full.”
Let’s turn this around and look from the perspective of researchers who want to store mosquito saliva. Potency roughly halves when storing at 21C vs 4C. [http://www.parasitesandvectors.com/content/4/1/33] Sounds potentially useful!
Also, West Nile Virus “is thermolabile and .. inactivated rapidly by heat”. “At 28C the titer of the virus decreased by a factor of 10^3”. [http://www.karger.com/Article/FullText/353698] Very doable. These low temperature thresholds indicate we can measurably reduce mosquito effects at temperatures far short of really hurting our skin. Bring on the heat!
Apparently there is an FDA-approved gadget for the purpose: [http://gizmodo.com/5935350/therapik-bug-bite-relieving-gadget-review-we-cant-believe-this-actually-works] I’ve never tried it; the underside of a mug containing boiled water works well enough. Around 30 seconds seems to do the trick. I’d estimate that temperatures around 45-50C are required.
Please don’t go and get third-degree burns, but some pain is involved.
For completeness, let’s mention the theory that heat merely (temporarily) overloads the nerve that signals itching. That is still possible, but I offer anecdotal evidence that heat-treated mosquito bumps entirely disappear within a few hours, which is not the usual experience.
I hope this helps!
Friends have asked me about genetic engineering. After some research, I feel a mix of shock, horror, and rage. Vast hubris, primitive technology, blind faith, incalculable risks, deliberate lies, broken promises, billions of losses, thousands of deaths.
Why does all this happen? Simple: greed. Pesticide pushers gain market share if crops are engineered to mostly survive that particular poison. Seed providers (often the same company) rake in the cash when seeds self-destruct and must be purchased anew every year.
Meanwhile, we are taken for fools, exposed to known and unknown health risks, and sanctimoniously told that this is for the benefit of the hungry poor, who suffer the most from it.
I’d like to expand this into a larger article, but for now we will understand what’s going on, expose the lies, and see how to avoid the worst of the trouble.
Anything worth having requires effort and awareness. There is simply no alternative; luck is not a strategy, and sticking our heads in the sand won’t solve any problems. Onwards!
They sow the wind
In a nutshell, genetic engineering seeks to produce certain proteins, which are assembled according to DNA blueprints. The desired DNA gene is mass-produced by placing it inside bacterial DNA. Target plant cells are also grown in an artificial medium. The new genes are transferred by letting different bacteria infect the plant cells and insert their DNA, or by attaching DNA to metal particles and hoping they are spliced into plant DNA.
Because this process is highly unpredictable, an antidote is inserted alongside the other DNA payload and the cells are exposed to the corresponding poison. The few that survive probably got most of the intended DNA. These cells are allowed to grow into plants, though many do not survive.
Afterwards, basic weighings may be performed, alongside ‘studies’ in which a few animals are fed something similar for several days. Actually, those are optional, because the US regulatory agency allows seed companies to self-certify their products as safe for any use, which of course they usually do. What could possibly go wrong?
.. and reap the storm
As it turns out, lots. The proteins being produced are typically intended to kill insects, or resist pesticides dumped on the plant. Who can say they don’t also harm us? A few half-hearted and laughably small animal feeding experiments and trials were undertaken in the hope of finding nothing, and even these were enough to uncover a long list of problems: bleeding stomachs in rats, intestinal damage in mice, allergies in humans, deaths of cows.
Roots of the disaster
How can this happen? Cry toxins are a common class of target proteins. Their name derives from their crystal shape; they kill insects by quite literally poking holes in their guts. When companies bother to investigate whether we are similarly damaged, they claim the toxins are made harmless during digestion. Unfortunately, we have 1/1000th the amount of digestive enzymes, and nowhere near the acidity required to even partially break them down.
Those are just the known and intended proteins. Recall the gene insertion process is completely unpredictable: the payload may be inserted anywhere within the plant DNA, sometimes reversed, with parts of the participating bacterial DNA thrown in. The original DNA is also damaged by insertion and mutations from the growth medium. This means all sorts of unintended proteins can be, and are, produced. Afraid of finding something they don’t want to know, companies usually don’t sequence the genome to see what came out, and actively refuse to give researchers even tiny samples.
Even if the DNA arrives mostly intact, it might land in an unused region. Genes are forcibly switched on by so-called promoters from a virus. In addition to uncontrolled and permanent mass-production of the target toxin, this can also activate other dormant genes (including ancient viruses slumbering within our DNA). At the opposite end, a terminator is intended to halt copying from the blueprint, but it often does not work, which means all kinds of new and unknown proteins are produced.
This does not hinder the headless scramble to market, because the few tests are usually run with the protein that was _intended_ to be produced. Unfortunately, the blueprints came from bacteria, but are being built by plants. Proteins produced there can be folded differently or come with sugar molecules attached, which changes the way they interact. Those tests therefore cannot guarantee the safety of the actual genetically modified organisms.
To complete the picture of completely unpredictable chaos, the damaged and unstable DNA may mutate further, or be acted upon differently when growing conditions change. In short, we have absolutely no idea what is going on, beyond the near certainty that the resulting plant differs in terms of nutrients and toxins. Do you feel lucky today?
How could this happen? Surely the government has an interest in ensuring public health? Apparently not enough. Perhaps they fear driving away the big companies and their taxable profits. Maybe they actually believed the demonstrably false tales of higher yields and lower pesticide use. There is another simple explanation: the revolving door between business and the government. A former GMO developer switched sides and became responsible for regulating her own product; conversely, regulators might remain silent to avoid endangering lucrative subsequent jobs within the industry.
Such corruption seems more common in the US, but the German government also allowed the reapproval of MON 810 corn despite public protests. These have had some success, prompting Monsanto to quietly pause their European marketing efforts, except in Spain, Portugal and Romania.
However, storm clouds gather. The TTIP being negotiated behind closed doors may allow the Americans to force their untested and dangerous GMOs on the EU. The US position is: “What’s good enough for American families to eat is also good for Europeans to eat”. That would be a disaster; although the EU delegation seems unprepared and inadequate, hopefully at least the German officials will remember the oath they swore “to avert harm to their people”.
Fixing it ourselves
Although governments have been suborned and co-opted, we are not defenseless. Ultimately, our well-being is our very personal responsibility; there are always options. The first thing to do is seek out organic food. Most restaurants and processed-food producers do not use organic ingredients, so it pays to cook at home (a delightful social activity).
In the US, the GMO cancer has spread so far and wide that even organic crops are contaminated with GMO through cross-pollination. The sad requirement there is to avoid soy, corn, canola, cotton, papaya and be careful with zucchini and squash. Interestingly, the top two GMO crops, soy and corn, are also among the top seven causes of food allergies – perhaps not a coincidence.
Canola is used to make vegetable oil, another important reason to reject any that don’t explicitly mention their source. Thankfully, EU organic rules forbid feeding animals GMO feed, but elsewhere they are given the cheapest and worst, and some toxins pass into milk and meat. We should seek out grass-fed beef (which has other health benefits), and eat less of it.
If this appears expensive or troublesome, how much would it cost to mitigate a food allergy, repair intestinal damage, or undo changes to the DNA of our gut flora? The cost and effort of choosing organic ingredients is far more manageable, and cooking with fresh (or fresh-frozen) pesticide-free ingredients is a delight. The results may be a pleasant surprise – who knew vegetables could taste so good?
People often ask why I don’t eat certain things. I jokingly mention “religion” because that’s easier than throwing around 6-syllable words. Here is a brief explanation, without the usual references (as befitting a religion).
Why does this matter? We are, quite literally, what we eat. Our cells are built from – and run on – components derived from the food we eat. When eating high-quality food, I feel happier, more alert and fit. That requires avoiding or at least reducing four problematic foods, in decreasing order of importance:
Gluten (a protein in wheat/barley/rye)
- Intestinal permeability (via zonulin).
- Inflammation and brain damage (from immune over-reaction).
- Higher glycemic index than sugar (due to highly branched amylopectin).
- Addictive (acts as an opiate).
- Toxic sodium azide (used to induce gene mutations).
Although grains lack nutritional value, I love baked goods (cakes and muffins) and use almond flour, buckwheat and coconut flour. Other grains are acceptable but harder to find: amaranth, quinoa, sorghum and teff, or arrowroot/tapioca/corn (but beware their high glycemic index). Note that whole grains do not solve the above problems. A mill is useful for grinding flour from gluten-free (and non-oily) grains.
Vegetable oil (sunflower, corn, soy, canola, peanut, cottonseed, grapeseed, margarine)
- Chemical solvents (petroleum/hexane).
- Unstable polyunsaturated fatty acids.
- Bleaching, deodorizing to mask rancidity.
- BHA and BHT preservatives (carcinogens).
- Inflammation and cell mutations.
- Strong link to cancer and heart disease.
- Omega-6 imbalance (interferes with DHA conversion).
- Pesticide residues.
- Genetic modifications.
Cheap vegetable oils are in just about everything we can buy in stores or eat in most restaurants. This alone is a powerful reason to do our own cooking; avoiding these oils is a huge win. Safe and beneficial alternatives: ghee or non-UHT cream from grass-fed cows, coconut oil, extra-virgin olive oil, avocado oil, palm oil, and non-hydrogenated lard rendered from grass-fed pig fat. Note that ghee is ‘clarified’ butter without the often-problematic milk proteins, and can easily be made at home.
Simple carbohydrates (sugar, juice, bread/rice/pasta)
- Blood sugar dysregulation (insulin and glucagon roller coaster).
- Adrenal burnout and pancreas exhaustion (diabetes).
- Painful gout from uric acid (by-product of fructose breakdown).
- Loss of tooth/bone calcium (to neutralize acidity).
- Protein damage via glycation.
- Imbalance of gut bacteria, possibility of candida overgrowth.
- Decreased dopamine sensitivity (indicates addiction).
- Strong link to Alzheimer’s and atherosclerosis (heart disease).
It is best to reduce our hunger for sweets, but there are somewhat better alternatives. Date sugar, molasses, honey and maple syrup provide at least some nutrients. Stevia, erythritol and xylitol are the only acceptable non-sugar sweeteners. Note that citrus fruits and berries are fine, because their fiber content blunts the sugar rush. Juicing obscures the quantities and breaks apart the fiber. We should also avoid high-fructose fruits: mango, grape, watermelon, pineapple, banana, and apple.
Soy (most soy sauce, tofu, soy milk, edamame)
- Phytoestrogens (equivalent to multiple birth control pills per day).
- Phytic acid (reduces bioavailability of minerals).
- Goitrogens (suppress thyroid function by interfering with iodine metabolism).
- Lower quantity and quality of sperm.
- Decreased testosterone and fertility.
- Increased calcium and Vitamin B12 and D requirements.
- Link to Alzheimer’s, dementia, ADHD and breast/prostate cancer.
Soy was traditionally only eaten after fermentation, which reduces the above problems. Nowadays it is a waste product of soy oil production and a cheap filler found in almost all packaged and fast foods; yet another reason to avoid them. “Naturally brewed” soy sauce and natto are acceptable.
Enough food for thought? I understand that this is a lot to swallow. For today, we’ll limit ourselves to these top four, though there are further common food sensitivities and additives to discuss.
Making these changes will drastically reduce the risk of cancer and heart disease. Expect increased energy and mental clarity after a week. “Come to the dark side – we have cookies” – all well and good, but we can instead make cookies that do not destroy our health and happiness, nor drag down our vitality. Is the additional awareness and effort worthwhile? For me personally, having experienced both, the answer is yes.
Interestingly, these were all non-issues until the disastrous drive towards industrial, high-profit-margin agribusiness. I wish our food sources were trustworthy, but we are instead forced to choose between convenient, low-cost (or rather: the true costs have been externalized/hidden) toxic sludge, or more mindfulness and quality. What will it be?
Summary: We’re shooting ourselves in the foot with sleep debt and accumulated bugs
Action needed: avoid all-nighters, get 8 hours of sleep, fix bugs sooner
Up, periscope! The time before a trade show is of course a busy one. Let’s look at the topic of productivity. There is a great article on why “crunch mode” (working longer hours) doesn’t work. Although written from the experience and perspective of a software manager, its lessons can be applied to just about any salaried job. I recommend reading it in full; of its excellent points, we will cover two (sleep and bugs) in more detail.
- Total (!) sustained work output is highest at 40 hours / week, as found by numerous industry-conducted studies;
- Sleep matters, even one additional hour;
- Longer work times can only yield short-term gains for a few weeks;
- Implementation errors can cause NEGATIVE productivity. 
In my opinion, the root cause is that we humans get tired: “fatigue is arguably one of the most persistent threats to mission success during sustained or continuous operations”. The armed forces have therefore conducted studies on the extent of the problem. Ever wondered how drone aircraft are operated? Shockingly, “40% of the study sample reported a moderate to high likelihood of falling asleep [..] while operating a weaponized, remotely piloted aircraft”.
In a systematic study of prolonged sleep deprivation, measured performance on reaction time, logical thought and vigilance drops by 20-30% within 18 hours and by 50-60% within 42 hours. Similar results are observed for self-reported fatigue, negative mood and sleepiness.
The only conclusion that can be drawn is that all-nighters are silly, self-defeating mistakes, yet they persist in the form of hackathons.
Let’s move on to the more common case of chronic sleep restriction, in which we get too little sleep per night over long durations. In a more recent study, groups whose sleep was restricted to 3, 5 or 7 hours/night showed progressively slower reaction times: the less sleep, the larger the decline. Those allowed to sleep 9 hours experienced no such decline. Even after only a week of 7-hour nights plus 3 recovery nights, performance remained 10% lower than in the 9-hour control group. Unfortunately, only the 3- and 5-hour groups reported higher subjective sleepiness.
My takeaway is that 7 hours of sleep are not enough for most people, that there are measurable consequences, and that we may not be able to notice them.
Bugs, bugs, bugs
The second point concerns bugs – design/implementation mistakes will always be made (though more frequently when tired, per the above). How we deal with them makes a big difference.
Microsoft apparently learned this with the first version of Word for Windows, as reported by Joel Spolsky.  Programmers were working very long hours (see above), but throughout it all the managers stuck to an unrealistic schedule. This encouraged rushing out incomplete and shoddy code, knowing that testers would find, puzzle over and report the missing behavior. However, the problems were already known and would have to be fixed before shipping anyway, so all this did was waste everyone’s time. Worse, the schedule devolved from a reliable and accurate instrument to a “checklist of features waiting to be turned into bugs”. After major stress and a much-delayed release, Microsoft did some serious soul-searching and realized this “infinite defects methodology” is unworkable. Instead, priority would be placed on fixing known bugs before writing any new code. This has several advantages:
- fixing bugs is faster/cheaper when the logic is still fresh in mind;
- the schedule becomes much more predictable (the time to fix an old bug varies wildly, and open bugs shouldn’t be carried forward as a burden on future schedule items);
- the product is always nearly ready to ship and can easily react to external circumstances.
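The rule behind these advantages can be sketched in a few lines. This is a hypothetical illustration (names and task lists are invented, not from Spolsky’s account): whenever we pick the next work item, any known bug outranks any new feature.

```python
# Minimal sketch of the "fix known bugs before writing new code" rule.
# Hypothetical names; real backlogs would carry priorities, owners, etc.

def next_task(bugs, features):
    """Return the next work item: any open bug outranks any new feature."""
    if bugs:
        return bugs[0]      # oldest known bug first, while the logic is fresh
    if features:
        return features[0]  # only with a clean bug list do we start new work
    return None             # nothing open: the product is ready to ship

# With one bug open, it is always picked before any feature.
assert next_task(["crash on save"], ["dark mode"]) == "crash on save"
assert next_task([], ["dark mode"]) == "dark mode"
assert next_task([], []) is None
```

The point of the sketch is the invariant it maintains: the set of known bugs stays near zero, so the remaining schedule consists only of new work, whose duration is far easier to estimate.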
Sounds pretty compelling. By contrast, a brief look at the horrible internal Word data structures lends support to the theory that the infinite defects methodology does not produce good results. That mistake has echoed through three decades. Let’s do better, by planning time into the schedule for bug-fixing and doing things right the first time.