Science in policy, an engineer’s take

I went to a talk on Thursday evening hosted by CSaP. It was Mark Henderson talking about his new book, The Geek Manifesto. The talk was interesting and Mark had lots of good points to make about science in government policy (specifically, the lack of science in policy). His basic points boiled down to:

  1. Science should be better at promoting itself in government policy.
  2. Policy creation should be more scientific in its process.

At the end of the talk I tried to ask a question picking up on something that was raised during the discussion. Mark made the point that science should just be one of the factors that is invoked during policy making, the others being such things as voter wishes, human rights, ethics etc.

In the long and rambling question, I tried and failed to articulate the point that science should be the overarching framework for policy, and that these other factors are just parameters within that framework. Science should not be subservient to other points, but these other points need to be made to fit within the scientific policy framework. The question was accompanied by lots of shaking heads and a response that reiterated Mark’s original point. This got me thinking about how better to explain what I am talking about.

The basic point is that to talk about science as being one of several factors in decision making is rather missing the point about science. It is the only reliable way we have of building knowledge about the world. For this reason, it makes perfect sense that any mechanism that attempts to define the world needs to do so from an exclusively scientific perspective. Any attempts to do otherwise are misguided at best and dishonest at worst. Science makes no claims about the nature of the knowledge it discovers, and it makes no claim about the tractability of the discovery process.

The core of it though, and I think the point where I got lost initially, is that policy making isn’t really the acquisition of knowledge (and hence science) at all; at its heart, it is engineering. What we have is a massively multivariate optimisation problem with some poorly defined cost function. It is this cost function that needs to be debated, and it encapsulates all those issues that were argued need to be considered in parallel to science.

The dimensions of the optimisation problem correspond to policy parameters – laws, taxation, incentives etc. The cost function then reflects how those laws are translated into real world consequences, giving a metric of “goodness”. This means that ethics, maintenance of democracy, not locking everyone up and so on are necessarily encapsulated in the cost function. I would argue this cost function is something akin to a utilitarian-style total happiness metric.

Of course, such a cost function is inordinately difficult both to define and to measure in the most general sense, but I expect the problem can be considered as a whole set of subdomains with reasonable separation – e.g. the economy, social welfare, foreign policy etc.
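To make the framing concrete, here is a deliberately toy sketch of what “optimising a cost function over policy parameters” means. Every name and number in it is invented for illustration – the parameters, the weights and the happiness model are placeholders rather than any real policy model; the only point is the structure: policy parameters in, a single scalar measure of goodness out.

    import numpy as np
    from scipy.optimize import minimize

    def cost(params):
        # Two made-up policy parameters: a tax rate and a welfare spend.
        tax_rate, welfare_spend = params
        economy = -(tax_rate - 0.3) ** 2        # toy model: output peaks at a 30% rate
        welfare = np.log1p(welfare_spend)       # diminishing returns on spending
        overspend = max(0.0, welfare_spend - 10 * tax_rate) ** 2  # penalise deficits
        # A single scalar "total happiness"; negated because we minimise.
        return -(economy + welfare - overspend)

    # Find the policy parameters that optimise the (toy) cost function.
    result = minimize(cost, x0=[0.2, 1.0], bounds=[(0.0, 1.0), (0.0, 10.0)])
    print(result.x)

The interesting (and political) part is, of course, entirely inside cost(); the optimisation itself is routine.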

Science then does have something to say about measuring the cost function. A whole discipline will arise around honestly quantifying the impact of a given policy change. Done properly, a body of knowledge will slowly but surely develop around how certain outcomes can be achieved through policy change, along with the skills and knowledge needed to measure and trial policy changes.

Any attempt to, say, trample on human rights is prevented by a huge negative impact on the cost function.

Of course, the real problem, and the essence I think of the political problem that underpins a lack of scientific scrutiny in policy making, is that there is a strong political will to not ever define the cost function carefully. Never properly defining what you’re trying to achieve is one sure way of stopping people telling you you haven’t achieved it.

This brings me to my final point – wouldn’t it be wonderful if politics were no longer about defining policy, but about defining that cost function? Civil servants could then go away and optimise policy against it, drawing in new research and optimisation techniques as they become available.

I discussed this at length with my housemate and he made the point that the only cost function that matters is one’s own personal cost function. Whether politicians and civil servants can escape this trap would dictate whether such a utopia is possible.

Posted in Engineering, Life | 5 Comments

The Wisdom of FFTW

Since the last post on my python wrappers for FFTW, I’ve advanced the code substantially.

It now supports all the basic transforms using the same unified pythonic interface, in which the transform is inferred from the dtype. In addition, I’ve added support for importing and exporting wisdom. Wisdom is FFTW’s way of remembering plans that have already been created, thereby speeding up the planning process in future. In particular, slow planning routines like FFTW_MEASURE will benefit on the first run if the wisdom can be loaded from disk.

The wisdom routines don’t actually write to disk at present. This is because the nice API feature of FFTW that makes this trivial wasn’t added until FFTW 3.3, which is not widely distributed yet. I’ve written the code for this, but commented it out for now. The wisdom is exported as a tuple of strings, which can be pickled and saved as necessary. I suppose the strings could be saved to disk directly too, but I’ve not tried this. There may be some problems with unicode conversions (the output from FFTW is not unicode), but I’m happy to be proven wrong on this.
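Assuming the wisdom functions are exposed at module level as export_wisdom() and import_wisdom() (check the docs linked below for the authoritative names), persisting wisdom between runs is then just a matter of pickling that tuple:

    import pickle
    import pyfftw

    # After building some plans, grab the accumulated wisdom
    # (a tuple of strings, one per precision) and pickle it to disk.
    with open('wisdom.pickle', 'wb') as f:
        pickle.dump(pyfftw.export_wisdom(), f)

    # On a later run, load it back before planning, so that slow
    # planners like FFTW_MEASURE can reuse the stored plans.
    with open('wisdom.pickle', 'rb') as f:
        pyfftw.import_wisdom(pickle.load(f))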

My next goal is to implement the numpy fft interface, making pyfftw a drop-in replacement for numpy.fft. The one small problem I’ve encountered so far is that numpy.fft will happily run over repeated axes, which FFTW doesn’t seem to like (at least, using my wrappers). I may just ignore this limitation – who is likely to use it anyway? (serious question!)
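For reference, this is the numpy behaviour in question – a repeated entry in axes just means the transform is applied again over that axis, so it can always be reproduced with successive calls anyway:

    import numpy as np

    a = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)

    # numpy applies the transform once per entry in axes, so a repeated
    # axis means the FFT is simply taken twice over that axis...
    repeated = np.fft.fftn(a, axes=(1, 1))

    # ...which is the same as two successive 1D FFTs along axis 1.
    assert np.allclose(repeated, np.fft.fft(np.fft.fft(a, axis=1), axis=1))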

As usual, code on github, release on python package index, and docs here.

Posted in Engineering, Programming | 9 Comments

The joys of Cython, Numpy, and a nice FFTW api

This is about my new FFTW python wrapper.

The FFT routines shipped with Numpy are rather slow and have been the performance bottleneck in my code for some time. Last week I decided I needed to move to FFTW for some of the new code I was writing, at least for the prototyping stage – FFTW is GPL, which limits its use when it comes to distribution (though it is possible to buy a license, and apparently the Intel Math Kernel Library uses the FFTW API, which means the code is more widely useful).

I looked at an existing set of python wrappers, but didn’t really like the interface. The issues I had with it were as follows:

  1. It carried over the requirement of FFTW that a different library is used for each data type, so a different interface was used for complex64, complex128 and complex256.
  2. It cannot handle arbitrary striding of arrays. This rather breaks the wonderful way in which Numpy can handle views into memory, in which sub-arrays can be created which look and work like a normal array, but the dimensions are not contiguous in memory.
  3. There didn’t seem to be a way to choose arbitrary axes over which to take the DFT. Numpy’s fftn handles this with an axes argument (which is just a list of axes). Both this and the striding issue above are illustrated in the short numpy sketch below.
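The last two points are plain numpy facts, so a quick sketch using only numpy shows the behaviour I want the wrappers to reproduce:

    import numpy as np

    a = np.random.randn(64, 64, 8) + 1j * np.random.randn(64, 64, 8)

    # A view onto every other row: same data and dtype, but the
    # strides are no longer contiguous in memory.
    view = a[::2]
    print(view.flags['C_CONTIGUOUS'])  # False

    # numpy's fftn happily transforms that strided view over an
    # arbitrary (here non-adjacent) set of axes.
    spectrum = np.fft.fftn(view, axes=(0, 2))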

Anyway, the upshot of my difficulties was that I decided to write my own set of wrappers. It also gave me a good little project for working with Cython, which I needed to know about for some other things.

I had the core of what I needed written in a day, solving the second two of the issues above. This was in no small part down to just how fantastically nice Numpy is, as well as the neat fit it has to the guru interface to FFTW. I can only assume that the other wrapper writers didn’t look in too much detail at that interface. Basically, there is a clear and simple translation to be made between the strides attribute of a Numpy array and the arguments to the FFTW guru planner. I actually got too confused by the ‘lesser’ advanced interface to do anything useful with it. I think the FFTW people are doing themselves a disservice by calling it the ‘guru’ interface – it just makes it sound hard!
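To give a flavour of that translation (an illustrative sketch, not the actual wrapper code): the guru planner describes each transformed dimension with a (length, input stride, output stride) triple in units of elements, and the non-transformed dimensions the same way as “how many” loop dimensions. Numpy hands you exactly that from shape and strides; the only wrinkle is that numpy strides are in bytes.

    import numpy as np

    def guru_dims(arr, axes):
        """Translate a numpy array's shape and strides into the
        (n, input_stride, output_stride) triples that FFTW's guru planner
        expects, assuming the output array has the same layout.
        Purely illustrative -- not the wrapper code itself."""
        itemsize = arr.itemsize  # numpy strides are in bytes, FFTW wants elements
        dims = [(arr.shape[ax], arr.strides[ax] // itemsize,
                 arr.strides[ax] // itemsize) for ax in axes]
        # Every axis not being transformed becomes a "how many" loop dimension.
        loop_axes = [ax for ax in range(arr.ndim) if ax not in axes]
        howmany_dims = [(arr.shape[ax], arr.strides[ax] // itemsize,
                         arr.strides[ax] // itemsize) for ax in loop_axes]
        return dims, howmany_dims

    a = np.empty((16, 32, 8), dtype='complex128')
    print(guru_dims(a[::2], axes=(1, 2)))  # works just as well on a strided view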

A bit more work later and I had the unified interface for all the complex Numpy types supported by FFTW (which happens to be all those supported by Numpy on my platform), as well as a pretty comprehensive test suite and documentation. So far, I only have the complex DFT enabled, but the code should be sufficiently flexible to extend easily to the real DFT and other FFTW routines.
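In case it helps to picture the interface, usage looks something like the following sketch (I’m assuming the pyfftw.FFTW class name and constructor arguments here; the docs linked below are the authoritative reference):

    import numpy as np
    import pyfftw

    # Single precision is inferred from the complex64 dtype; the same code
    # with complex128 arrays would use the double precision FFTW library.
    a = np.zeros((128, 128), dtype='complex64')
    b = np.zeros_like(a)

    # Plan a 2D transform over both axes (planning may overwrite a, so
    # fill it with real data only afterwards).
    fft_object = pyfftw.FFTW(a, b, axes=(0, 1))

    a[:] = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
    fft_object()  # execute the planned transform; the result lands in b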

The docs are here, the code is here, and the python package index page is here.

One final point… Cython is just wonderful. One can write in python, and C (ish) and python again all in the same file, and then compile it, and end up with a proper python module, all with fantastic distutils support. That is how extensions should be written. Why is anyone still using Matlab?

Edit: In case anyone was wondering, my crude benchmark puts FFTW about 5 times faster than the numpy fft functions.

Another edit: The wrappers are now somewhat more capable, supporting real transforms and multi-threaded mode.

Posted in Engineering, Programming | 45 Comments

How to peel a beetroot

After you’ve roasted it in foil for an hour, stick a fork in one end, hold it vertically up, and scrape it downwards with a teaspoon.

Posted in Engineering, Food, Life | Leave a comment

A Public Service Announcement on the Matter of the Tying of One’s Shoelaces

Earlier this week, my father was lamenting the fact that his boot laces keep coming undone. This is apparently a particular problem when the boot laces in question are under a pair of gaiters, but I personally think it must be mighty annoying at any time.

It turns out, after a swift inspection, that the man has been tying his laces incorrectly for the better part of 60 years. It reminded me of a similar situation during lab coffee in which a former colleague was expressing frustration over his new shoes (ostensibly the same as an old pair) with laces that came undone.

I here pass on that same information – with which I enlightened my dear father, my colleague, and about half the others present during that particular coffee break who were failing to tie their laces correctly – as a late Christmas present to any readers that may be about:

The loops of your laces in the tied knot should be parallel with the lace as it enters the knot (or balanced), and not at an angle (or unbalanced).

I came across this information at the marvellously geeky Ian’s Shoelace Site. The particular page in question is on slipping shoelaces, which will give you all the details and the photos describing what the problem is and how to get around it.

For all those who previously tied unbalanced shoelace knots, bask in your new-found delight of shoelaces that don’t come undone.

Posted in Life | 2 Comments

The Innovation Agency site launches…

The Innovation Agency website has now launched officially. Be delighted by its greenness and its wordiness (and also by its content!).

That is all.

Posted in KED, Life, The Innovation Agency | Leave a comment

What Push and Why Pull

There is an oft presented dogma in business that it’s a Bad Thing when a venture is “technology push” rather than “market pull”. The rationale behind this is that you should understand the market before you attempt to solve the problem with a given piece of technology. If a piece of technology is driven by a clear market opportunity, then that is “market pull” and so is a much better way to proceed than if you start with a piece of technology and attempt to find a market for it. As a general idea, it makes perfect sense.

I’ve always been uncomfortable, however, when the dichotomy is taken to its logical conclusion. Clearly some technologies, which are developed first and foremost as a technology, have excellent market opportunities, once they have been discovered or developed (think almost any disruptive technology). It’s hard to argue that these weren’t technology push, and that investigating the market wasn’t a sensible thing to do in the situation, so how would typical business thinking rationalise this? Probably with some woolly discussion of how the entrepreneur possessed some market insight that drove the technology, giving it an implicit market pull – certainly that’s how I’ve been rationalising it to myself.

I had my big insight during a discussion I had a couple of days ago with a co-partner in The Innovation Agency, Ian, in which we were planning a seminar we are running in several departments in the University of Cambridge. We are planning to structure the seminars around the idea of the Golden Circle, introduced by Simon Sinek during a TED talk.

It’s an excellent talk and well worth the time to watch, but, in a nutshell, the idea of the Golden Circle is that we can consider a venture diagrammatically as a series of concentric circles with a “why” at its core, surrounded by a “how” and concluding at the edge with a “what”. The thing that really innovative businesses, organisations and individuals have in common is that they consider their activities from the inside to the outside of the circle. That is, they first consider why they are doing what they do, then they consider how they are going to do it, and only then do they think about what it is they are going to do. Obviously, that’s somewhat of a simplification, but the idea is that why should be at the core of everything that is done. The why is the philosophy that drives everything and is the emotion behind the how and the what.

Conversely, many companies and organisations work the exact opposite way, thinking first of what they are doing, then perhaps how they are doing it, and, chances are, never arriving at the why. This means they can never fully engage with their customers on the emotional level at which decisions are made.

It was from this that my mental light bulb lit up regarding “technology push” and “market pull” ventures. The point is that the distinction is the wrong way to look at it. A much better way to think about it is as “what push” and “why pull”. This means that an opportunity might be technology push but still be driven by an overarching why.

This gives an interesting perspective on my current field of interest of commercialisation of academic research. Most scientific and technical academic research is strongly driven by a why – some real world question or problem that needs solving without any interest as to the how or what. Indeed, many an academic has failed by getting too hung up on the wrong what. It follows, therefore, that academic research and technology carries with it an implicit why. If that why can be paired with positive answers to the other whys of “why are you doing this?” and “why might this not work?” (which are always necessary, technology push or market pull), then it strikes me that the opportunity should be there.

The thing I take from this is that every piece of academic research should be investigated for its real world impact. That’s not to say that everything should be commercially exploited, but the why behind it should be pushed as far as it can go. For example, in the case of climate research, it’s important that as a society we understand and act upon the implications of the outcomes of the research, whatever they may be, because that satisfies the implicit why. Further to this, every academic funded through public money has a societal obligation to see this why pushed to its logical conclusion, so that we all may benefit.

The other interesting thing to think about is whether there are any market pull opportunities that aren’t why pull. I can’t immediately think of any, but there must be some…

Posted in Business, KED, The Innovation Agency | Leave a comment