Friday, April 27, 2018

Quantum Computing and Quants and the Turing Institute's mission

so people are afraid of quantum computing.
people should be far more afraid of today's algorithmic trading systems.
the world is run by a bunch of computer programs which have never been scrutinized - each one might be nonsense, and even if they are all individually correct, in combination they certainly create nonsense. The fiction that the world runs on some financial fantasy academic structure known as a market is a wonder to behold - essentially, a sustained collective delusion in the face of obvious massive structural objections (american exceptionalism: why is the dollar worth what it is, aside from military might? protected markets inside China; split-level economies like Brazil; and so on). All this is like the medieval world where the smith would charge one price to shoe the farmer's horse, and another (massively higher) price to shoe the Lord of the Manor's horse - why? and why didn't anyone run arbitrage on this? because society and people don't really mix with the idea of money at all well.

So a lot of work in machine learning and AI and finance purports to address problems like money laundering and fraud. And yet we live in a world where the whole operation of existing algorithms is based on a false belief: that they operate in a market. Many of them operate in the casino that is the stock market, which is even more divorced from reality than the rest of the economy. Here algorithms are in an arms race, and yet it is an odd arms race: unlike warfare on conventional battlefields, where we can pick apart the guns and planes of the other side from time to time and take their ideas, or at least reverse engineer them, the algorithmic race in the stock market is too complex and too fast to allow this - code interacts only via the symptomatic (observed) behaviour of the system, and rarely if ever directly with other code. Most weird.

So the real first duty of ML&AI in the world of finance should be to expose and fix these structural problems - first, to model the world's economy properly (e.g. at least as well as people like Piketty, but more so, dynamically), and then to build a sound system for running investments without casinos like the stock market, but also without fantasies like Adam Smith's invisible hand.

So what has this to do with Quantum Computers[1]?

Well, QC promises to run some new algorithms in a new way - there aren't any signs of a full working QC piece of hardware yet, and there are precious few actual algorithms[2] so far, but one in particular has grabbed a lot of attention: the possibility of factorizing numbers super fast compared with good old-fashioned von Neumann computers (faster even than parallel and distributed vN machines). This is due to the qualitatively different way that QC hardware works (highly parallel exploration of state spaces, scaling exponentially with the number of qubits).
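For concreteness, here is the reduction Shor's algorithm exploits, in a Python sketch (my own illustration, not from the post): factoring n reduces to finding the multiplicative order of a random a mod n, and it is only the order-finding step that the quantum hardware accelerates - everything else below is classical.

```python
from math import gcd

def order(a, n):
    """Classical brute-force order finding: the smallest r with
    a**r == 1 (mod n). This is the step Shor's algorithm does in
    polynomial time on a quantum computer; classically it takes
    time exponential in the bit-length of n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n, a):
    """The (entirely classical) reduction from factoring to order
    finding, as used in Shor's algorithm."""
    if gcd(a, n) != 1:
        return gcd(a, n)       # lucky guess already shares a factor
    r = order(a, n)
    if r % 2 == 1:
        return None            # odd order: retry with another a
    f = gcd(pow(a, r // 2, n) - 1, n)
    return None if f in (1, n) else f

print(factor_via_order(15, 7))  # → 3
```

The brute-force `order` loop is exactly what blows up classically: for a 2048-bit RSA modulus it is hopeless, which is why the quantum speed-up of that one subroutine matters so much.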

So what does this threaten?

Well, it threatens to break cryptography, which means our privacy technologies for storing and transmitting (and possibly even securely processing[3]) data are at risk. Bad guys (or just curious people) will be able to see our secrets.

So two thoughts
A) why can't we just devise new quantum-proof encryption algorithms, which just moves the arms race along in the usual way (million-bit keys, for example, or something really new we haven't thought of yet)? Then we are back to the same old normal world where most data breaches happen because of social engineering or stupidity and self-inflicted wounds (minister leaves unencrypted USB stick on the bus).
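For symmetric ciphers, at least, the arms race really does just move along: Grover's search gives only a quadratic quantum speed-up, so doubling the key length restores the old margin. A back-of-envelope sketch (assuming an ideal cipher and Grover as the best quantum attack):

```python
def effective_security_bits(key_bits, quantum=False):
    """Rule-of-thumb security margin of an ideal symmetric cipher.
    Grover's quadratic speed-up halves the effective bits, so
    doubling the key length restores the classical margin."""
    return key_bits // 2 if quantum else key_bits

print(effective_security_bits(128, quantum=True))   # → 64  (AES-128 weakened)
print(effective_security_bits(256, quantum=True))   # → 128 (AES-256 still comfortable)
```

Note this reasoning does not rescue public-key schemes built on factoring or discrete logs: Shor breaks those outright, so there the fix has to be structurally new maths (lattices and friends), not just bigger keys.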
B) Maybe we get more cautious as a whole and just don't send stuff around so glibly, or provide remote access to our computers so readily. Maybe access control and authentication and just implementing least privilege properly could work most of the time, and the whole idea of crypto saving us was a chimera and a delusion, just like the whole idea of the market was a snare and a delusion?

my 2 q-cents.

j.
1. not to be confused with quantum communication, where we use entanglement just to detect eavesdroppers - a perfectly sound, existing tech with a very narrow (point-to-point, not end-to-end) domain of usefulness.
2. shor's algo, for example. one puzzling thing is that we had hundreds of algorithms for von-neumann-style computers before we had any working hardware. why is it so hard to conceive of algorithms for QC? it seems like a poor match for how humans are able to express methods for solving problems (which are many and varied, but don't seem to fit ensemble/state-space exploration, except perhaps in MCMC:-)
3. e.g. homomorphic crypto - possibly also at risk from QC, although re-applying ideas like garbled circuits to a QC machine shouldn't be too hard:-)

Tuesday, March 27, 2018

How science progresses - falsifiable, probably or paradigm shift, likely?

Reading Staley's excellent introduction to the philosophy of science, I was reminded of reading Popper's Objective Knowledge back in the 1970s. But now I'm a recovering Bayesian, immersed in social-science explanations like Kuhn's The Structure of Scientific Revolutions and in the whole idea of funding/groupthink/paradigms, and I'm convinced we don't have a good basis for choosing the right description of the process (or for classifying best practice) until we study the past, both its pre- and post- states. I'm thinking that people choose to run occam/popper after they intuit a new paradigm shift (e.g. the copernican model of the planets), and use some confidence model to decide, when the new theory has objectors, that the objectors are outliers, whereas the old outliers the new theory explains were more important than the new outliers it creates - of course, the new theory can still be wrong, but the smart money is that it isn't...

Tuesday, February 06, 2018

what if you were the only real person in the world

Anyone read Theodore Sturgeon's fabulous short story It Wasn't Syzygy?

Trying to wean people off facebook by creating an alternative (e.g. advert-free, subscription-based, but open to links to other platforms) system, everyone always starts by saying "you can't beat the network effect". so at what scale does this network effect magically become unbeatable? for example, the web has beaten TV, even though TV had a billion users. Metcalfe claimed the value was n-squared; others have toned that down to n log(n), but i think both ignore the _negative_ contribution from spam/phish/troll/advert/attention-grabbing, which inevitably grows with the network too, and usually, over time, faster in the end. so here's my proposal anyhow: we invite you to our new net which has "everyone" in your network on it, but initially, your friends are all just bots emulating your real friends, so they make you feel at home there. now you tell your real friends about your safe new, ad-free social net, and as they join, they replace the avatar/bot versions of themselves (a bit like the opposite of the stepford wives). oh, did I forget to tell you? we already did it. No, really. We didn't have to do it - they are doing it to themselves - c.f. Tim Wu's fine book, The Attention Merchants...
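That negative contribution is easy to make concrete. A toy model (mine, purely illustrative): let positive value grow like the tempered n log(n) estimate, and let the spam/troll/advert term grow like n², since every pair of users is a potential abuse channel - then value rises, peaks, and eventually goes negative as the network grows.

```python
import math

def network_value(n, k_pos=1.0, k_neg=0.001):
    """Toy model: positive value grows like n*log(n) (the tempered
    Metcalfe estimate); the negative spam/troll/attention-grabbing
    term grows like n**2, since every pair of users is a potential
    abuse channel. The constants are arbitrary illustrations."""
    return k_pos * n * math.log(n) - k_neg * n * n

sizes = [10, 100, 1000, 10_000]
values = [network_value(n) for n in sizes]
# value climbs at first, then the n**2 abuse term wins and it goes negative
```

Whatever the constants, any quadratic negative term eventually overtakes n log(n) - which is one way to say the network effect is not magically unbeatable at scale; scale itself is what poisons it.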

+
so in fact we can model this from the fact that the network is directional, and end points (humans) take more time to create new content than to consume (new-to-them) content - so even if we aren't all couch potatoes, this asymmetry between creation and consumption means that the network will tip from peer-to-peer to being dominated by a small number of producers and a large number of consumers - the cost of creation will drive the quality of creation down, but the quantity up (to keep it new - well known to pornographers, for example - c.f. https://yourbrainonporn.com/)
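One way to see the tipping is a rich-get-richer sketch (my own toy simulation, not a serious model): if each new post's author is drawn in proportion to their existing visibility - because consuming is cheap and creating is dear, attention chases already-visible producers - then production concentrates in a small head of users.

```python
import random

random.seed(1)

def producer_concentration(users=200, posts=5000):
    """Preferential-attachment sketch: each new post's author is drawn
    in proportion to their existing output + 1 (rich get richer),
    standing in for cheap consumption chasing already-visible
    producers. Returns the share of all posts made by the top 10%
    of users."""
    counts = [0] * users
    for _ in range(posts):
        weights = [c + 1 for c in counts]          # rich get richer
        author = random.choices(range(users), weights=weights)[0]
        counts[author] += 1
    counts.sort(reverse=True)
    return sum(counts[: users // 10]) / posts

print(producer_concentration())  # well above the 0.10 a uniform network would give
```

Even with identical users and no difference in talent, the top 10% end up producing roughly a third of everything - the asymmetry alone does the tipping.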

Tuesday, October 31, 2017

declining big data, colour me rimbaud

first you beg for data
then you bag the data
then you de-bug the data
then you get bog-ged down in the data
finally, you big up the data

so what's new in this, i beseech you from the depth of my vowels?

Monday, June 26, 2017

appear to peer - ideas for glastonbury from 2017

so, standing in the middle of a very large field surrounded by 200,000 people, but within about 100 people's handshakes of a bar, why not build a massive p2p version of uber for beer? you register, and then people literally pass beer across to you and you pass money back..... you'd need a trust/reputation system - there'd be some spillage.... but that's true anyway (I got wrong change at least 3 times at the bar the traditional way)

the world's first firechat-style beer-to-beer network.....


could also work for snack deliveries...and recycling

meanwhile, in the traditional Real Life, observing someone walk from the Village Pub to the centre of the crowd in front of the Pyramid (watching The National, if you want to know) carrying 2 pints + 2 plates of fine ethnic food stacked high, narrowly avoiding many scurrying people, we are a Very Long Way Away Indeed from self-driving AI robots navigating a space this complex & dynamic.

if you care about music, what was good? most stuff, like Thundercat, Joseph, the Lemon Twigs, and some oldies like Barry Gibb and Chic, and a blistering opening set from the Pretenders, with la Hynde in excellent voice. Radiohead? Nah, a bit meh, really. Kris Kristofferson (81) charming, but frail. The aforesaid National? Very Good Indeed. Beyond all possible description? Father John Misty and London Grammar - both of them made time. stand. still. loads of good comedy, politics, amusing high-wire acts & lessons. and a very very chill mood (helped by fairly fine weather almost the entire time!)

Friday, April 28, 2017

unfairness in automated decision making in society.

reading this book about the misuse of maths/stats recently, i think we can go further in condemning the inappropriate approach taken in some justice systems to deciding whether a guilty person receives a custodial sentence or not.

The purpose of locking someone up (and other stronger sentences) is complex - it can be to act as a disincentive to others; it can be to protect the public from that person re-offending; it could be a form of societal revenge; and it might (rarely) be an opportunity to re-habilitate the offender.

So we have a Bayesian belief system in action, and we have a feedback loop.  But we better be really careful about i) the sample of inputs to the system and ii) the sample of outputs....and not forget these are humans, and capable of relatively complex and highly adaptive behaviours.

So what could be wrong with the input? (sigh, where to start) -
people who commit crimes are drawn from a subset of society, but people who are caught are drawn from a biased subset - firstly, they're probably less well educated, or dumber, or both, because they got caught. secondly, they're probably from a socially disadvantaged group (e.g. a racial minority).
people who are found guilty are also subject to selection bias (and people who get away with it are party to survivor bias too) - juries have reinforced the bias already present in the chance of being caught.

people who are sentenced acquire new criminal skills - this may make them less likely to get caught if they are just poor, but more likely if they are dumb.

So there I count at least 4 ways that a decision system that looked at re-offending rates, and at properties of the person found guilty, would be building in positive feedback that will lead to more and more people being incarcerated, with less and less justification.
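The positive feedback is easy to demonstrate with a toy simulation (purely illustrative, not fitted to any real data): give two groups identical true offending rates, start one with a higher chance of being caught, and let policing attention follow the caught-offender statistics each round.

```python
def simulate(rounds=10):
    """Toy positive-feedback loop: both groups offend at the same true
    rate; group B merely starts with a higher chance of being caught.
    Each round, policing attention is redistributed in proportion to
    each group's share of *caught* offenders, so the initial bias
    feeds on itself."""
    offence = 0.1                     # identical true offending rate
    catch = {"A": 0.10, "B": 0.20}    # initial policing bias is the only difference
    for _ in range(rounds):
        caught = {g: offence * p for g, p in catch.items()}
        total = sum(caught.values())
        # attention follows the caught-offender statistics (the feedback)
        catch = {g: min(0.95, catch[g] * (0.5 + caught[g] / total))
                 for g in catch}
    return catch

print(simulate())
```

Group B's catch rate climbs to the cap while group A's collapses, even though nothing about the underlying behaviour differs - the initial bias is the only input, which is exactly the runaway a model trained on caught/re-offending data would launder into "objective" risk scores.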

occasionally, external changes (accidental natural experiments) perturb the system and make this more obvious. in the documentary film The House I Live In, the absurd war on drugs is shown to be massively counter-productive. near the end, the huge bias this has set up against african americans starts to wane, simply because of the move of america's poor white working class into making and consuming crystal meth (so brilliantly portrayed in Breaking Bad) - suddenly, the odds stacked against one group, multiplied by reinforced prejudice 3 or 4 times over (indeed, one more time for the 3-strikes rule), hit lots of "trailer trash"....

An interesting research task would be to run a model-inference tool on the data and see how many latent causes of bias we can find - maybe my 3, 4 or 5 is not enough.

truly the world is broken, when it comes to evidence based decision making!

Saturday, March 25, 2017

of the internet, for the internet, by the internet

what have we wrought?

i don't think it is about the echo chamber, bubble, or
faddish claims about fake news and alternative facts.

nor do i accept that the internet offers a zero-cost channel - the internet switched the value propositions around by reducing cost for the sender, but for some kinds of content, it simply moves the cost somewhere else:

1/ to the receiver (spam/adverts/recommendations, whatever you call them)
2/ to the content creator (for music, film, games etc)
3/ to the regulator (to ensure neutrality, control monopolistic tendencies etc)
4/ to the service provider, as real competition drives profits to truly marginal
5/ somewhere we haven't thought of yet

so what we didn't think about was how to design robust games to allow people to design and choose appropriate system architectures for sustainable worlds, whether journalism (that doesn't let the vocal extreme minority control the agenda) or creative industries (so original work is rewarded), or peer-economic structures like uber, airbnb, etc that treat the means of production/labour force fairly...

hard times

[yes, i know this is sort of a version of jaron lanier's stuff, but it is becoming more and more evident that the complaint is right and that we need an actual fix - that is the hard problem: not identifying the cause, but designing the solution]

About Me

misery me, there is a floccinaucinihilipilification (*) of chrono-synclastic infundibula in these parts and I must therefore refer you to frank zappa instead, and go home