
Learning algorithms, data silos and their hidden impact on society

The dream of every AI professional is to work in an environment with lots of data. More data results in better models and, therefore, a better world. ML algorithms optimise their predictive models and characterise the variation within a data set. So far so good. But then you start to think about the relation between algorithmic optimisation and its impact on people’s lives. Am I optimising for the right outcome? What is the right outcome?
Some examples:

  • Education: Should your algorithm optimise for the obtained CITO score or the social and emotional well-being of a pupil?
  • Healthcare: How can you optimise the treatment duration based on data? Should you?
  • Finance: Is it possible to recommend the most ideal mortgage for an individual based on similar cases in the past?
  • Government: What constitutes good service? Do you prefer to get a short first response quickly or a slower, but more comprehensive response? Is it possible to classify this automatically?

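The mortgage question above hints at a classic technique: recommending by analogy with similar past cases, i.e. nearest-neighbour search. As a minimal sketch (all figures, product names, and the feature choice are made up for illustration, and a real system would scale its features and use far richer data):

```python
import math

# Hypothetical historical cases: (income, loan_amount, term_years) -> product chosen.
past_cases = [
    ((45_000, 180_000, 30), "fixed_30y"),
    ((80_000, 320_000, 30), "fixed_30y"),
    ((60_000, 150_000, 15), "fixed_15y"),
    ((55_000, 140_000, 15), "fixed_15y"),
    ((90_000, 400_000, 30), "variable"),
]

def recommend(profile, k=3):
    """Return the most common product among the k most similar past cases."""
    # Plain Euclidean distance; a real system would normalise features first,
    # since loan amount dominates the other dimensions here.
    nearest = sorted(past_cases, key=lambda case: math.dist(case[0], profile))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(recommend((58_000, 145_000, 15)))  # -> "fixed_15y" with this toy data
```

Even in this toy form, the dilemma is visible: "most similar past case" optimises for conformity with historical decisions, which may simply reproduce historical bias rather than find the mortgage that is truly best for the individual.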
The dilemma between privacy and social importance often plays a role: how much private information are you willing to give up to improve society as a whole? The current trend is that data is ever better protected and secured, but this is at odds with improving processes based on that data. Given the sensitivity of the data, mathematical guarantees regarding anonymity are essential. In some areas this is the primary barrier to adoption, and it determines whether data sources may be combined and mined at all. Last but not least, does it make a difference whether the data is processed by actual people or by an impersonal ‘algorithm’?
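One well-known family of such mathematical guarantees is differential privacy, where calibrated noise is added to query results so that no single individual's presence in the data can be inferred. A minimal sketch of the Laplace mechanism for a counting query (the epsilon value and the records are illustrative assumptions, not a production-ready implementation):

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon=0.5):
    """Answer 'how many records satisfy predicate?' with epsilon-DP.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon gives
    an epsilon-differentially-private answer.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy example: count patients over 65 without revealing any individual.
ages = [34, 71, 68, 45, 80, 52, 66]
print(dp_count(ages, lambda age: age > 65))  # noisy answer near the true 4
```

Smaller epsilon means stronger privacy but noisier answers, which is exactly the trade-off between protecting individuals and improving processes based on data.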

I would like to take you into the wonderful world of learning algorithms, data silos and their unseen impact on our daily lives.