Given the increasing power and popularity of algorithms, how can we ensure that computers act in the best interest of humanity?
Algorithms specialize in prediction: how financial markets will move, whether a borrower will repay a loan on time or incur penalties, which users are likely to commit fraud, what kind of advertisement will attract a particular person, and much more. Let’s look at this from a more social angle. Almost every one of us uses Facebook these days, and we see plenty of unwanted yet relevant ads on our timelines. This is done with the help of predictive algorithms that analyze our minute-by-minute activity and display relevant advertisements. As much as this helps increase Facebook engagement, it can also fuel outrage and violence. How do we ensure that the patterns being identified do not lead to flash crashes in the financial sector or outrage among people in general?
How do we make sure that an algorithm acts fairly? To begin with, we need to define what ‘fair’ means. Exploring the various statistical definitions in this field raises a few questions. Should a risk profiler, for example, treat all groups equally, regardless of their other differences? Should it acknowledge differences, but focus on achieving similar error rates across groups? Should it correct for previous wrongs? Are some definitions good in the short term but harmful in the long term?
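To make these competing definitions concrete, here is a minimal sketch of two of them on made-up data: “equal treatment” measured as equal approval rates, versus “similar error rates” measured as similar false-negative rates. All names and numbers below are illustrative, not from any real dataset.

```python
# Two common statistical fairness criteria, computed on toy lending data.
# Decisions: 1 = approved, 0 = denied. Outcomes: 1 = repaid, 0 = defaulted.

def selection_rate(decisions):
    """Fraction of applicants approved (basis of 'demographic parity')."""
    return sum(decisions) / len(decisions)

def false_negative_rate(decisions, outcomes):
    """Among applicants who would have repaid, the fraction that were denied
    (basis of 'equalized error rates')."""
    repayer_decisions = [d for d, o in zip(decisions, outcomes) if o == 1]
    return repayer_decisions.count(0) / len(repayer_decisions)

# Hypothetical decisions and true outcomes for two groups of applicants.
group_a_decisions = [1, 1, 0, 1, 0, 1]
group_a_outcomes  = [1, 1, 0, 1, 1, 0]
group_b_decisions = [1, 0, 0, 1, 0, 0]
group_b_outcomes  = [1, 1, 0, 1, 0, 0]

# Approval rates differ (2/3 vs 1/3), so demographic parity is violated...
print(selection_rate(group_a_decisions), selection_rate(group_b_decisions))
# ...while false-negative rates are much closer (0.25 vs 1/3).
print(false_negative_rate(group_a_decisions, group_a_outcomes),
      false_negative_rate(group_b_decisions, group_b_outcomes))
```

The point of the sketch is that the two criteria disagree on the same data: a decision rule can look nearly fair by one definition and clearly unfair by the other, which is why choosing the definition matters.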
The UC Berkeley computer scientist Moritz Hardt and colleagues built a model to explore how different definitions of fairness play out in lending and credit scoring. They found that in some cases, processes put in place to protect minority groups may be harmful in the long run. In particular, holding two groups to different decision rules can produce results that are inaccurate for each group and damaging over time.
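A toy illustration of the idea (not the Berkeley model itself, and with entirely made-up scores) is a lender that lowers the approval threshold for one group to equalize approval numbers: the extra approvals include applicants who will default, which in a real credit system would then drag that group’s scores down over time.

```python
# Hypothetical credit scores for two groups (higher = more likely to repay).
group_a = [700, 680, 650, 620, 600, 580]
group_b = [660, 630, 600, 570, 540, 510]

def repays(score):
    # Stylized assumption for the sketch: applicants at 600+ repay; others default.
    return score >= 600

def default_rate(scores, threshold):
    """Default rate among the applicants a threshold rule would approve."""
    approved = [s for s in scores if s >= threshold]
    if not approved:
        return 0.0
    return sum(not repays(s) for s in approved) / len(approved)

# One shared threshold: group B gets fewer approvals, but no approved defaults.
print(default_rate(group_a, 600), default_rate(group_b, 600))  # 0.0 0.0
# Lowering group B's threshold to match group A's approval count (5 loans)
# approves two applicants who default: a 40% default rate for group B.
print(default_rate(group_b, 540))  # 0.4
```

Under these toy assumptions, the “protective” intervention is exactly what creates the worse outcome for the protected group, which is the kind of long-run effect the Berkeley work warns about.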
The Berkeley findings challenge the tech giants and others to take responsibility for the effects that prediction can have on society. That certainly won’t be easy, especially with companies like Google and Facebook generating revenue by finding ways to make people look at ads.
Part of the issue is that algorithms arrive at predictions and decisions through unsupervised and predictive learning, where a system learns by observation and then begins to make predictions. This is something we humans do naturally: we didn’t have to go to a university to learn how to use a pen. Computers still can’t do this; advances in machine learning and AI are still driven largely by supervised learning. As long as developers can supervise the learning, AI can be kept intelligent and careful enough not to harm society.
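What “supervised” means here can be shown in a few lines: the developer supplies labeled examples, and the model only ever generalizes from those human-provided labels. This is a minimal sketch with invented data, using a one-nearest-neighbor rule as the simplest possible supervised learner.

```python
# Minimal supervised learning: predictions come entirely from labeled examples.

def nearest_neighbor_predict(train, query):
    """1-nearest-neighbor: return the label of the closest training point."""
    feature, label = min(train, key=lambda pair: abs(pair[0] - query))
    return label

# Human-labeled training data: (hours of daily activity, clicked_ad: 1/0).
train = [(0.5, 0), (1.0, 0), (3.0, 1), (4.0, 1)]

print(nearest_neighbor_predict(train, 3.5))  # nearest labeled point -> 1
print(nearest_neighbor_predict(train, 0.7))  # nearest labeled point -> 0
```

Because every prediction traces back to a label a person chose, the developer retains a point of control, which is the supervision the paragraph above is relying on.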
Having said that, there has been considerable progress on algorithmic prediction. It isn’t something that will just happen overnight; it will require patience and more engineering. Let’s hope it won’t take a societal flash crash to convey a sense of urgency.