
Who owns the learning?

Submitted by Gianfranco Cecconi
on Wed, 07/01/2020 - 15:40

I've recently had the opportunity to interview Chris O'Neil for the Support Centre for Data Sharing. He is IBM's associate general counsel for intellectual property law, specialising in data-related matters. We discussed many things, but one in particular stuck in my mind.

Consider my work as a management consultant. My profession is more than a century old, and Arthur D. Little, founded in 1886, is generally recognised as the first firm in the sector. In my job, I support my clients with their issues, solve them, and then move on to the next client. Every time, I need access to my clients' confidential information, which I keep secret, but I also learn something myself. I become more experienced, knowledgeable, skilled, and faster.

Well, Chris told me, artificial intelligences do the same. Services like IBM's Watson work on one client's confidential data, then move on to the next, and every time they become smarter.

If we have learned to accept that for human consultants, if we are OK with the learning "sticking" to them, why should we treat our new friends, the robot consultants, any differently? :-)

Are you comfortable with this idea? What do you reckon?

 

G.

Very interesting point, G!

What are the risks of an algorithm getting smarter, faster, and better? Are they similar to those of a consultant getting smarter, faster, and better? Learning and development are, after all, the very basis of creating value. Enabling humans and machines to learn is much of our focus when we facilitate data sharing.

But is it really so simple? We trust human consultants to walk the fine line between abstracting what they have learned to provide value to another party and sharing what they found out in a way that harms the subject and/or source of that information. If they don't, it hurts their business when word gets out.

We also trust the humans from that other party not to viciously try to extract information not meant for them from the human consultants. Doing so would hurt the relationship by calling their integrity, and hence the output, into question.

However, who has a bad conscience when trying to get secret information out of a machine, as long as it is anonymous? We have seen what the cover of anonymity and digital distance does to people's morals in forums and on social media channels. As long as visibility and liability are unclear, our trust is naturally impaired, even if we agree to the concept of applying what we learn, even for our machines.

Can only time tell, or can we facilitate trust-building somehow by increasing transparency and resolving liability?

E

You raise several good points as well, Esther!

To respond to your first point, I think we need to shift our mentality away from asking what the risks of algorithms are and towards what we can do when they are smarter, faster, and better. That algorithms will keep growing and play a larger role in society is inevitable given the continuing momentum of Industry 4.0. What we can do is look at how they are developed and used, and at how we as professionals and consultants (and society at large) can develop and 'learn' with them.

In theory, the output from the algorithms should be impartial. Data is neutral; it is merely a statement of facts and statistics. What I am less inclined to trust are the choice of input data and the intentions, or code, behind the algorithms. The data scientists and developers behind the algorithms need to 'learn' as the algorithms become smarter, more accurate, and faster, and they need to be mindful of the type of data they put in and the implications it can have for their analysis.

To facilitate our trust-building in data sharing more specifically, for both the consultants or data scientists and the algorithms, we first need transparency in the decisions being made. This includes, but is not limited to, knowing what type of data is being used, the source of the data, how and for what purpose the data is being used, and who the raw/filtered data and the results are being shared with.

What does this mean for us, though? How can we as citizens hold those processing the data and creating the algorithms to account, to facilitate transparency and build trust in data sharing?

Eline