What happens is that the more popular the code is, the more likely it becomes a target. Operating Systems are a perfect example: macOS, Windows, iOS, Android are all malicious-actor targets precisely because of their ubiquity. Open Source tried to mitigate that, but it underestimated the greed of criminals and overestimated the common sense of the average user.
AI is quietly rewriting computing's utility. We're probably still at the "left shoulder" of the Development Curve; everything is still in its infancy. I've been involved with a couple of commercial AI implementations with IBM Watson, and it's impressive and laughable at the same time. But the ability of the AI to LEARN is both amazing and frightening. All it needs is more data. Granted, not raw data - it has to be somewhat structured - but still, the ability to draw very accurate inferences about the current situation is VERY GOOD now. What needs work is the ability to FORECAST, but that is getting better every day, literally.
So the AI, plus all the data we leak (willingly or not), is part of a larger convergence where our information is not only persistent, but is constantly being evaluated and, in some cases, used to make decisions for us - at a personal level and, intimidatingly to some, at a socio-economic and political level.
Given I have children ranging from one who's about to be married to one who just got out of elementary school, the challenge for them will be to understand how to control their data leakage while still moving society forward - because, like it or not, the engines of change cannot be stopped, and that change will involve more and more data use and re-use. How that data is used, to make the world better, is part of their burden as well as ours.