I think we need to make it at minimum six figures per person for adequate protection. Once information is out there, there is no taking it back. Banks and employers will use all information against you. Electronic surveillance is just another form of "public HR".
sure someone could intentionally post shit online to fuck with these systems, but the creators have no remorse for such "people", as they are "degenerate" for trying to destroy such "high minded" systems.
If this leak had cost "6 figures" per person, as you advocate, the total would be somewhere around 4 billion euros. If the insurance company has to pay out billions of euros for a single leak, it's going to have to charge pretty hefty premiums for its client. In order to stay profitable, the client has to raise prices for their mental health services. If the end user was previously paying 100 euros to talk to someone about their mental health issues, maybe now they would have to pay 100 euros to talk + 200 euros to cover the insurance premium. Doesn't sound too good, does it?
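As a sanity check on the "around 4 billion euros" figure, here is a quick sketch. The ~40,000 victim count is an assumption (roughly what was publicly reported for this breach), and 100,000 EUR is just the low end of "6 figures":

```python
# Back-of-the-envelope liability for the "6 figures per person" proposal.
# Both inputs are illustrative assumptions, not official figures.
victims = 40_000              # rough number of affected patients (assumption)
payout_per_person = 100_000   # low end of "6 figures", in euros

total_liability = victims * payout_per_person
print(f"total payout: {total_liability:,} EUR")  # total payout: 4,000,000,000 EUR
```

So even at the minimum end of "6 figures", the insurer's exposure for one leak of this size is on the order of 4 billion euros.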
Now let's imagine what the insurance company does when it's about to get hit for 4 billion euros. Instead of paying out, it's going to hire an army of lawyers who are going to make a convincing argument that, actually, this was not a data leak at all, this was [something else not covered by the insurance agreement]. We've already seen this with all the "cybersecurity insurance" products, which are basically scams.
Many liability risks are not passed on to customers. Banks, for instance, if they used your logic, would charge you extra for their own incompetence if they were unusually frequent victims of fraud. What happens instead is that these industries filter for competence, as table stakes for participating in them.
> Many liability risks are not passed on to customers. Banks, for instance, if they used your logic, would charge you extra for their own incompetence if they were unusually frequent victims of fraud. What happens instead is that these industries filter for competence, as table stakes for participating in them.
Funny that you would choose banking as an example. Banks are in fact very frequently victims of fraud. Despite this, banks are (generally) profitable. Why? Because the cost of fraud is passed on to clients of banks (typically businesses), who pass on the cost to their customers.
I mean sure, a bank that is an "unusually frequent" victim of fraud will be unable to stay in operation, but a bank suffering the "average amount of fraud" will stay in operation just fine and pass those costs on to its clients.
In summary, liability risks _are_ passed on to customers, and you are wrong when you claim otherwise.
Why should a private insurance company be allowed to skim a profit off this? The government should be on the hook directly, with careers ended when the taxpayer has to compensate these people for their injury.
If you want fire insurance for a factory, the insurers will inspect what you're doing, the safety precautions you have in place, your testing regimes and so on - and charge you more (or refuse to insure you at all) if they don't like what they see.
And as there are multiple insurers you get a competitive market - meaning the insurers who are best at spotting real problems prosper, while the insurers who miss problems or worry about non-problems are less profitable.
And if a company can't get insurance at all, it's not because one guy was being a hardass - they've had a bunch of chances to convince different insurers, all of whom have refused, rather than the blame for their going out of business falling on some government agency.
This is appealing to people who love free markets and small government, as there are multiple competing insurers, and all the inspections, monitoring and even the payouts happen at no cost to the government.
> Banks, jobs will use all information against you
In the EU and Finland there are laws regulating what private data banks or companies can use or collect.
For example, companies are forbidden from googling a job applicant without their permission or looking at their social media. They also can't buy data from data brokers like they do in the US.
In the US, a company can buy your information from data brokers. It contains your social networks, opinions, etc. In the EU, doing that would be a huge risk and it's not generally done.
Just because there are loopholes and regulations can be violated does not make regulation pointless. It directs behaviour and shapes what is considered acceptable.
I think there are two ways to manage these types of issues.
1. Make it fully legal and have a large say in how it is done, in cooperation with government. It could be beneficial to allow government insight so that it can prepare the general public for what is going on and how society might react to it. In general I believe we should be aware of all of the things this data can do. If, after full disclosure, people want this data regulated, so be it.
2. Criminalize it (hard mode). It looks like with GDPR it will be criminalized, relying on companies acting in good faith when acquiring data like this. It will be practically impossible to defeat all criminal actions, but there will be no question of who has the authority over such measures.
With regards to both methods, I see huge problems in the public's understanding of who is using the information and how it is used. So it seems for now there are a few options left. One of them is to restrict knowledge and keep good people in power with the opportunity to use this data. Even with that we fail daily.
Everything is a struggle, but perhaps this issue will shape how humans interact with each other in the future more than any other.
That's the point with GDPR; you can't just start using personal data unless the person has given explicit permission for that data being used for that specific purpose. That applies to data from outside sources as well.
This is unfortunately not true for health data. There is an exemption under which health data can be collected for research purposes if a member state legislates for it. That is the case in Germany for people who are not privately insured: their health data is centrally collected for research purposes.
Yes, they can't be signed away. Nuances matter of course, but regarding data privacy rights the general situation is that if you sign some contract with a clause 'agree to X or we won’t give you service' then that clause is simply invalid as it conflicts with the law and is not binding. If a company would use that data, then they can be fined for using that data without consent since that is not valid (freely given) consent.
I don't think it would be feasible to start at 6 figures--I think we would have to start lower and raise over time. If you start at 6 figures, a single breach can land a company well into the billions, and insurance premiums would be way too high for corporations to stay in business. I know there are a lot of "well good, fuck the corporations" sentiments out there, but these are corporations which can be economically viable and securely protect consumer data if they are given some time to improve their security. We absolutely should walk the price up over time, but let's give people some time to develop and implement a security competency within their organization (not to mention growing a security auditing competency sufficient to handle the scale of all businesses) before imposing ruinous insurance premiums.
Sure, but you could consider the cost to the actual individual in the form of depressed wages, deteriorated personal relationships, etc. They won't get a penny of it unless outlined by law.
Of course. My point was that we must also consider feasibility--we should absolutely get to a state in which corporations should bear the full cost for their security decisions; however, we probably won't be able to get there overnight.
I suspect this is going to end up as one of the most expensive GDPR fines ever (edit: as a max-fine case, not necessarily in sheer numbers). Furthermore, many individuals have had extremely sensitive data leaked publicly, and they could sue individually for damages.
In other words: It could end up being insanely expensive.
Yes, and the maximum fine is 20M€. It'll probably be quite enough to take them down. The individual payouts of any lawsuits would probably end up as five-figure numbers, so if the hacker is telling the truth about the number of victims, we'd end up with at least 400M€, which in my opinion is insane by Finnish standards. The number of victims here is staggering, after all.
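The arithmetic behind that 400M€ floor, taking the smallest "five-figure" payout (10,000 EUR) and the ~40,000 victim count that figure implies (both are assumptions from this thread, not established facts):

```python
# Minimum-exposure sketch: GDPR max fine plus individual lawsuit payouts.
# Victim count and per-victim payout are assumptions from the thread.
gdpr_max_fine = 20_000_000   # the 20M EUR cap mentioned above
victims = 40_000             # implied by 400M EUR / 10k per victim
min_payout = 10_000          # smallest "five-figure" amount, in euros

lawsuit_total = victims * min_payout
print(f"lawsuits alone: {lawsuit_total:,} EUR")   # lawsuits alone: 400,000,000 EUR
print(f"with max fine: {lawsuit_total + gdpr_max_fine:,} EUR")
```

Note that the regulatory fine is a rounding error next to the civil-liability exposure in this scenario; the lawsuits, not the fine, are what would make the total "insanely expensive".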