If this category is not applicable for this technology, please select 'Yes'.
If your technology registers personal data, you have to be aware of privacy legislation and the concept of privacy. Personal data can be interpreted broadly: even if your technology does not collect personal data itself, it may be used to assemble personal data. If you are collecting special categories of personal data (such as health or ethnicity), you should be extra careful.
An example: Griefbot
The General Data Protection Regulation (GDPR) defines personal data as data relating to an identified or identifiable natural person. Natural persons are living persons, so the GDPR in principle does not apply to deceased persons.
However, our Griefbot is also filled with data of living persons, especially those with a close relation to the deceased, and to that data the GDPR will apply.
The data of the persons who use the Griefbot is also processed in various ways, and some of that data falls within the scope of the GDPR.
For both of these categories we know that some of the data that feeds the Griefbot includes sensitive categories of data as well. For example, some Griefbots are fed with email conversations, which means that a lot of very sensitive personal data can be included.
Many people share very sensitive data with their loved ones, such as passwords and bank account details, which poses a serious security risk.
Are you aware of the main privacy and data protection principles you need to adhere to? If not, check the links below. Do you think the invasion of privacy by your technology is permissible? Do you think people will feel that their privacy is invaded? Did you think about concepts like proportionality and subsidiarity?
An example: Griefbot
Yes: the deceased may not have had a say in the creation of his or her Griefbot.
Even after death, people may want some aspects of their private life protected. In some countries it is, for example, prohibited to defame deceased persons, something that can easily happen when a Griefbot is created after death without knowledge and consent.
Moreover, a Griefbot can reveal not only information about the deceased, but also the data of those who come into contact with the Griefbot. Privacy includes the right to reputation.
There are two dangers in this respect.
First, the Griefbot may be too static and not let the deceased evolve as he or she would have in real life. For example, if someone died at a time when same-sex marriage was not accepted, the real person might have come to accept it, but the Griefbot will not automatically do so, creating an image of the deceased as a conservative and perhaps even discriminatory person.
Second, as Griefbots are self-learning, they can also evolve the deceased into a person he or she never was in real life, for example becoming discriminatory on the basis of the information fed to the Griefbot after the actual person's death.
In Europe (for example) data collections have to be GDPR-compliant. Do you think that your technology is compliant? How do you know? Did you put appropriate measures in place to ensure compliance?
An example: Griefbot
Privacy is not the same as data protection.
Privacy as a fundamental human right is embedded in national and international constitutions and conventions. For the EU, important pieces of legislation are the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights. On the basis of Article 8 of the European Convention on Human Rights, an invasion of privacy can be justified if it is provided for by law, serves one of the interests mentioned in Article 8(2), and is necessary in a democratic society. This is judged on the basis of two principles: proportionality and subsidiarity. Proportionality means that the measures you take are suitable to achieve the purpose; subsidiarity means that no less invasive means are available.
The GDPR is a very extensive piece of legislation, but it is based on eight primary principles: purpose specification, purpose limitation, data minimisation, data quality, security, accountability, transparency and participatory rights. So, to be compliant with privacy and data protection legislation, we strive to design the Griefbot with these principles in mind.
Privacy is often implemented by design, which means, for example, that only the bare minimum of data is collected and used, and that this process is continuously improved upon. Do you (really) need all the personal data you are processing to provide the functionality of your technology (data minimisation)? Did you consider the other principles of privacy by design?
An example: Griefbot
In view of the principles of data protection by design and data protection by default, the development of the Griefbot must consider measures to mitigate privacy risks. What kind of data do we need to give the Griefbot the functionality we envisage? What exactly is our purpose, and what is needed to achieve it? It makes a difference, for example, if a Griefbot is specific to one person: then only data concerning person A and the deceased are used for the Griefbot that is meant for person A, such as emails and WhatsApp conversations between person A and the deceased, instead of the deceased's entire email database and all of the deceased's WhatsApp messages with anyone. It also needs to be considered whether data of persons other than person A and the deceased can be pseudonymised or, preferably, anonymised. Security is very important, so that only person A can "feed" the Griefbot and has access to it.
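The minimisation and pseudonymisation steps described above can be sketched in code. This is a minimal illustration only, assuming messages are simple records with sender and recipient fields; all names, field names and the salt are hypothetical, and a real system would need proper key management and access control.

```python
import hashlib

# Hypothetical message records; all field names and values are illustrative.
messages = [
    {"sender": "deceased", "recipient": "person_a", "text": "See you Sunday"},
    {"sender": "deceased", "recipient": "person_b", "text": "A private note"},
    {"sender": "person_a", "recipient": "deceased", "text": "Thinking of you"},
]

# The one-to-one relation: only person A and the deceased.
ALLOWED = {"deceased", "person_a"}

def minimise(records, allowed):
    """Data minimisation: keep only messages exchanged between the allowed parties."""
    return [m for m in records if {m["sender"], m["recipient"]} <= allowed]

def pseudonymise(identifier, salt="example-salt"):
    """Pseudonymisation: replace an identifier with a salted hash.
    Note: under the GDPR, pseudonymised data is still personal data."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:12]

corpus = minimise(messages, ALLOWED)  # drops the message involving person_b
alias = pseudonymise("person_b")      # stable pseudonym for a third party
```

In this sketch, the message involving a third party is excluded from the corpus entirely; where third-party data must be retained, a salted hash yields a consistent pseudonym without exposing the name directly.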
Are you creating data that could follow users throughout their lifetimes, affect their reputations, and impact their future opportunities? Will the data your tech is generating have long-term consequences for the freedoms and reputation of individuals?
An example: Griefbot
As for the impact of the data as such, depending on what data exactly is used to feed the Griefbot, we do not see reasons for future impact. However, because the Griefbot functions on the basis of self-learning algorithms, it might develop in an unforeseen way, creating a persona that no longer corresponds to the actual deceased and displaying behaviour one would not expect from the actual deceased.
The idea of the technology impact tool is that it stimulates you to think hard about the impact of your technological solution (questions 1-5). Your answers can lead to changes in design or implementation, which is a good thing. Please list them here.
An example: Griefbot
We are continuously working on being compliant with the GDPR. On a more fundamental level, we are exploring two potential improvements.
First, we are considering a Griefbot that is completely personal, so that there is only a one-to-one relation with the Griefbot. This solves many privacy issues.
Furthermore, we think that if the Griefbot is personal and has the sole purpose of helping one specific person grieve, it would make sense for the person using the Griefbot to have some control over the Griefbot's development.
Second, we are exploring the option that a person must give permission for a Griefbot while still alive, much like a donor codicil.