Depending on the technology you are assessing, some categories are more important than others. Here you can indicate how important you think this category is for this technology.
If this category is not applicable for this technology, please select 'Yes.'
(Only the gray shaded text is printed on the Quick Scan canvas)
If this technology registers personal data, you have to be aware of privacy legislation and the concept of privacy. Think hard about this question. Remember: personal data can be interpreted broadly. Perhaps this technology does not collect personal data itself, but it can be used to assemble personal data. If the technology collects special categories of personal data (such as health or ethnicity), you should be extra alert.
(This question is part of the Quick Scan)
An example: Griefbot
The General Data Protection Regulation defines personal data as data relating to an identified or identifiable natural person. Natural persons are living persons, so the GDPR in principle does not apply to deceased persons.
However, our Griefbot is also filled with data of living persons, especially those with a close relation to the deceased, and the GDPR applies to that data.
The data of the persons who use the Griefbot is also processed in various ways, and some of it falls within the scope of the GDPR.
For both of these categories we know that some of the data that feeds the Griefbot is sensitive as well. For example, some Griefbots are fed with email conversations, which means a lot of very sensitive personal data can be included.
Many people share very sensitive data with their loved ones, such as passwords and bank account details, which poses a considerable risk.
Are you aware of the main privacy and data principles you need to adhere to? If not, check the links below. Do you think the invasion of privacy by the technology is allowed? Do you think people will feel that their privacy is invaded? Does the technology consider things like proportionality and subsidiarity?
An example: Griefbot
Yes, the deceased may not have a say in the creation of his/her Griefbot.
Even after death, you might want some aspects of your private life to be protected. In some countries it is, for example, prohibited to defame deceased people, something that can easily happen when a Griefbot is created after death without knowledge and consent.
Moreover, a Griefbot can reveal not only information about the deceased, but also data of those who come into contact with the Griefbot. Privacy includes the right to reputation.
There are two dangers in this respect.
First, the Griefbot may be too static and not let the deceased evolve as he or she would have in real life. For example, when someone dies at a time when same-sex marriage is not accepted, in real life he or she might have come to accept it, but the Griefbot does not automatically, creating an image of the deceased as a conservative and perhaps even discriminatory person.
On the other hand, as Griefbots are self-learning, they can also evolve the deceased into a person he or she was not in real life, for example becoming discriminatory on the basis of the information fed to the Griefbot after the actual person's death.
In Europe data collections have to be GDPR-compliant. Do you think that the technology is compliant? How do you know? Did the technology provide the appropriate measures to ensure compliance?
An example: Griefbot
Privacy is not the same as data protection.
Privacy as a fundamental human right is embedded in national and international constitutions and conventions. For the EU, important pieces of legislation are the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights. On the basis of art. 8 of the European Convention on Human Rights, an invasion of privacy can be justified if it is foreseen by law, serves one of the interests mentioned in article 8 (2), and is necessary in a democratic society. This is judged on the basis of two principles, proportionality and subsidiarity. Proportionality means that the measures you take are suitable to reach the purpose, and subsidiarity means that no less invasive means are possible.
The GDPR is a very extensive piece of legislation but is based on eight primary principles: purpose specification, purpose limitation, data minimisation, data quality, security, accountability, transparency and participatory rights. So, to be compliant with privacy and data protection legislation, we strive to design the Griefbot with these principles in mind.
Privacy is often implemented by design, which means, for example, that only the bare minimum of data is collected and used, and that this process is continually improved. Does this technology (really) need all the personal data it is processing to provide its functionality (data minimisation)? Did the technology consider the other privacy-by-design principles?
If needed, you can check the crash course on privacy by design.
An example: Griefbot
In view of the principles of data protection by design and data protection by default, the development of the Griefbot must consider measures to mitigate privacy risks. What kind of data do we need to give the Griefbot the functionality we envisage? What exactly is our purpose, and what is needed to achieve it? It would make a difference, for example, if a Griefbot is specific to one person: only data concerning person A and the deceased would be used for the Griefbot meant for person A, such as emails and WhatsApp conversations between person A and the deceased, instead of the deceased's entire email database and all of the deceased's WhatsApp messages with anyone. It also needs to be considered whether the data of persons other than person A and the deceased can be pseudonymised or, preferably, anonymised. Security is very important, so only person A can "feed" the Griefbot and has access to the Griefbot.
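The minimisation and pseudonymisation steps described above could be sketched in code. The sketch below is purely illustrative, not part of any actual Griefbot implementation: the `Message` type, the salted-hash pseudonymisation, and the filtering rule are all assumptions made for the example. A real system would need proper key management, since a salted hash is pseudonymisation (reversible by the salt holder), not anonymisation.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    # Hypothetical minimal record for one chat or email message.
    sender: str
    recipient: str
    text: str

def pseudonymise(name: str, salt: str = "demo-salt") -> str:
    # Replace a real name with a stable one-way token. With the salt kept
    # secret this is pseudonymisation, not anonymisation: whoever holds the
    # salt can still re-identify the person.
    return "person_" + hashlib.sha256((salt + name).encode()).hexdigest()[:8]

def minimise(messages, deceased, person_a):
    """Data minimisation: keep only messages exchanged between person A
    and the deceased; everything else is dropped before processing."""
    allowed = {deceased, person_a}
    return [m for m in messages if {m.sender, m.recipient} == allowed]
```

For instance, `minimise(all_messages, "deceased", "person_a")` would discard the deceased's conversations with anyone else, and `pseudonymise` could be applied to third-party names that still appear inside retained message texts.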
Is the technology creating data that could follow users throughout their lifetimes, affect their reputations, and impact their future opportunities? Will the data the technology is generating have long-term consequences for the freedoms and reputation of individuals?
An example: Griefbot
As for the impact of the data as such, depending on what data exactly will be used to feed the Griefbot, we do not see reasons for future impact. Since the Griefbot runs on self-learning algorithms, it might develop in an unforeseen way, creating a persona that no longer corresponds to the actual deceased and displaying behaviour one would not expect from the actual deceased.
(Only the gray shaded text is printed on the Improvement Scan canvas)
Consider whether this technology invades someone's privacy or collects personal data, and consider how this technology complies with prevailing law and mitigates data protection risks and concerns. With all that in mind, what improvements would you (want to) make? In the technology? In the context? In the use?
The answers to questions 1-5 help you gain insight into the relation between privacy and this technology. The goal of these answers is not to provide you with a 'go' or 'no-go' decision. The goal is to make you think HOW you can use these insights to improve the use of the technology. This can be by making changes to the technology, to the context in which the technology is used, or to the way the technology is used.
(This question is part of the Improvement Scan)
An example: Griefbot
We are continuously working on being compliant with the GDPR. On a more fundamental level, we are exploring two potential improvements.
First, we are thinking of a Griefbot that is completely personal, so there is only a one-on-one relation with the Griefbot. This resolves many privacy issues.
Furthermore, we think that if the Griefbot is personal and has the sole purpose of helping one specific person grieve, it would make sense that the person using the Griefbot has some control over the Griefbot's development.
Second, we are exploring the option that a deceased person has to give permission for the Griefbot while still alive, much like a donor codicil.
Are you satisfied with the quality of your answers? The depth? The level of insight? Did you have enough knowledge to give good answers? Give an honest assessment of the quality of your answers.