Depending on the technology you're assessing, some categories are more important than others. Here you can indicate how important you think this category is for this technology.
If this category is not applicable to this technology, please select 'Yes'.
Can you imagine ways that this technology can or will be used to break the law? Think about invading someone's privacy, spying, hurting people, harassment, fraud, identity theft and so on. Or will people use this technology to avoid facing the consequences of breaking the law (for example, using trackers to evade speed radars or using bitcoin to launder money)?
(This question is part of the Quick Scan)
An example: Griefbot
Yes, under certain circumstances it can be used to break the law. If an underage person dies and a friend or family member of the deceased gets access to the Griefbot account, he or she can use it to draw young kids into dangerous situations, as he or she can impersonate a kid.
On the other hand, a bad actor can abuse the Griefbot for scamming purposes if he or she can "rewire" the AI behind the bot. Individuals can abuse the trust placed in the deceased to manipulate people into undertaking illegal activities, e.g. scam them out of money or hurt other people.
It is even possible to imagine that a Griefbot will be taken hostage and only returned to the original owner after a ransom is paid.
If you brainstorm, can you imagine ways that this technology will be used to hurt, bully or harass individuals? Insult people? Create societal unrest, and so on? Can you write down examples or ways that this can be done?
An example: Griefbot
The Griefbot of a celebrity (in the broadest sense) can be abused to incite violence or other forms of societal unrest, even after the person dies. If Griefbots become more common, fake Griefbots of celebrities can become a problem.
We can imagine that people with strong belief systems have problems with the idea of a Griefbot.
If you brainstorm, can you imagine ways in which this technology can be used to hurt, insult or discriminate against certain societal groups? For example, a face recognition technology that does not recognize people of certain races, or a technology that is gender-specific. Can you write down examples or ways that this can be done?
An example: Griefbot
Poor people can be excluded by the pricing of the product, giving them no access to the memories of their loved ones.
Individuals who don't have any proficiency in technology also run the risk of being excluded from use.
If you brainstorm, can you imagine ways in which this technology increases the gap between classes, genders or races? Do you see bad actors using this technology to polarize society? Can you write down examples or ways in which this can be done?
An example: Griefbot
The only way we can see the Griefbot pitting people against each other is between the family members or friends of the deceased. If a certain group finds some information about the deceased undesirable and doesn't want to believe it, that can create unrest between family members.
There can also be unrest if one side of the family wants the Griefbot and the other does not.
Fake Griefbots can also create problems, but that is very speculative.
Can you think of ways in which this technology can be used as the equivalent of fake news, bots or deepfake videos (for example)? Can you write down examples or ways in which this can be done?
An example: Griefbot
After someone's death the Griefbot could be programmed to "hide" certain information, views, motivations or ideas the deceased had. This can potentially be abused to omit parts of the personality from the Griefbot.
Think about this technology being used to break the law, to avoid the consequences of breaking the law, to be used against certain groups, to attack the truth or to pit certain groups against each other. With all that in mind, what improvements would you (want to) make to this technology?
The idea of the technology impact tool is that it stimulates you to think hard about the impact of this technological solution when it comes to the role of bad actors (questions 1-5). The answers can lead to changes you (want to) make in the design or implementation of this technology, which is a good thing. Please list them here.
(This question is part of the Improvement Scan)
An example: Griefbot
Yes. We think the Griefbot is a very personal solution, so we created a very personal two-factor security system: you can only use the Griefbot with two-factor authentication.
We also encrypted all data in our datacenters and made sure security is at a very high level, so that hacking our Griefbots is extremely difficult. An elaborate backup procedure makes sure that you can always return to the Griefbot of a few weeks or months ago.
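The two-factor authentication mentioned in this example answer could take many forms; one common one is a time-based one-time password (TOTP) as standardized in RFC 6238. The sketch below is purely illustrative, it is not the actual implementation of any Griefbot product, and the secret shown in the usage note is the standard RFC test secret:

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # Number of time steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset
    # given by the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify(secret_b32, code, t=None):
    """Check a submitted code in constant time."""
    return hmac.compare_digest(totp(secret_b32, t), code)
```

For example, with the RFC 6238 test secret `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ` (the base32 encoding of the ASCII key `12345678901234567890`) and timestamp 59, `totp(...)` yields `287082`, matching the published test vector. A real deployment would additionally accept codes from adjacent time steps to tolerate clock drift.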
Are you satisfied with the quality of your answers? The depth? The level of insight? Did you have enough knowledge to give good answers? Give an honest assessment of the quality of your answers.