
Technology Impact Cycle Tool

Depending on the technology you are assessing, some categories are more important than others. Here you can indicate how important you think this category is for this technology.
If this category is not applicable to this technology, please select 'Yes.'

(Only the gray shaded text is printed on the Quick Scan canvas)

Can you imagine ways that the technology can or will be used to break the law? Think about invading someone's privacy, spying, hurting people, harassment, stealing things, fraud, identity theft, and so on. Or will people use the technology to avoid facing the consequences of breaking the law (using trackers to evade speed radars or using bitcoin to launder money, for example)?

(This question is part of the Quick Scan)

An example: Griefbot

Yes, under certain circumstances it can be used to break the law. If a young, underage person dies and a friend or family member of the deceased gets access to the Griefbot account, he or she can use it to draw young kids into dangerous situations by impersonating a kid. On the other hand, a bad actor can abuse the Griefbot for scamming purposes if he or she can "rewire" the AI behind the bot. Individuals can abuse the trust placed in the deceased to manipulate people into illegal activities, e.g. scam them out of money or hurt other people. It is even possible to imagine a Griefbot being taken hostage and only returned to the original owner after a large ransom is paid.
If you brainstorm, can you imagine ways that the technology will be used to hurt, bully or harass individuals? Insult people? Create societal unrest, and so on? Can you imagine the technology being used to cross personal or societal boundaries? Can you write down examples of ways that this can be done?

An example: Griefbot

The Griefbot of a celebrity (in the broadest sense) can be abused to incite violence or other forms of societal unrest even after the person dies. If Griefbots become more common, fake Griefbots of celebrities can become a problem. We can also imagine that people with strong belief systems have problems with the idea of a Griefbot.
If you brainstorm, can you imagine ways in which the technology can be used to hurt, insult or discriminate against certain societal groups? For example, a facial recognition technology that does not recognize people of certain races, or a technology that is gender-specific. Can you write down examples of ways that this can be done?

An example: Griefbot

Poor people can be excluded by the pricing of the product, leaving them no access to the memories of their loved ones. Individuals who have no proficiency in technology also run the risk of being excluded from use.
If you brainstorm, can you imagine ways in which bad actors can abuse the technology to increase the gap between classes, genders or races? Do you see bad actors using the technology to polarize society? Can you write down examples of ways in which this can be done?

An example: Griefbot

The only way we can see the Griefbot pitting people against each other is between the family members or friends of the deceased. If a certain group finds some information about the deceased undesirable and doesn't want to believe it, it can create unrest between family members. There can also be unrest if one side of the family wants the Griefbot and the other does not. Fake Griefbots can also create problems, but that is very speculative.
Can you think of ways in which the technology can be used as the equivalent of fake news, bots or deepfake videos (for example)? Can you write down examples of ways in which this can be done?

An example: Griefbot

After someone's death the Griefbot could be programmed to "hide" certain information, views, motivations or ideas the deceased had. This can potentially be abused to omit parts of the deceased's personality from the Griefbot.

(Only the gray shaded text is printed on the Improvement Scan canvas)

Think about this technology being used to break the law, to avoid the consequences of breaking the law, to be used against certain groups, to attack the truth, or to pit certain groups against each other. With all of that in mind, what improvements would you (want to) make? In the technology? In the context? In the use? The answers to questions 1-5 help you gain insight into the potential impact of bad actors on this technology. The goal of these answers is not to provide you with a 'go' or 'no go' decision. The goal is to make you think about HOW you can mitigate the risks of bad actors: by making changes to the technology, to the context in which the technology is used, or to the way the technology is used.

(This question is part of the Improvement Scan)

An example: Griefbot

Yes. We think the Griefbot is a very personal solution, so we created a very personal two-factor security system. You can only use the Griefbot with double authentication. We also encrypt all data in our data centers and maintain a very high level of security, making our Griefbots very hard to hack. An elaborate backup procedure makes sure that you can always return to the Griefbot of a few weeks or months ago.
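To make this example answer a little more concrete, here is a minimal sketch of what such a two-factor check could look like. This is a hypothetical illustration, not the actual Griefbot implementation; the function names and the choice of a time-based one-time password (TOTP, RFC 6238) as the second factor are assumptions of ours.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Compute the current time-based one-time password for a base32 secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        # Number of time steps since the Unix epoch, packed as a big-endian counter.
        counter = struct.pack(">Q", int(time.time()) // interval)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        # Dynamic truncation: take 4 bytes at the offset given by the last nibble.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    def verify_second_factor(secret_b32: str, submitted_code: str) -> bool:
        """Accept the login only if the code from the user's device matches the current TOTP."""
        return hmac.compare_digest(totp(secret_b32), submitted_code)

A real deployment would also tolerate a small clock-drift window and rate-limit failed attempts, but those details are omitted here.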
Are you satisfied with the quality of your answers? The depth? The level of insight? Did you have enough knowledge to give good answers? Give an honest assessment of the quality of your answers.

Articles

Article with many examples and links on the misuse of technology
(https://www.sitra.fi/en/articles/can-technology-misused/)
Some examples of data misuse
(https://www.observeit.com/blog/importance-data-misuse-prevention-and-detection/)

Books

Use and Misuse of New Technologies
(https://www.springer.com/gp/book/9783030056476)