Depending on the technology you're assessing, some categories are more important than others. Here you can indicate how important you think this category is for this technology.
If this category is not applicable for this technology, please select 'Yes'.
To answer this question, think about sub-questions like: can the technology be perceived as stigmatising? Does the technology imply or impose a certain belief or world view? Does the technology affect the user(s)' dignity? Is the technology in line with how the user wants to be perceived?
(This question is part of the Quick Scan)
An example: Griefbot. There are two kinds of users: the deceased person and the 'real' user. In this case we decided to focus on the 'real' user. However, we do explore the possibility of some kind of 'donor codicil' in which a person gives permission to live on as a Griefbot. We believe that too much suffering is wrong. If we can ease suffering by offering a Griefbot, then that is in line with what a person wants to be: someone who mourns but is helped in the process.
We think that people who believe in heaven can still use the Griefbot, because there is a clear distinction between the soul and digital technology. We are just providing a tool that helps you mourn.
The technology matches values that are important to people, we think. People live on when they are remembered. Parents or grandparents get a voice. We think that is very valuable.
To answer this question, think about sub-questions like: does the technology allow the user(s) to make their own decisions, or are decisions made for the user? Does the technology make user(s) dependent, either on the technology or on others? Do users need someone else to use the technology?
An example: Griefbot. We think this is an issue to be addressed. People live on, and circumstances change. It is a bad idea to trust a Griefbot to help you make decisions.
We are also very aware that the technology can be addictive. People can get addicted to chatting with the person they miss so much, and start disconnecting from the real world. This could be a serious problem.
We like to think that our Griefbot empowers users to make better choices, because they can mourn with the help of the Griefbot.
To answer this question, think about sub-questions like: can the technology be confusing, stressful, manipulative, distracting or frightening? Can the technology cause pain or injuries? What are the effects of extreme use?
An example: Griefbot. The Griefbot helps with the mourning process. However, there is a chance that the user loses connection with real life. We believe that the Griefbot helps with mourning, and we monitor that effect closely.
Think about the impact of this technology on human values and needs: how it affects the identity of the user, the autonomy of the user (can users make their own decisions?), and the health and wellbeing of the user. With all that in mind, what improvement would you (want to) make to this technology?
The idea of the technology impact tool is that it stimulates you to think hard about the impact of this technological solution when it comes to human values and needs (questions 1-3). The answers can lead to changes you (want to) make in the design or implementation of this technology, which is a good thing. Please list them here.
(This question is part of the Improvement Scan)
An example: Griefbot. For the real user we decided to build controls into the bot:
1. We inform the user of extreme use;
2. We give the option to restrict access;
3. We monitor Griefbot usage centrally to see trends;
4. We have a distress button that the user can press when the Griefbot misbehaves.
For the deceased, we are researching the option to formally have someone give permission to be used as a Griefbot after death.
Are you satisfied with the quality of your answers? Their depth? Their level of insight? Did you have enough knowledge to give good answers? Give an honest assessment of the quality of your answers.