If this category is not applicable for this technology, please select 'Yes'.
To answer this question, think about sub-questions like: is the technology in line with the person you want to be or the values you want to propagate? Can the technology be used for stigmatization? Does the technology match the values that you consider important? Does the technology imply a certain belief, or does it impose one?
An example: Griefbot
There are two kinds of users: the deceased person and the 'real' user. In this case we decided to focus on the 'real' user. However, we do explore the possibility of some kind of 'donor codicil' in which a person gives permission to live on as a Griefbot.
We believe that too much suffering is wrong. If we can ease suffering by offering a Griefbot, then that is in line with who people want to be: someone who mourns but is helped in the process.
We think that people who believe in heaven can still use the Griefbot, because there is a clear distinction between the soul and digital technology. We are just providing a tool that helps you mourn.
We think the technology matches values that are important to people. People live on when they are remembered. Parents or grandparents get a voice. We think that is very valuable.
To answer this question, think about sub-questions like: is the user supported in making choices, or are choices made for the user(s)? Does the technology make the user dependent on the technology, or on others? Do they need someone to use it? Can they live without it? Does the technology have an addictive effect?
An example: Griefbot
We think this is an issue to be addressed. People live on, circumstances change. It is a bad idea to trust a Griefbot to help you make decisions.
We are also very aware that the technology can be addictive. People can get addicted to chatting with the person they miss so much, and start disconnecting from the real world. This might be a serious problem.
We like to think that our Griefbot empowers users to make better choices, because they can mourn with the help of the Griefbot.
To answer this question, think about sub-questions like: what is the effect of the technology on the mental, physical or social health of the user? Does the technology help the user (to do their job better or to feel better), or is it actually blocking the user?
An example: Griefbot
The Griefbot helps with the mourning process. However, there is a chance that the user loses connection with real life. We believe that the Griefbot helps with mourning, and we monitor that effect closely.
To answer this question, it helps to brainstorm and find out whether you can envision situations in which your technology has these negative effects. You could also interview users or look at user data.
An example: Griefbot
One of the issues we see is that the data on which the Griefbot is based may provide surprising new insights for the users. Maybe there are hidden secrets in the data, maybe the Griefbot reaches wrong conclusions. This can make the technology confusing or even stressful.
Extreme use can lead to disconnection from real life, and maybe to a worse mourning process instead of a better one.
The idea of the technology impact tool is that it stimulates you to think hard about the impact of your technological solution (questions 1-5). The answers can lead to changes in design or implementation, which is a good thing. Please list them here.
An example: Griefbot
For the real user we decided to build controls into the bot.
1. We inform the user of extreme use;
2. We give the option to restrict access;
3. We monitor Griefbot usage centrally to see trends;
4. We have a distress-button which the user can use when the Griefbot misbehaves.
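As a rough illustration, controls 1 and 3 above could be backed by a simple usage monitor that tracks session time and flags extreme use. This is only a minimal sketch: the class name `UsageMonitor`, the two-hour `DAILY_LIMIT`, and the session-logging interface are all assumptions for illustration, not the Griefbot's actual design.

```python
# Hypothetical sketch of a usage monitor for the controls described above.
# All names and thresholds are assumed, not taken from the real Griefbot.
from datetime import datetime, timedelta

DAILY_LIMIT = timedelta(hours=2)  # assumed threshold for "extreme use"

class UsageMonitor:
    def __init__(self):
        # Each session is stored as a (start, end) pair of datetimes.
        self.sessions = []

    def log_session(self, start, end):
        self.sessions.append((start, end))

    def usage_today(self, now):
        """Total chat time since midnight of the current day."""
        day_start = now.replace(hour=0, minute=0, second=0, microsecond=0)
        total = timedelta()
        for start, end in self.sessions:
            if end > day_start:
                # Count only the part of the session inside today.
                total += end - max(start, day_start)
        return total

    def is_extreme(self, now):
        """True when today's usage exceeds the daily limit,
        so the app can inform the user or restrict access."""
        return self.usage_today(now) > DAILY_LIMIT
```

Such a monitor could feed both the per-user warning (control 1) and the central trend monitoring (control 3), while the restriction and distress-button controls would sit on top of it.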
For the deceased, we are researching the option to formally have someone give permission to use a Griefbot after death.