Your organisation protects individual rights related to automated decision-making and profiling, particularly where the processing is solely automated and has legal or similarly significant effects.
Your organisation takes extra care, with additional checks, for sensitive personal information or vulnerable groups, such as children, across all automated decision-making and profiling activities.
Your organisation only collects the minimum information required and has a clear retention policy for the profiles created.
Where your organisation uses solely automated decisions that have legal or similarly significant effects on individuals, there is a recorded process for those decisions. If this applies, your organisation should consider carrying out a privacy risk assessment, such as a Data Protection Impact Assessment (DPIA).
Where the decision is solely automated and has legal or similarly significant effects on individuals, a recorded process provides simple ways for individuals to request a review (such as human intervention), express their opinion, and challenge a decision.
There's a process in place to conduct regular checks for accuracy and bias, to ensure that systems are working as intended, and to feed the findings back into the design process.
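The regular accuracy and bias checks described above could be partly automated along these lines. This is a minimal sketch only: the record format, group labels, and thresholds are illustrative assumptions, not values from the source, and real thresholds should be set by your organisation's own review process.

```python
from collections import defaultdict

# Hypothetical audit records: (group, automated_decision, correct_outcome).
# Groups and values here are illustrative test data.
RECORDS = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

def check_accuracy_and_bias(records, min_accuracy=0.75, min_rate_ratio=0.8):
    """Return (overall_accuracy, selection_rate_ratio, issues) for human review.

    Thresholds are illustrative assumptions, not regulatory values.
    """
    # Overall accuracy: how often the automated decision matched the
    # correct outcome established on later review.
    accuracy = sum(d == o for _, d, o in records) / len(records)

    # Positive-decision rate per group: one simple disparity measure.
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision, _ in records:
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())

    issues = []
    if accuracy < min_accuracy:
        issues.append(f"accuracy {accuracy:.2f} below {min_accuracy}")
    if ratio < min_rate_ratio:
        issues.append(f"selection-rate ratio {ratio:.2f} below {min_rate_ratio}")
    return accuracy, ratio, issues
```

Running such a check on a schedule, and logging any `issues` for the design team, is one way to make "feed the findings back into the design process" concrete and auditable.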
Do team members and customers find your organisation's retention policy clear?
Would team members state that there are effective processes to protect rights relating to automated decision-making and profiling?
Would individuals say that the organisation made it easy to request human intervention, express their opinion, and challenge a decision?