15 things to know about Apple Card

After Steve Wozniak’s reply to David Heinemeier Hansson’s tweet was picked up by @CNN, we put some thoughts together in a long series of tweets. We republish them here.

1/ What we know and don’t know about @AppleCard, @Apple and @GoldmanSachs. A joint thread with @hekiwi1, my partner at @sonderscheme

2/ VIPs like @dhh and @stevewoz can get platforms like @Apple and regulators like @nydfs to pay attention to algorithmic bias. Their actions are important and commendable. But the rest of the world needs to be able to escalate too.

3/ Anxiety. Algorithmic management creates algorithmic anxiety. Why is the AI saying that? How can I change the algorithm’s decisions? Who can I talk to? Can I make my own choices? Have I lost my freedom?

4/ Anger. Algorithmic anxiety leads to algorithmic anger. What do you mean “it’s the algorithm?” I want a person to fix this. What do you mean you can’t? https://artificiality.substack.com/p/apple-card-and-algorithm-anger 

5/ The problem might be the algorithm. Algorithms are trained on data. All raw data includes biases because our human world is biased. The question is whether the GS data scientists removed biases during training.

6/ AI starts with data inputs. @dhh says that he and his wife used the same income as inputs. That implies that nothing they submitted would explain a 20x difference. AI systems also use third-party data…but we don’t know what data were used in this case.

7/ AI also makes inferences. The algorithm may not know an applicant’s gender but it can infer gender from other inputs. It can also infer other protected class information like race. It’s unclear yet whether this is happening at @GoldmanSachs.
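To make proxy inference concrete, here is a minimal sketch with entirely made-up data. The `proxy_spend_share` feature and the scoring formula are hypothetical, not anything we know about Goldman Sachs’s model; the point is only to show how a protected attribute that is never stored can still drive the outcome through a correlated input:

```python
import random

random.seed(0)

def make_applicant(gender):
    # Hypothetical proxy: in this toy data, a spending-category share
    # correlates with gender, even though gender itself is never stored.
    proxy = random.gauss(0.7 if gender == "F" else 0.3, 0.05)
    return {"income": 100_000, "proxy_spend_share": proxy}

def credit_limit(applicant):
    # A model trained on biased history can latch onto the proxy:
    # this toy limit depends on proxy_spend_share, not on income alone.
    return applicant["income"] * (0.5 - 0.4 * applicant["proxy_spend_share"])

she = make_applicant("F")
he = make_applicant("M")

# Identical incomes, different limits: the proxy leaks the protected
# attribute into the decision without gender ever appearing as an input.
print(she["income"] == he["income"], credit_limit(he) > credit_limit(she))
```

Removing the gender column is therefore not enough; you have to test whether the remaining features reconstruct it.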

8/ None of the above issues are the algorithm’s fault. It is doing what it is told to do. The responsibility for AI errors is always with the humans in charge. The human errors were either intentional or unintentional.

9/ Intentional errors. We think it’s likely that GS didn’t intentionally make the AI sexist. But you never know. A @Facebook engineer intentionally biased the newsfeed toward politics, tragedy and crime. And no one knew for a while @nxthompson https://www.wired.com/story/facebook-mark-zuckerberg-15-months-of-fresh-hell/

10/ Unintentional errors. These errors are hopefully caught in testing. But that doesn’t seem to be the case here. Which is why we at @sonderscheme help companies create governance systems to monitor, measure and manage AI once it is in the wild.

11/ Probabilities. AI is based on probabilities. Those probabilities produce errors: false positives and false negatives. You have to assume this will happen and prepare for it (humans make lots of errors too, and we prepare for that).
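The false-positive/false-negative trade-off can be sketched in a few lines. The scores, labels, and thresholds below are invented for illustration; the point is that moving a decision threshold never eliminates errors, it only trades one kind for the other:

```python
def confusion(scores, labels, threshold):
    """Count errors at a decision threshold.

    scores: model probability of default; labels: 1 = actually defaulted.
    An applicant is approved when score < threshold.
    """
    # False positive: approved, but the applicant defaulted.
    fp = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    # False negative: declined, but the applicant would have repaid.
    fn = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return fp, fn

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.8, 0.9]
labels = [0,   0,   1,    0,   0,   1,   1]

# A strict threshold declines more good customers (false negatives);
# a lenient one approves more future defaulters (false positives).
print(confusion(scores, labels, 0.3))  # strict
print(confusion(scores, labels, 0.7))  # lenient
```

Governance means deciding, in advance, which error is worse for whom, and building a process for the people caught on the wrong side of the threshold.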

12/ AI Governance. Companies need governance programs for AI design, development and management. From the starting point of data collection through to managing errors—in this case crisis management. 

13/ @Apple’s new world. Apple’s brand is about designing for the customer. And, recently, about being the responsible big tech company and an advocate for equality and diversity. This issue is a big marketing problem because it is in direct conflict with what the company stands for.

14/ A “no” from Apple. Apple’s products have always been available to anyone who can and will pay for them. @AppleCard is different. It is the first product people want from Apple that the company will refuse to provide to some.

15/ @AppleCard is marketed as a product from Apple, not a bank. Being an Apple product means being better than the rest. In this case, better means removing historical discrimination in credit. 

Photo by Melvin Thambi on Unsplash
