Humans are full of conscious and unconscious biases. For example, a 2012 study in Quebec showed that, among equally qualified and skilled candidates, those with last names like Ben Saïd were 35 per cent less likely to be called back for an interview than those with last names like Bélanger.
Our machines are learning from this data. Through AI systems trained on it, they are effectively being taught that “Bélangers” are more qualified than “Ben Saïds.” So, as we use AI to predict recidivism in the criminal justice system, to determine loan eligibility or to screen job applications, we are further embedding systemic discrimination in our institutions. This is unfair and unethical. It is also a great economic loss. One proposed solution is to teach machines in a way closer to how the human brain learns.
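The mechanism is easy to demonstrate. Here is a minimal sketch, with entirely hypothetical numbers chosen to mirror the 35-per-cent gap in the study: a naive screening “model” that learns callback rates from historical hiring decisions simply reproduces the bias encoded in those decisions.

```python
# Illustrative sketch with hypothetical data: a naive screening model trained
# on past callback decisions reproduces whatever bias those decisions contain.
from collections import defaultdict

# Hypothetical historical records: (surname_group, was_called_back).
# Group "B" candidates were called back roughly 35% less often.
history = [("A", True)] * 100 + [("A", False)] * 100 \
        + [("B", True)] * 65 + [("B", False)] * 135

def train(records):
    """Learn the historical callback rate for each surname group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [callbacks, total]
    for group, called in records:
        counts[group][0] += int(called)
        counts[group][1] += 1
    return {g: c / n for g, (c, n) in counts.items()}

def screen(model, group, threshold=0.4):
    """'AI' screening: advance candidates whose group rate beats a cutoff."""
    return model[group] >= threshold

model = train(history)
print(model)               # {'A': 0.5, 'B': 0.325}
print(screen(model, "A"))  # True  -- advanced
print(screen(model, "B"))  # False -- screened out; the bias is now automated
```

No one programmed the discrimination explicitly; the model merely optimized against biased labels, which is exactly how real screening systems inherit it.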
Source: Montreal Gazette
In the first episode of the new Netflix series, My Next Guest Needs No Introduction with David Letterman, former President Barack Obama reflects on political tensions and life after the White House, and Dave visits Selma with Representative John Lewis.
Obama: One of the biggest challenges we have to our democracy is the degree to which we don’t share a common baseline of facts. What the Russians exploited (but was already here) is that we operate in completely different information universes.
Letterman: I was under the impression that Twitter would be the mechanism by which truth was told around the world.
Obama: If you are getting all your information off algorithms being sent through a phone, and it’s just reinforcing whatever biases you have, which is the pattern that develops… And that gets more reinforced over time.
That’s what’s happening with these Facebook pages where more and more people are getting their news from. At a certain point you just live in a bubble. And that’s part of why our politics is so polarized right now. I think it is a solvable problem, but I think it’s one that we have to spend a lot of time thinking about.
Letterman: It seems like a valuable tool that has turned against us.
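The reinforcement loop Obama describes can be sketched in a few lines. This is a hypothetical toy model, not any real feed algorithm: a feed that greedily shows whatever a user engaged with most will never surface the other side, even when the initial lean is tiny.

```python
# Toy model (hypothetical) of a feed that reinforces existing engagement:
# whatever you clicked most is shown again, which makes you click it more.
def feed_step(engagement):
    """Show the topic with the highest past engagement, which then grows."""
    top = max(engagement, key=engagement.get)
    engagement[top] += 1  # seeing the item reinforces that topic
    return top

# A user who starts with only a slight lean...
engagement = {"view-A": 11, "view-B": 10}

shown = [feed_step(engagement) for _ in range(100)]

print(set(shown))  # {'view-A'} -- the feed never shows the other side
print(engagement)  # {'view-A': 111, 'view-B': 10}
```

A one-item head start hardens into an information bubble after a hundred steps, which is the pattern Obama calls “just reinforcing whatever biases you have.”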
Recode’s Kara Swisher said last year that journalists needed to be tougher on serial liars in tech and politics. Last month, Swisher returned to Recode Media with Peter Kafka to assess whether the media lived up to that goal in 2017, and to weigh the impact of the Silicon Valley companies whose platforms distribute most of their content.
Swisher is frustrated by the unwillingness of tech leaders to accept their share of responsibility in the media space, and not because they’re blind to the problem.
“I think they know that these platforms are being badly misused, and they don’t know what to do about it,” Swisher said. “I think it was a slow burn, a slow dawning on them. The penny dropped really lugubriously.”
On the new podcast, Swisher also shares why she’s more impressed by Snapchat CEO Evan Spiegel than by his peers, why Silicon Valley isn’t thinking about AI’s potential for reinforcing bias, and why she’s tired of tech’s perpetual-victim mentality.
Source: Recode (podcast)
Hours after posting his memorial, he got an email letting him know how his post was doing, and telling him that three people had recommended it. Inserted in that email was the headline he had written for his post, “In Remembrance of Elizabeth,” followed by a string of copy: “Fun fact: Shakespeare only got 2 recommends on his first Medium story.” It’s meant to be humorous — a light, cheery joke, a bit of throwaway text to brighten your day. If you’re not grieving a friend, that is.
The bill, the first measure of its kind in the country, addresses machine bias based on age, race, gender, sexual orientation or other status. NYC is not the first entity to publicly note potential machine bias. If passed into law, the bill could set a precedent for US cities to prioritize appropriate development and monitoring of AI systems.
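The kind of monitoring such a bill might mandate can be sketched simply. This is an illustrative example with made-up audit data, using one common fairness check: comparing a system’s positive-decision rates across groups and applying the “four-fifths” disparate-impact threshold.

```python
# Illustrative bias audit (hypothetical data): compare approval rates across
# groups and flag the system if the ratio falls below the four-fifths rule.
def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest rate; below 0.8 fails the four-fifths rule."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample from an automated screening system.
audit = [("group1", True)] * 60 + [("group1", False)] * 40 \
      + [("group2", True)] * 30 + [("group2", False)] * 70

rates = selection_rates(audit)
print(rates)                    # {'group1': 0.6, 'group2': 0.3}
print(disparate_impact(rates))  # 0.5 -- well below 0.8, flagged for review
```

Checks like this are cheap to run; the hard part the bill targets is getting agencies to run them and act on the results.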
Source: Smart Cities Dive
What we call AI is just the next stage of us weaving our intelligence together into a greater whole. If you think about the internet as weaving us together, transmitting ideas, in some sense an AI might be the equivalent of a multi-cellular being and we’re its microbiome, as opposed to the idea that an AI will be like the Golem or Frankenstein’s monster. Our fears ultimately should be of ourselves and other people.
The people who run Amazon, Facebook, and Google are generally good people, with honorable intentions. The problem is that once they became public companies, responsible to shareholders, their freedom of action was radically curtailed.
However much “creating community” they want to do for the world has to happen under scrutiny from shareholders, who want the share price high and rising. To do the right thing morally and ethically can easily require cutting into profits.
Policing hate speech and reducing anti-community behavior on Facebook inevitably will involve shutting down accounts, preventing posts, and in general pushing people and content off the site. That will reduce page views, the engine for ad sales. That, in turn, could cut into Facebook’s astonishingly high earnings.
Source: David Kirkpatrick via Techonomy
Work. It’s exhausting and counterproductive to the creative problem solving that once drove American innovation. What happened?
Algorithms, that’s what. Google made it fashionable to boost the bottom line with them. Now, they’re little more than a way to save on labor. Tim O’Reilly describes this turn of tech to the dark side as “a very dangerous time.”
How can algorithms give us more creative control over our work schedules? Or make it easier to collaborate remotely? How can they build trust and transparency, or fit 80 hours of work into 40? The world is smaller, our brains are bigger. It’s time we made an algorithm for working smarter.
The founder of O’Reilly Media has a huge influence on the role of tech in our lives, including the future of work. Now, he’s set his sights on job creators and “innovators” he thinks are more interested in making a buck than building the products and services the world can use.
Is O’Reilly right? Have we accepted the future as an extension of the past? How can a more sustainable workforce ensure an abundant future? Fair questions for a society on the brink of an automation apocalypse.
Don’t fret. O’Reilly says, “It’s still possible to reinvent the world. If we could make a more inclusive world with this technology, that would be a great gift.”
Dirty algorithms. Gladden Pappin says, “The filters which prevent web searches from going astray give you no hint about what course of action is virtue and what is vice.”
Pappin’s subtle indictment of algorithmic bias invokes Upton Sinclair’s critique of the contradictions of journalism, with its newsroom of “subordinates drifting inevitably toward the point of view held by their masters.”
For Pappin, varied views are fine if taken in moderation, because “when freedom equals unlimited choice, and when technology abolishes limits and with them purpose, everyone winds up having to make or discover the rules himself.”