On the night after the first Democratic debates,
Tulsi Gabbard was the single most Google-searched
candidate by far.
A perfect opportunity for the new candidate
to raise crucial donations and climb in the
polls in order to qualify for the next debates.
But that would be too easy.
In the heat of Tulsi’s massive surge in
popularity, Google suspended her advertising
account for six hours, drastically limiting
her ability to direct newcomers to her campaign website.
Tulsi is asking for $50 million in compensation.
Google’s response? “We have automated systems
that flag unusual activity on all advertiser
accounts… …and we do so without bias toward
any party or political ideology.”
They have an algorithm and it is unbiased.
The classic argument goes that machine-learning
algorithms are mathematical, and by their
very nature, neutral and unbiased.
But this unchecked theoretical view of engineers
in Silicon Valley crumbles in reality.
In our reality, algorithms reinforce biases
they learn about from their training data.
The new invisible hand of modern discourse
is the set of machine-learning algorithms that
tech companies use to recommend shopping
items, organize social media feeds, and personalize search results.
These algorithms start off with a small set
of very simple instructions and programmers
then feed them pools of data to learn from
on their own.
Machine-learning algorithms are good at navigating
complexity – much more efficiently than humans.
They can quickly skim through large databases
and prioritize certain values over others.
Today, algorithms are increasingly entrusted
with critical decision-making,
including court sentencing, granting loans,
and even hiring for jobs and academic placements.
But there is a catch.
Much of the development and implementation
of algorithms happens in secret.
Their formulas are proprietary, and users rarely
even get to know the variables that make up their equations.
Machine-learning algorithms often make
decisions whose reasoning not even their
developers can reconstruct, and yet they
just seem to work.
But mathematics cannot solve everything.
The result of machine-learning algorithms
is solipsistic homogeneity – a process of
finding associations, grouping data into categories
and creating a structure of sameness.
The training data is paramount to any
machine-learning system.
If social or political biases exist within
that data, the algorithm is most likely going
to incorporate them.
Often it is historical data that carries
a negative social footprint into the automation.
In 2018, Amazon was looking for a way to automate
its hiring system.
To recruit new engineers more quickly, it
developed an artificial intelligence that
would scan through past resumes
and search for the best candidates on the web.
But because the historical data consisted
predominantly of male resumes, the AI “learned”
that men were preferred to women.
The algorithm automatically downgraded all
CVs containing the term “women’s” or mentioning women-only schools.
When Amazon learned about this, they tried
to repair the algorithm, but soon found
that no matter what they did, it would always
find new forms of bias.
So they decided to kill the algorithm and
return to traditional hiring methods.
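The failure mode is easy to reproduce in miniature. The sketch below is a toy scorer on invented data, not Amazon’s actual system: it learns word weights from historically biased hiring decisions, and the token “women’s” alone is enough to drag an otherwise identical resume down.

```python
from collections import Counter
import math

# Hypothetical training set: past resumes and whether the candidate was
# hired. Past hires skewed male, so "women's" appears mostly in rejections.
training = [
    ("java python chess club", 1),
    ("c++ java robotics team", 1),
    ("python machine learning", 1),
    ("python women's chess club", 0),
    ("java women's coding society", 0),
    ("women's engineering society java", 0),
]

hired, rejected = Counter(), Counter()
for text, label in training:
    for word in text.split():
        (hired if label else rejected)[word] += 1

def weight(word):
    # Smoothed log-odds of a word appearing in hired vs rejected resumes.
    return math.log((hired[word] + 1) / (rejected[word] + 1))

def score(resume):
    return sum(weight(w) for w in resume.split())

# Two candidates with identical skills; one mentions a women's club.
print(score("java python chess club") >
      score("java python women's chess club"))  # True
```

No rule about gender was ever written down; the penalty emerges entirely from the skew in the historical labels, which is why patching individual words kept failing.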
Similar to Amazon’s hiring AI, Google’s
advertising algorithm also mirrored cultural
biases of historical data.
A study found that the system shows ads for
high-income jobs to men disproportionately
more often than it does to women. 
In other cases, users can attempt to feed
the algorithm with biased information and
manipulate its outcome.
Not so long ago, Google Search’s autosuggest
feature relied heavily on user-input data.
That lasted until users learned how to easily
game the system to manipulate its rankings, or
simply to troll the search engine with a cesspool
of offensive suggestions.
So Google decided to intervene drastically
in its search algorithm, removing entire
dictionaries of non-advertiser-friendly terms.
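The gaming problem is inherent to any suggester ranked by raw popularity. A minimal sketch with invented queries, nothing like Google’s actual pipeline: whoever submits a phrase most often controls the top suggestion.

```python
from collections import Counter

# Toy autosuggest ranked purely by query frequency.
query_log = Counter()

def log_query(q):
    query_log[q] += 1

def suggest(prefix, k=3):
    matches = [q for q in query_log if q.startswith(prefix)]
    return sorted(matches, key=query_log.__getitem__, reverse=True)[:k]

for _ in range(40):
    log_query("cats are cute")    # organic traffic
for _ in range(500):
    log_query("cats are evil")    # coordinated brigade

print(suggest("cats are", k=1))   # ['cats are evil']
```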
Artificial intelligence is also used to predict
criminal behavior, and judges rely on these
predictions to determine sentencing.
But not even this realm is immune to algorithmic bias.
One such widely used algorithm flagged African
Americans who never went on to re-offend as
higher risk at twice the rate of white Americans.
Conversely, white Americans labeled lower
risk went on to re-offend twice as often as African Americans.
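The two error rates in question are straightforward to compute once outcomes are known. A sketch with invented counts, chosen only to mirror the reported roughly two-to-one gap, not the actual study data:

```python
# Each record: (group, flagged_high_risk, actually_reoffended).
records = (
    [("A", True,  False)] * 4 + [("A", False, False)] * 6 +  # A: 4/10 false alarms
    [("B", True,  False)] * 2 + [("B", False, False)] * 8 +  # B: 2/10 false alarms
    [("A", False, True)]  * 2 + [("A", True,  True)]  * 8 +  # A: 2/10 missed
    [("B", False, True)]  * 4 + [("B", True,  True)]  * 6    # B: 4/10 missed
)

def false_positive_rate(group):
    # Share of non-reoffenders wrongly flagged as high risk.
    flagged = sum(1 for g, f, r in records if g == group and f and not r)
    total = sum(1 for g, f, r in records if g == group and not r)
    return flagged / total

def false_negative_rate(group):
    # Share of reoffenders labeled low risk.
    missed = sum(1 for g, f, r in records if g == group and not f and r)
    total = sum(1 for g, f, r in records if g == group and r)
    return missed / total

print(false_positive_rate("A"), false_positive_rate("B"))  # 0.4 0.2
print(false_negative_rate("A"), false_negative_rate("B"))  # 0.2 0.4
```

Note that a tool can hit the same overall accuracy on both groups while its mistakes point in opposite directions, which is exactly the asymmetry described above.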
Machine-learning algorithms are still very
weak at understanding nuances of human language.
Under pressure from advertisers, YouTube
cracked down on extremist content by automatically
flagging and demonetizing videos containing
a whole vocabulary of keywords.
But the algorithm is not capable of differentiating
between content that is truly extremist and
content that is educational or merely reporting on it.
YouTube’s workaround was to give mainstream
media an exclusive pass, automatically alienating
independent creators and journalists in the
process.  
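A blocklist of the kind described can be a few lines of code, which is exactly why it cannot tell propaganda from a documentary. A toy version, with invented terms and titles:

```python
# Crude keyword filter: flags any title containing a blocklisted term,
# with no notion of context or intent.
BLOCKLIST = {"war", "shooting", "terrorism"}

def is_demonetized(title):
    return bool(set(title.lower().split()) & BLOCKLIST)

print(is_demonetized("glorious war propaganda"))           # True: intended target
print(is_demonetized("documentary: the history of war"))   # True: educational, hit anyway
print(is_demonetized("local news report on the shooting")) # True: journalism, hit anyway
print(is_demonetized("cooking with grandma"))              # False
```

All three of the first titles trigger the same rule; the filter sees only tokens, never meaning.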
The success of machine-learning algorithms
stands and falls on the availability of good data.
The catch is that there will always be less
data about minorities, which leads to a
higher likelihood of invalid statistical patterns
about them.
A perfect manifestation of this reality
is Amazon’s facial recognition tool, which
misidentified women as men 19% of the time,
and brown and black women as men up to a third of the time.
The algorithm itself is not always to blame
for the bias.
Sometimes the corporate or organizational interests
of its creators can hugely interfere with what it delivers.
As Google grew to become a dominant search
engine worldwide, it slowly began offering
more and more services that directly competed
with the market of providers that relied on
Google search to reach their customers.
[5 a,b] When the company launched Google Finance,
it began prioritizing it over the organic
search results for relevant keywords, even
though Yahoo Finance was the most popular with users.
This practice then expanded to Google Health,
Google Reviews, Maps, video, travel and bookings, and email.
Prioritizing its own products allowed Google
to steal up to 34% of the search traffic.
Now that percentage is even higher, as Google
Search offers instant answers and a wider
range of Google products that make users stay
on Google longer and thus generate more ad
revenue for the company.   
This is not a critique of whether Google,
as a private company, should be allowed to
push its own products.
Rather, it’s to show yet another vector
for bias to sneak into the algorithm, and
that its search engine is not as neutral as
Google would have you believe.
Corporate bias is a powerful factor.
And corporate bias is especially important
to political insiders.
Longtime Google executive Eric Schmidt has
worked hand-in-hand with the Democratic
Party, both with Obama and with Hillary Clinton.
There was a lot of effort from Google insiders
trying to get Hillary Clinton elected.
This included implementing features that would
manipulate the Latino vote in key states, and investing
in startups and groups that would support
the Clinton campaign with technology, data, and advertising.
Tulsi Gabbard probably doesn’t enjoy the
same level of insider connection with one
of the most influential tech companies in the world.
So whether the temporary suspension of her account
at a critical moment was just an error of
the algorithm or was intentional remains speculation
at this point.
Had Tulsi had people on her side at Google
headquarters, this suspension might have never
taken place or would have been much shorter.
Google is refusing to give answers to crucial questions.
What variables triggered the automated system
to suspend her account?
Was it flagged by the algorithm and then suspended
by a human reviewer, or was the decision made
by the algorithm alone?
What unusual activity led to the algorithm
flagging Tulsi’s Ads account?
Spending significantly more on Google ads
after she became the most searched candidate
was exactly the rational move one would expect
from a presidential candidate.
Definitely not unusual activity.
Capitalizing on the search traffic would be
anyone’s strategy.
It’s very difficult to understand the reasoning
behind suspending her account under these circumstances.
This practice of unaccountable moderation
is an industry standard across all major social
media platforms [3 a,b].
Routine censorship raids on social media have
given the right an argument to accuse Silicon Valley of liberal bias.
Whatever the case is, the presence of bias is undeniable.
Algorithms are mathematical, but they can
only learn from people.
A good step forward would be to admit the
bias exists and open up the source code of
machine-learning algorithms, so that we
can study these biases in real time as they arise.
Secret development of artificial intelligence
by unaccountable tech corporations is a recipe
for dystopian control of the information flow
and monopolization of Internet markets.
Tulsi Gabbard learned this the hard way.