Universal Basic Income is gaining support from many, including high-tech execs. Given its known flaws, could UBI be a viable solution to the mass loss of jobs to AI?

tl;dr

As AI improves, the majority of people born today will become unemployable before the traditional retirement age of 65. It is unlikely that traditional social welfare will be sufficient or sustainable enough to support a majority of the population that is unable to produce much traditional economic value.

UBI, first floated by thinkers like Thomas Paine, has resurfaced as a potential solution to the AI-caused job apocalypse. …


The Singularity, the point at which AI can improve itself faster than humans can, has been a topic of discussion since the 1960s. In 1965, I. J. Good speculated about “an ultra-intelligent machine... that can far surpass all the intellectual activities of any man however clever”.

James Barrat’s bestseller Our Final Invention brought this topic from academics and nerds into mainstream concern (or, shall I say, fear?).


The late, great Stephen Hawking and the ever-controversial Elon Musk have been among the most vocal in warning that AI could be the worst thing to happen to humanity in the history of our civilization. …


Great leaders, like everyone else, aren’t always right; what sets them apart is that they recognize this and make conscious efforts to compensate for it.

In this article, I’ll deviate a little from my usual theme and discuss leadership in human leaders instead of Artificial Intelligence. After all, part of me is still an MBA, which takes nothing away from my Data Science degree. Nevertheless, the urgency of openly discussing this topic is partly driven by the success of Machine Learning: consistent self-adjustment based on data has helped Machine Learning outperform humans in many intellectual tasks.

In a relentlessly self-improving business world, there is an unlimited supply of materials, training, articles, and workshops on “how to persuade better”, “being more…


Trigger warning: Boring index 9/10. This is not an article about an exciting technology breakthrough; instead, it is an AI philosophy essay refining the four-decade-old Chinese Room thought experiment, in which, if it works as originally designed, someone who communicates intelligently need not understand a thing they read or write…

Background

In 1980, John Searle, who teaches at UC Berkeley (where I am getting my Master’s in Data Science), published the famous Chinese Room thought experiment.

Searle is a philosopher; he does not claim to be an AI researcher. Nevertheless, the Chinese Room was one of…


As Deep Learning, particularly CNNs in image processing, gained success and fame, it also garnered serious concerns and criticism, some of which came from *the* pioneers of the field.

Yann LeCun and Yoshua Bengio have also made similar comments about Deep Learning’s limitations, or even “the end” of it.

Two of the main challenges pointed out by many pioneers and researchers are: 1) it takes tens of thousands, if not millions, of pictures to train a Deep Learning image classifier, and even then it does not generalize very well; 2) Deep Learning does not have its own model of…
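To make the first challenge concrete, here is a minimal sketch (mine, not from the essay) of training a small CNN image classifier in PyTorch on CIFAR-10. The dataset, architecture, and hyperparameters are illustrative assumptions only; the point is simply how many labeled images even a toy setup consumes.

# A minimal sketch, assuming PyTorch and torchvision are installed.
# Even a "small" benchmark like CIFAR-10 ships 50,000 labeled training images,
# and a simple CNN trained on it still tends to generalize poorly under
# distribution shift. Everything below is an illustrative assumption,
# not the author's setup.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor()
)
print(f"Labeled training images required: {len(train_set)}")  # 50,000

loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

# A deliberately tiny CNN classifier for 3x32x32 inputs.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(1):  # one pass over all 50,000 images, for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()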


We are getting into a somewhat philosophical discussion, so we need to agree on a minimal working idea of what “intellect” and “intelligence” mean; they can mean many different things to different people. What these different definitions share is that intelligence is generally considered the mental ability to process information in a broad sense, including abstract concepts and emotional, motivational, and inspirational feelings. Intellect focuses more on the rational side of intelligence, where facts and logical processing are central: “the faculty of reasoning and understanding objectively, especially with regard to abstract or academic matters.”

It’s important that the…


This is the first of a trilogy discussing cross-intelligent-species value.

This topic is important as we continue our efforts on two fronts: SETI (the Search for Extraterrestrial Intelligence) and Artificial Intelligence. Significant progress on either front would make it inevitable that we negotiate how a pan-intelligence relationship should be developed, and what will become the basis for mutual trust and a mutually beneficial relationship.

So here is our humble start.

Why do we care about human values? How do we measure (or compare) human value? How much do we agree on a definition of human values?

It turns out we do not have…

Alan Tan

Pan-intelligence futurist — Don’t fear AI as in “Artificial Intelligence” in machines; Be fearful of AI as in ‘Absence of Intelligence’ in humans.
