ARTIFICIAL INTELLIGENCE: COMPUTING EVOLVES


The human brain is a marvellous central processing unit. Even as babies, without life experiences, we learn quickly and efficiently.


In contrast, computers have historically learned only through brute force and repetition. Sure, they can be programmed to do wonderful things. But to really learn and adapt, they require an amazing amount of raw processing power.

As the cost of processing power has plunged, scientists have finally discovered how to write and implement artificial intelligence routines that teach computers to learn how to solve problems using human-like logic. And successes have been impressive.

In this chapter, I will delve into what is possible as computers begin to learn to think like humans. I will show how advances in sheer computing power, coupled with AI, mean that solutions to many of our biggest problems as a society are within reach — and conversely, why AI may create a batch of new problems. Then I will make the investment case for two companies that have established clear leads in this important new field.

AI In The Beginning

For several decades, artificial intelligence was a pipe dream left to academics. The very idea that computers could learn the way humans do seemed outside the realm of possibility. Computers lacked the raw processing power, and researchers lacked data sets large enough to test their theories.

The cloud, sensors, and big data revived AI from the land of sci-fi. Now it is everywhere.

AI is actually not one thing. It is a broad range of computer science disciplines focused on helping computers learn independently. Because computers can process vast amounts of data quickly, they are very good at certain tasks like pattern recognition. This mode of AI is typically called machine learning.

Humans are able to process many different things at once. We can drive a car because we simultaneously acquire and process information through our eyes and ears while understanding what that information means and how it changes over time. Raindrops or bugs are distinctly different from stones and birds. Our eyes adapt effortlessly from the bright sunshine of a mountain road to the sudden darkness of a tunnel.

Before the enhancements provided by AI, computers could perform a compound task only when explicitly programmed with a set of specific binary, yes/no instructions.

AI simulates human logic by processing multiple information feeds in parallel using neural networks, which are special algorithms inspired by human biology. They mimic the way our brains and central nervous system process information and learn.
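To make the neural-network idea concrete, here is a minimal sketch of an artificial neuron and a one-neuron-wide layer in Python. The weights, biases, and sigmoid activation below are arbitrary illustrative choices, not any particular production system:

```python
import math

# A minimal artificial neuron: weighted inputs summed, then passed through
# a squashing (sigmoid) function, loosely mimicking a neuron's firing rate.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # output between 0 and 1

# Several neurons evaluated in parallel form a layer; stacking layers
# (feeding one layer's outputs into the next) forms a network.
def layer(inputs, weight_rows, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

out = layer([0.5, -1.2], [[0.8, 0.1], [-0.4, 0.9]], [0.0, 0.1])
print([round(v, 3) for v in out])
```

Learning, in this picture, is nothing more than nudging the weights so the outputs move closer to the right answers over many examples.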

AI is far from perfect, but science is improving dramatically. Powerful computers help AI neural networks run countless simulations, untethered to human frailty, to escape the bounds of linear thinking.

Google developed a set of AI neural networks to translate languages. In one case, the networks were taught to translate between Polish and Korean. After much trial and error, the nets developed their own language based on common pairs. Then the networks on their own solved the tricky translation problem, and they did it with increasing speed and accuracy.

Initially, engineers were perplexed. They could not determine how the networks accomplished this feat. Later, they concluded the AI had devised a brand-new language, or Interlingua, to make sense of language pairs. Once it had the cypher, the rest was a snap.

Now teams of engineers at companies all over the world are delving into all facets of the field: computer vision, machine learning, deep learning, image recognition, and speech recognition are all making great strides.

For investors, the opportunities are endless. Software engineers are on the cusp of solving some of the biggest problems in science — and potentially creating new ones.

Wars Involving AI

Imagine swarms of tiny drones. Now, imagine them armed with lethal weapons and AI: tiny, flying killers.

It seems like bad science fiction. It’s not.

The technology already exists, reports Alvin Wilby, an executive at Thales UK, a major European defence contractor.

In November 2017, Wilby told the Lords Artificial Intelligence Committee, a British government panel, that it’s just a matter of time before terrorists unleash “swarms” of small, lethal smart-drones.

And other people who should know are worried, too.

Like Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield, who told the Lords Artificial Intelligence Committee that he fears untested, makeshift technology could end up in the hands of terrorists, like ISIS.

At a web conference in Portugal, in November 2017, the late Stephen Hawking, a renowned theoretical physicist, was even more downbeat. He warned that the development of full artificial intelligence could spell the end of human civilization.

Elon Musk, the tech visionary behind Tesla and SpaceX, was more succinct. He declared that AI is “our biggest existential threat.”

It’s easy to dismiss such worries. However, we live in an era when information technology is progressing at an exponential rate. Amazon, Intel, and Qualcomm have already demonstrated drones capable of autonomous navigation and targeting.

Scaling up the technology to swarms would not require “any inventive step,” says Sharkey.

In October 2016, the US military demonstrated how a swarm of autonomous drones might be used for surveillance. Three F/A-18 Super Hornets released 103 drones, each with only a 12-inch wingspan. The diminutive Perdix devices shared a single software brain that was programmed with one purpose — to avoid radar installations. In the test, the drones behaved as one unit. There was no leader. As in nature, the swarm adapted to lost members.

The Pentagon started working on Perdix in 2011. The idea for small, swarming drones came from a group of aeronautics students at MIT. In September 2014, the project had its first test flight at the Edwards Air Force Base. A year later, 90 Perdix drones were tested for military surveillance in Alaska. And in October 2016, an F/A-18 Super Hornet dropped a swarm of Perdix drones over a test field at China Lake, California.

Larger weaponized drones with AI brains have already become a mainstay in modern warfare.

In May 2016, the US Navy showed off a drone system that is launched into the sky like a missile.

Once airborne, Locust drones, loaded with micro explosives, form packs. The group then communicates over a custom radio link. Collectively, they locate targets, then dive-bomb, kamikaze-style.

Slaughterbots is a short film that melds the advances made by the military and private business into a single dystopian nightmare. The film’s hook: “Watch what happens when the weapons make the decisions.”

It’s as scary as it seems.

And all of the technology to make it happen exists today, progressing at a rapid rate. Smartphones gave us micro-electro-mechanical systems (MEMS). Electronic components, like accelerometers, GPS receivers, and solid-state compasses, are finally small and inexpensive enough.

And because billions of people are carrying smart devices everywhere, there is enough data to finally fuel the AI revolution scientists promised decades ago. Algorithms are getting better every minute as they chew up and digest the deluge of data.

Along the way, investors have accumulated massive profits. With careful selection, the next step might be even more lucrative.

Artificial intelligence is creating distinct winners and losers. It is concentrating power among those firms that bet early and wisely. They have economies of scale. They also have the power to erect unique barriers to entry.

Hopefully, the drones of Slaughterbots will remain fiction. But their very possibility creates a bonanza for a few defence contractors focused on aircraft and missiles, like Northrop Grumman (NOC), Raytheon (RTN), and Lockheed Martin (LMT). Well-placed fears mean AI is likely to be a major new force multiplier in the industry.

The number of conglomerates capable of meeting this challenge is tiny.

Those companies and their supplier ecosystems are going to win an enormous prize when contracts get doled out in the heat of the paranoia.

Remember, these are government contracts. Companies need to meet stringent security guidelines.

I have been recommending defence contractors with major AI programs for years. This is a really big trend. It is going to happen, and investors need to begin taking positions now.

The genies are out of the bottle. And they’ve got self-guided bazookas over their shoulders.

Don’t look now, but video surveillance is right behind you. It was inevitable. The willing surrender of privacy, the fear of bad actors, and the advent of advanced AI make a potent combination.

In mid-2017, police in Dubai began testing a diminutive self-driving car on city streets. The robotic rig, about the size of a baby buggy, features cutting-edge video gear, networked facial-recognition software, and an aerial drone in case undesirables go off-road.

Boosted by AI, video surveillance has become a service. And it is about to explode.

According to Markets and Markets, a global research consulting group, the appetite for Video Surveillance-as-a-Service (VSaaS) will grow from $30.37 billion in 2016 to $75.64 billion in 2022, a compound annual growth rate of 15.6 per cent.

VSaaS providers and their component suppliers are poised to clean up.

Demand is surging thanks to the perception of rising crime rates, increased terror attacks, and the numb acceptance of video surveillance. Meanwhile, the cost of camera sensors, network storage, and computing power is plummeting.

Then there is automation. Video surveillance used to be labour-intensive. Humans monitored video screens 24/7, and they sometimes nodded off. They are being largely replaced by AI algorithms capable of recognizing faces and detecting movement, even in the dark.

And what public cameras don’t capture, state-owned bots crawling pervasive social media do.

Dubai's initial ambitions are more modest. Wired reported the emirate also contracted with New Zealand's Martin Aircraft Co. to equip firefighters with jetpacks. The policing robot fits the same narrative: cool, cutting-edge tech for a city or state that wants to be seen leading.

The machines are being built by OTSAW Digital, a Singapore company. In a press release, its chairman, Ling Ting Ming, explained the goal is more about using robots to augment policing, rather than to track humans.

“Robots exist to improve the quality of human lives,” Ling says.

Happy talk aside, I’m optimistic because new technologies normally lead to important new industries and to new business models, like VSaaS.

Despite the enormous potential market, the rise of VSaaS is barely on investors' radar. While video surveillance in North America will not reach Chinese penetration any time soon, casual observation at airports or crowded public places like stadiums shows that the number of cameras is growing.

In the current environment of terror and travel bans, this development will grow dramatically.

However, navigating this space carefully is important. Video surveillance hardware is a fragmented marketplace. The market for VSaaS software is even more complicated.

A tiny New York start‑up is having a huge impact on insurance.

Lemonade Insurance Co. wanted to disrupt the way insurance companies do business. So it replaced brokers and paperwork with bots, machine learning, artificial intelligence, and a simple smartphone app.

Then the magic happened.

Policies are created in 90 seconds. Most claim payouts take just three minutes. The industry noticed. Now business models are changing everywhere.

It was really just a matter of time before a sea change arrived. Insurance has not changed materially since 13,000 homes were lost in the Great Fire of London in 1666. This change is coming at the hands of two serial entrepreneurs bent on doing social good.

If the horn-rimmed glasses, grey T‑shirt, and jeans are not a giveaway, CEO Daniel Schreiber is on the idealistic side of entrepreneurship. Lemonade sells low-cost rental and household insurance on monthly subscriptions. It scrapes a small fee and stows the rest in case claims arise.

In most years, says Schreiber, there will be money left over. That excess is donated to the charity of the policyholder's choice at the end of the year. Lemonade calls this process Giveback. The idea is to put money that would otherwise be profit toward social good.

Also, in theory, removing the profit motive eliminates the inherent conflict of interest that insurers face when negotiating claim payouts.

It also gently discourages policyholders from embellishing claims.

None of this would be possible without cutting-edge technology. Lemonade is built on a foundation of artificially intelligent bots and a lot of machine learning cranking away in the background.

The software determines everything. What the policy should cost. What the payout should be.

Lemonade calls the results instant everything: coverage in seconds and claim payouts in minutes. Lemonade once settled a claim for a stolen Canada Goose jacket in only three seconds.

That is causing havoc in the industry. Time is money.

So stodgy insurance companies are stepping up their own technology game. Many are using customer-generated photos. Others are using AI-enhanced drones to survey claims that would otherwise require a special adjuster.

Liberty Mutual routinely sends operators with drones to survey damaged roofs. The devices save time and the expense and danger of sending someone up a ladder. The Wall Street Journal reports 40 per cent of American auto insurers no longer use human adjusters in many cases.

Lemonade software asks policyholders to snap a photo and, in some cases, record video testimony of the damage and the incident. From there, it runs 18 separate anti-fraud algorithms to apply artificial intelligence to the hunt for deception.

Most cases are settled without human input. It’s possible because smartphones are equipped with great cameras that transmit metadata by default.

Claims processed by AI algorithms can take two to three days at the longest. Those handled by humans usually take 10 to 15 days.

That means significant savings that could stretch into billions of dollars. S&P Global Market Intelligence says investigating claims accounts for 11 per cent of every dollar of premium collected.

There is also the problem of leakage. That is the difference between what insurers ultimately shell out and what the claim should have cost. A 2010 Booz Allen study showed costs increase directly with the life of an open claim.

And every dollar lost negatively impacts the bottom line.

The opportunity for investors is huge. Insurance companies that adapt quickly will see dramatic cost savings and fatter profits. New actuarial models are certain to evolve. Business models and corporate structures will follow.

Zendrive makes a smartphone app that uses machine learning algorithms and pattern analysis to generate actionable safety insights for both individual vehicle owners and corporate fleet managers. It works by collecting data from the sensors in a smartphone. The algorithms determine whether the driver is speeding, driving aggressively, distracted, or using the phone. Real-time analytics are then sent back to the insurer. Good drivers get lower rates.
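Zendrive's actual models are proprietary, but the general approach of deriving driving events from phone sensors can be sketched. The function name, data shape, and the 3.5 m/s² deceleration threshold below are illustrative assumptions, not Zendrive's API:

```python
# Illustrative sketch: flag harsh-braking events from accelerometer samples.
# Each sample is (timestamp_seconds, forward_acceleration_m_s2); a strongly
# negative value means hard deceleration along the direction of travel.
def harsh_braking_events(samples, threshold=3.5):
    events = []
    for t, accel in samples:
        if accel < -threshold:  # braking harder than the threshold
            events.append(t)
    return events

trip = [(0.0, 0.2), (1.0, -1.0), (2.0, -4.2), (3.0, -0.5)]
print(harsh_braking_events(trip))  # [2.0]
```

A real system would fuse GPS speed, gyroscope, and screen-interaction data, and smooth out sensor noise, but the principle is the same: raw sensor streams in, discrete risk events out.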

Since the Silicon Valley company launched in 2013, it has logged more than 75 million miles of data. It has also attracted the likes of BMW as an investor, and General Re as an insurance partner.

And the only hardware required is a simple smartphone, something owned by around 90 per cent of US adults.

Another firm, Cape Analytics, is using AI-enhanced computer vision to make accurate property assessments for insurers without the need to send out an adjuster.

The company operates on the premise that the best way to write a policy is to have accurate information from the beginning. Cape uses satellite imagery and machine learning to build a robust database of key attributes. When customers apply for insurance, the key data has already been collected, leading to faster approval times and lower costs.

And everything is stored in the cloud, based on real-time data.

Nvidia Bets Big on “Deep Learning”

From 2014 to 2017, shares of semiconductor maker Nvidia (NVDA) advanced 1,167 per cent. While that gain is crazy, investors may still be underappreciating the scale of the opportunity of this premier AI firm going forward.

The essence of Nvidia and its hold on the future of computing is based on artificially intelligent software. That is a long way from where it began: designing cutting-edge chips, the hardware brains of computers.

Long ago, the company committed to data science that helps computers see, think, and learn like humans. Deep learning, a type of AI that runs best on graphics processing units, has been embraced by computer scientists. That rapid adoption by cutting-edge customers is driving Nvidia's bottom line.

Humans make snap decisions based on experience. If we’re travelling on a freeway and a bug is hurtling toward the windshield, well, it’s a bad day to be a bug. Until recently, computers had to stop, process the threat posed by the bug, and then decide what action to take. It was complicated.

That’s because conventional computer architecture is sequential. Deep learning is based on a new model where billions of software neurons and trillions of connections run in parallel, in networks.

In 2011, Alphabet’s secretive Google Brain learned to identify cats and people by endlessly watching cat videos on YouTube. That seemingly simple feat required 2,000 central processing units and Google’s vast data centre network. Later, Stanford University managed to replicate this by using deep learning and just 12 Nvidia accelerated GPUs. By 2015, researchers at Google and Microsoft (MSFT) used deep learning AI to beat humans in image recognition.

“By collaborating with AI developers, we continued to improve our GPU designs, system architecture, compilers, and algorithms, and sped up training deep neural networks by 50x in just three years — a much faster pace than Moore’s Law,” wrote Jen-Hsun Huang, Nvidia’s founder and chief executive.

Huang is comfortable with uncommon choices, wearing a leather jacket at corporate functions and flashing tattoos. His company cut its teeth two decades ago making high-end graphics cards, the PC hardware that turns code into images.

But it’s come a long way since then. Its clientele, mostly gamers, demanded photorealistic imagery. So, Huang pushed the company to invest heavily in developing better software modelling.

And then . . . it clicked. Nvidia was sitting on a completely new method of computing.

It combined traditional serial instruction processing from CPUs with massively parallel data processing from graphics processing units.

This result did not come cheap. The New York Times reports that Nvidia has spent $10 billion developing its GPU computing platform.

Given the initial size of the company, that was a huge bet.

The impact of deep learning has been enormous. Researchers in healthcare, life sciences, energy, financial services, manufacturing, entertainment, and automotive are innovating at a frenetic pace.

Tesla (TSLA) in 2017 showed a self-driving vehicle equipped with Nvidia Drive PX hardware. The car successfully navigated busy residential streets, winding country roads, and the interstate before parallel parking in front of the corporate storefront. Daimler, Audi, and others are using Nvidia neural networks to advance their self-driving platforms, too.

The Drive PX Pegasus, the latest AI computer from Nvidia, can process 320 trillion operations per second. That’s enough horsepower to deal with cameras, LiDAR, ultrasound, and any other sensor data required for full autonomy. And it fits inside a container the size of a lunchbox.

It’s no wonder taxi, trucking, and logistics companies are clamouring to get their hands on it.

Meanwhile, Nvidia is pushing forward with new products and new markets.

In December 2017, the company revealed the Titan V. It’s a $3,000 graphics card powerhouse. Yet it is not meant for graphics. Its purpose is to extend Nvidia’s GPU computing platform to the next generation of workstations.

Investors should take note of this kind of forward thinking. This is what great companies do.

Today, GPUs are standard fare in the field of AI. From university researchers to bitcoin miners, smart coders are using the platform to push the limits of learning. In the process, Nvidia has attempted to break free of the cyclical nature of the semiconductor business.

The company put itself in the business of solving big problems by using AI.

Fortune named Huang as its Businessperson of the Year for 2017. That’s cool . . . but a decade late.

Gartner, the global IT consulting firm, predicts migration to the cloud is a $1 trillion opportunity by 2020. Public cloud companies Amazon Web Services, Microsoft Azure, Google Cloud, Baidu, Oracle, Alibaba, and Tencent are all investing heavily in AI.

They see it as a value-added service — a way to entice corporate customers. And they want to make sure they are covering all bases. So in addition to their own CPU-based AI frameworks, each enthusiastically supports Nvidia's GPU platform.

To put the momentum of its data centre business in perspective, sales were $1.93 billion in fiscal 2018, up 3x in just three years.

Titan V brings the same components found in Nvidia’s $10,000 data centre computer cards to the desktop. This means 5,120 compute cores, 640 machine-learning cores, 21 billion transistors, and the Volta GPU architecture.

All of this adds up to a monumental performance leap over everything in the marketplace.

It will allow researchers and developers to build AI software models right at their desk. More importantly, it extends GPUs into more applications.

And that will help Nvidia sell more hardware. It is a virtuous circle.

I have been directing investors to buy Nvidia shares for years. The attraction was not AI originally. Rather, it was smart management. And sure enough, the company leveraged its graphics expertise into an entirely new way to solve complex problems. When the advantage was apparent, it bet big.

This is the attribute investors should seek. Great companies stay focused. They leverage their talents. When it's clear they have a competitive advantage, they strike and chew up the competition.

Alphabet’s Long Bet on AI

Long before Nvidia’s moonshot, and long before the company changed its name to Alphabet, Google was the quintessential artificial intelligence company.

It revolutionized Internet search with machine learning in the late 1990s. It hired the brightest minds in the field. It even began fiddling with code for self-driving cars as early as 2009.

Yet even as it became one of the biggest Internet businesses on the planet, its prowess in the field of AI didn’t get a lot of attention. Then, in January 2014, the company acquired a British AI company called DeepMind Technologies.

Demis Hassabis, its charismatic founder, had been teaching computers how to play video games as well as humans. Using a set of custom algorithms, and a Neural Turing machine, an external computing device that mimics human short-term memory, he was making tremendous progress.

That caught the attention of the Google founders.

Now DeepMind says its newest neural networks no longer require human input.

This clandestine Alphabet unit has been at the vanguard of AI research for a while. But this latest development is groundbreaking.

And the implications for everything from drug discovery to materials design are huge.

At first glance, what DeepMind is doing might seem trivial. Its AI efforts have been largely confined to what look like highbrow parlour tricks. In March 2016, AlphaGo, its strategy game-playing neural network, became the first AI program to beat a top professional human Go player. And not just any player: Lee Sedol was an 18-time Go world champion.

Go is a Chinese board game for two players developed 2,500 years ago. It involves opposing stones placed on a 19×19 grid. The objective is to surround more territory than your opponent.

The AlphaGo victory made headlines. Go is infinitely more complex than chess. It also stretches the limits of human general intelligence. Previously, it was something researchers found difficult to replicate with machines.

AlphaGo was the product of months of human training and countless computational hours. When Sedol was defeated 4-to-1, it validated unproven AI theories. It meant current artificial general intelligence structures were far enough along to replicate high-level human intelligence.

And now, DeepMind’s AlphaGo Zero takes this to another level. Researchers presented Zero with the game rules, a board, and game piece markers only. There were zero human trainers, zero strategy lessons.

The AI learned Go in 72 hours by playing against itself in 4.9 million simulations. To improve, it had to continuously rethink the algorithms it was generating.

Then researchers matched Zero with AlphaGo. It was ugly. Zero slaughtered AlphaGo, 100-to-0.

It accomplished this with one neural network, four processors, and no human helpers. It invented super-successful strategies that humans had never considered. By contrast, AlphaGo needed two networks, 48 processors, and months of human coaching.
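DeepMind's actual pipeline is far more elaborate (Monte Carlo tree search guiding a deep residual network), but the core self-play idea can be sketched on a much simpler game. The toy game, learning rate, and exploration rate below are stand-in assumptions chosen purely for illustration: the agent starts knowing only the rules and improves by playing against itself.

```python
import random

# Self-play value learning on a trivial Nim-like game: players alternate
# taking 1 or 2 stones from a pile; whoever takes the last stone wins.
def train(pile=10, episodes=5000, epsilon=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    value = {}  # state -> estimated win chance for the player to move
    for _ in range(episodes):
        state, history = pile, []
        while state > 0:
            moves = [m for m in (1, 2) if m <= state]
            if rng.random() < epsilon:      # occasionally explore
                move = rng.choice(moves)
            else:                           # leave opponent the worst state
                move = min(moves, key=lambda m: value.get(state - m, 0.5))
            history.append(state)
            state -= move
        # The player who made the final move won; back up the result,
        # alternating winner/loser as we walk the game backwards.
        for i, s in enumerate(reversed(history)):
            won = (i % 2 == 0)
            v = value.get(s, 0.5)
            value[s] = v + lr * ((1.0 if won else 0.0) - v)
    return value

value = train()
# With perfect play, piles divisible by 3 are lost for the player to move;
# self-play discovers this without ever being told the strategy.
print(value[3], value[4])
```

The loop mirrors Zero's recipe in miniature: play yourself, score the outcome, refine the evaluations, repeat.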

Some smart people have made pointed observations about the need to govern the speed of AI development. Bill Gates, Elon Musk, Stephen Hawking, and others fear military applications. You can imagine the harm a weaponized AI machine might cause if its sole purpose was to kill humans. Now, imagine the same machine endlessly refining its skills at an exponential rate.

Dictators understand the military opportunity. Vladimir Putin, the president of Russia, recently said the country that dominates AI will be the leader of the world.

It’s enough to give the Terminator goosebumps.

On the other hand, AI machines unencumbered by human limitations should be capable of astonishing feats. The ability to run endless simulations is only limited by computer power, and that is progressing exponentially. New, ultra-efficient AI chipsets are operational. Next-generation hardware is in development.

It’s a brave new world. Almost anything is possible.

Nvidia has an entire division devoted to designing AI architectures for the pharmaceutical industry. Researchers are using deep learning to understand vast amounts of bioscience data. They are developing personalized medicine and attacking Parkinson’s, Alzheimer’s, and cancer.

DeepMind is using its networks to better understand quantum chemistry. It’s early, but Hassabis dreams of finding a room-temperature superconductor that would revolutionize battery development.

Investors need to understand the landscape has changed. They need to be aware some sectors will be disrupted, while others will thrive.

New Google DeepMind Technology Has a Mind of Its Own

Alphabet is in a unique position. It is the premier artificial intelligence company in the world, hands down. Everything the company does revolves around AI and the data needed to feed hungry algorithms.

Larry Page, the co-founder of Alphabet, tells an interesting story about how the fledgeling Google fell into Internet advertising. He and Sergey Brin, a brilliant computer scientist, were working in the same office at Stanford University. In the spring of 1996, with the Internet blossoming, Page had an idea: mapping the link structure of web pages, and the relationships between them, might be an interesting endeavour.

In March 1996, Page launched BackRub, an army of search bots with the task of determining web page back-links. These spiders endlessly crawled the web, cataloguing links based on citations. As the project became more complex, Brin was drawn in. He was the same age as Page, but two years ahead academically because he completed his undergrad degree at age 19.

The project grew. It became PageRank. Page and Brin worked tirelessly, developing new math to solve emerging problems. Brin explained that PageRank basically made the entire Internet into a math equation with several hundred million variables. Unwittingly, Brin and Page had developed the best search engine available. What made it so was relevancy and its recursive underpinnings. It got better and better with more data — an AI virtuous circle.
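Brin's "math equation with several hundred million variables" can be shown in miniature. The four-page link graph and the standard 0.85 damping factor below are illustrative; the recursive principle, that a page's score depends on the scores of pages linking to it, is PageRank's:

```python
# Toy PageRank by power iteration: a page's score is the chance a random
# surfer lands on it, computed recursively from the pages linking to it.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
pages = sorted(links)
damping, n = 0.85, len(pages)
rank = {p: 1.0 / n for p in pages}   # start with equal scores

for _ in range(50):  # iterate until the scores settle
    new = {p: (1 - damping) / n for p in pages}  # random-jump baseline
    for p, outs in links.items():
        for q in outs:                # each page splits its score
            new[q] += damping * rank[p] / len(outs)
    rank = new

print({p: round(rank[p], 3) for p in pages})
```

Page "C", with the most incoming links, ends up ranked highest, and page "D", with none, lowest; more data (more pages, more links) only sharpens the estimates, which is the virtuous circle described above.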

In the late 1990s, the dot-com boom was in full bloom. Yahoo and Excite, another popular search engine at the time, were both born at Stanford. Page and Brin tried to sell the PageRank technology to Excite for the modest sum of $1.6 million and Excite stock. The offer was rejected. Excite saw itself as a portal, a destination. It scoffed at the idea that search could be an important part of the business.

So Brin and Page decided to go it alone. Armed with $100,000 in seed capital from a founder of Sun Microsystems, himself a Stanford alumnus, the company became Google, a name taken from an intentional misspelling of googol, the number one followed by 100 zeros.

By 1999, Google was performing seven million searches per day. There was no promotion. No advertising budget. And initial plans to license its search technology to Internet portals and corporate websites met with limited success. In order to fund the growth of the business, and further machine learning, Brin and Page reluctantly developed an advertising business model.

At the time, advertising was the only way to exploit machine learning.

Ultimately, a licensing deal with Yahoo gave Google the data it needed to perfect its algorithm. Searches grew to 100 million per day.

By 2002, advertising on Google got an upgrade. In addition to paying per click and placement, advertisers got the opportunity to bid against competitors’ ads. This introduced relevancy. It also meant advertisers could end up paying less per ad if their ads were more relevant to searchers and drew more clicks.
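The mechanism can be sketched in a few lines. This is a simplification of what became Google's relevance-weighted ad auction; the function name, bids, and click-through rates are made-up illustrations:

```python
# Illustrative relevance-weighted ad ranking: ads are ordered by
# bid times estimated click-through rate, so a cheaper but more
# relevant ad can outrank a higher bid.
def rank_ads(ads):
    """ads: list of (name, bid_dollars, estimated_ctr) tuples."""
    return sorted(ads, key=lambda ad: ad[1] * ad[2], reverse=True)

ads = [("generic", 2.00, 0.01), ("relevant", 1.00, 0.05)]
print([name for name, _, _ in rank_ads(ads)])  # ['relevant', 'generic']
```

The "relevant" ad wins despite bidding half as much, because its expected revenue per impression (bid × clicks) is higher, exactly the incentive the 2002 redesign introduced.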

The redesign was an immediate hit. In the first year, the company did $440 million in sales and $100 million in profits. It was also a new source of important data.

That has been the business plan ever since. Alphabet fits businesses around its AI exploits. It started with Search. Then came YouTube and Gmail. When the iPhone arrived, executives immediately understood smartphones would become a data goldmine. Back in 2005, Google had purchased Android, a competing smartphone platform being developed by Andy Rubin.

Rubin promised Android would have the flexibility of Linux and the global reach of Windows.

Through mid-2018, Android commanded 82 per cent of worldwide market share and is a fountain of data for Alphabet engineers. And they have other irons in the fire as well.

The company is pushing the limits of AI with self-driving cars, biotechnology, home automation, and connectivity. For Alphabet, the attraction is data to feed its algorithms. The business purpose will come later. Sales growth in 2017 topped 23.7 per cent, exceedingly fast for a company of its size.

Sales and profits are accelerating as the company leverages its dominant digital platforms into other parts of the economy with AI. For many, Alphabet is an advertising business. Judging by sales and success, it certainly appears that way. It is the dominant digital advertising platform in the world. This misses the point. Advertising is merely a tool. It is the application of machine learning.

Alphabet has designs on changing the entire world by understanding the relationships between data and real-world events. And the company has more data and more engineers than any other business.

Source: Artificial Intelligence on Medium
