Artificial intelligence in Healthcare

Artificial intelligence (AI) is improving healthcare by reducing errors and saving lives. The AI health market was valued at $600 million in 2014 and is projected to reach $150 billion by 2026.
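
As a quick sanity check on those figures, the implied compound annual growth rate can be computed directly. A minimal sketch in Python, using only the numbers quoted above:

```python
# Back-of-the-envelope arithmetic on the figures above: the compound annual
# growth rate (CAGR) implied by going from $600 million in 2014 to a
# projected $150 billion in 2026.
start, end, years = 600e6, 150e9, 2026 - 2014
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 58% per year
```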

AI applications in healthcare range from finding new links between genetic codes to driving surgery-assisting robots.

How does AI help healthcare?

David B. Agus, MD, professor of medicine and engineering at the University of Southern California, believes artificial intelligence is already here and is fundamentally changing medicine. Machine learning, he says, allows computers to “learn with incoming data and identify patterns and make decisions with minimal human direction.”
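
To make that definition concrete, here is a minimal, purely illustrative sketch of the pattern-learning idea using scikit-learn. The features, labels, and risk categories are invented for illustration; this is not a medical application.

```python
# A toy illustration of machine learning as described above: the model
# "learns" from incoming data and then makes a decision on a new case with
# minimal human direction. All data and labels here are synthetic.
from sklearn.tree import DecisionTreeClassifier

# Past cases: (age, resting heart rate) with a label assigned in earlier reviews
X = [[34, 62], [51, 80], [47, 75], [29, 58], [63, 90], [56, 84]]
y = [0, 1, 1, 0, 1, 1]  # 0 = low risk, 1 = elevated risk (synthetic labels)

model = DecisionTreeClassifier(max_depth=2).fit(X, y)  # identify the pattern

# Apply the learned pattern to a new, unseen case.
print(model.predict([[45, 77]]))
```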

Which is the best application of AI in healthcare?

PathAI is developing machine learning technology to assist pathologists in making more accurate diagnoses. The company has worked with drug developers like Bristol-Myers Squibb and organizations like the Bill & Melinda Gates Foundation to expand its AI technology into other areas of healthcare.

Run:AI takes your AI and runs it, on the super-fast software stack of the future

Startup Run:AI exits stealth, promises a software layer to abstract over many AI chips

It’s no secret that machine learning in its various forms, most prominently deep learning, is taking the world by storm. Some side effects of this include the proliferation of software libraries for training machine learning algorithms, as well as specialized AI chips to run those demanding workloads.

The time and cost of training new models are the biggest barriers to creating new AI solutions and bringing them quickly to market. Experimentation is needed to produce good models, and slightly modified training workloads could be run hundreds of times before they’re accurate enough to use. This results in very long times to delivery, as workflow complexity and costs grow.

Today Tel Aviv startup Run:AI exits stealth mode, with the announcement of $13 million in funding for what sounds like an unorthodox solution: rather than offering another AI chip, Run:AI offers a software layer to speed up machine learning workload execution, on premise and in the cloud.

The company works closely with AWS and is a VMware technology partner. Its core value proposition is to act as a management platform that bridges the gap between the different AI workloads and the various hardware chips, and to run an efficient and fast AI computing platform.

AI chip virtualization

When we first heard about it, we were skeptical. A software layer that sits on top of hardware sounds a lot like virtualization. Is virtualization really a good idea when the whole point is to stay as close to the metal as possible and squeeze maximum performance out of AI chips? This is what Omri Geller, Run:AI co-founder and CEO, thinks:

“Traditional computing uses virtualization to help many users or processes share one physical resource efficiently; virtualization tries to be generous. But a deep learning workload is essentially selfish since it requires the opposite:

It needs the full computing power of multiple physical resources for a single workload, without holding anything back. Traditional computing software just can’t satisfy the resource requirements for deep learning workloads.”

Run:AI works as an abstraction layer on top of hardware running AI workloads

So, even though this sounds like virtualization, it’s a different kind of virtualization. Run:AI claims to have completely rebuilt the software stack for deep learning to get past the limits of traditional computing, making training massively faster, cheaper and more efficient.

Still, AI chip manufacturers have their own software stacks, too. Presumably, they know their own hardware better. Why would someone choose to use a third-party software layer like Run:AI? And what AI chips does Run:AI support?

Geller noted that there is diversity in the AI hardware that is currently available and that will become available in the next few years. In production, the Run:AI platform currently supports Nvidia GPUs, and Geller said Google’s TPUs will be supported in upcoming releases. He added that other dedicated deep learning chips will be supported as well once they are ready and generally available. But that’s not all.

Machine learning workload diversity and the need for a management platform

Geller pointed out that in the new era of AI, diversity comes not only from the various available hardware chips but also from the workloads themselves. AI workloads include support vector machines, decision tree algorithms, fully connected neural networks, convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and others:

“Each algorithm fits a different application (decision trees for recommendation engines, CNNs for image recognition, LSTMs for NLP, and so on). These workloads need to run with different optimizations – different in terms of distribution strategy, on different hardware chips, etc.

A management platform is required to bridge the gap between the different AI workloads and the various hardware chips and run a really efficient and fast AI computing platform. Run:AI’s system runs all an organization’s AI workloads concurrently, and therefore can apply macro-optimizations like allocating resources among the various workloads”.

Geller explained that Run:AI uses graph analysis coupled with a unique hardware modeling approach to handle such optimizations and manage a large set of workloads. This, he said, allows the platform to understand the computational complexity of the workloads, matching the best hardware configuration to each task while taking into account business goals and pre-defined cost and speed policies.

Geller added that Run:AI also automatically distributes computations over multiple compute resources using hybrid data/model parallelism, treating many separate compute resources as though they are a single computer with numerous compute nodes that work in parallel. This approach optimizes compute efficiency and allows you to increase the size of the trainable neural network.
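
For readers unfamiliar with the terminology, the sketch below illustrates the two strategies being combined: data parallelism (splitting the batch across devices) and model parallelism (splitting the model’s layers across devices). It is a conceptual toy in plain NumPy with assumed shapes, not Run:AI’s implementation.

```python
# Conceptual sketch of hybrid data/model parallelism. This only illustrates
# the idea -- real systems do this across GPUs, not Python functions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))            # one global batch of 8 samples
W1 = rng.normal(size=(4, 6))           # layer 1 weights
W2 = rng.normal(size=(6, 2))           # layer 2 weights

# Data parallelism: split the batch across "devices"; each works on its shard.
shards = np.array_split(X, 2)

# Model parallelism: split the model across "devices" -- device A holds
# layer 1, device B holds layer 2, so activations flow between them.
def device_a(x):
    return np.maximum(x @ W1, 0)       # layer 1 + ReLU on device A

def device_b(h):
    return h @ W2                      # layer 2 on device B

outputs = [device_b(device_a(shard)) for shard in shards]
result = np.concatenate(outputs)       # gather results from all data shards
print(result.shape)                    # (8, 2), same as single-device compute
```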

Running machine learning model training workloads, however, is heavily reliant on feeding them with the data they need. In addition, people usually develop their models using TensorFlow, Keras, PyTorch, or one of the many machine learning frameworks around.

So how does this all come together – what do machine learning engineers have to do to run their model on Run:AI, and feed it the data it needs? Importantly, does it also work in the cloud – public and private? Many AI workloads run in the cloud, following data gravity.

Integrating with machine learning frameworks and data storage, on premise and in the cloud

Geller said that one of the core concepts of Run:AI is that the user doesn’t have to change workflows in order to use the system:

“Run:AI supports both private clouds and public clouds such that our solution works in hybrid/multi cloud environments. The company works closely with VMware (technology partner) and with AWS in order to maximize resource utilization and minimize costs.

Run:AI can operate with Docker containers pre-built by the user, containers pre-built by the Run:AI team, or on bare metal. Most of Run:AI optimizations can be applied to any containerized workload running with any framework. The low-level system that parallelizes a single workload to run on multiple resources can be applied to graph-based frameworks, currently supporting TensorFlow and Keras in production and soon PyTorch as well.

Data is streamed to the compute instance either via containerized entry point scripts, or as part of the training code running on bare metal hardware. Data can be stored in any location including cloud storage in public clouds and network file systems in private clouds”.
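
As a rough picture of what such a containerized workload might look like, here is a minimal, hypothetical Keras training script of the kind that could serve as a container entry point. The DATA_DIR environment variable, the train.npz file name, and the synthetic fallback are assumptions for illustration; nothing here is Run:AI-specific.

```python
# A minimal, hypothetical training entry point of the kind that could be
# packaged into a Docker container and handed to a scheduler.
import os
import numpy as np
import tensorflow as tf

data_dir = os.environ.get("DATA_DIR")  # e.g. a mounted volume (assumed convention)
if data_dir:
    data = np.load(os.path.join(data_dir, "train.npz"))  # assumed file layout
    X, y = data["X"], data["y"]
else:
    # Synthetic fallback so the sketch runs without any external data.
    X = np.random.normal(size=(1024, 32)).astype("float32")
    y = np.random.randint(0, 2, size=(1024,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=128)
```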

Again, this made us wonder. As Run:AI claims to work close to the metal, it seemed to us like a different model, conceptually, from the cloud where the idea is to abstract from the hardware, and use a set of distributed nodes for compute and storage. Plus, one of the issues with Docker / Kubernetes at this time is that (permanent & resilient) data storage is complicated.

In most cases, Geller said, data is stored in cloud storage like AWS S3 and pipelined to the compute instance:

“The data pipeline typically includes a phase of streaming the data from the cloud storage to the compute instance and a preprocessing phase of preparing the data to be fed to the neural net trainer. Performance degradation can occur in any of these phases.

The Run:AI system accounts for data gravity and optimizes the data streaming performance by making sure the compute instance is as near as possible to the data storage. The low-level features of the Run:AI system further analyze the performance of the data pipeline, alerting users on bottlenecks either in the data streaming phase or in the preprocessing step while providing recommendations for improvement”.
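
The two phases Geller describes, streaming and preprocessing, map naturally onto a standard tf.data input pipeline. The sketch below is illustrative only: the S3 path and record schema are assumptions, and reading s3:// paths requires a TensorFlow build with S3 filesystem support.

```python
# Illustrative tf.data pipeline with the two phases described above:
# streaming records in, then preprocessing them before they reach the trainer.
import tensorflow as tf

files = tf.data.Dataset.list_files("s3://my-bucket/train/*.tfrecord")  # assumed path

def preprocess(serialized):
    # Assumed record schema: a 32-float feature vector and an integer label.
    features = tf.io.parse_single_example(
        serialized,
        {"x": tf.io.FixedLenFeature([32], tf.float32),
         "y": tf.io.FixedLenFeature([], tf.int64)},
    )
    return features["x"], features["y"]

dataset = (
    tf.data.TFRecordDataset(files, num_parallel_reads=tf.data.AUTOTUNE)  # streaming phase
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)                # preprocessing phase
    .batch(256)
    .prefetch(tf.data.AUTOTUNE)  # overlap streaming/preprocessing with training
)
```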

Geller added that there is also an option for advanced users to tweak the results of the Run:AI layer, manually determining the amount of resources and the distribution technique; the workload is then executed accordingly.

Does Run:AI have legs?

Run:AI’s core value proposition seems to be acting as the management layer above AI chips. Run:AI makes sense as a way of managing workloads efficiently across diverse infrastructure. In a way, Run:AI can help cloud providers and data center operators hedge their bets: rather than putting all their eggs in one AI chip vendor’s basket, they can have a collection of different chips and use Run:AI as the management layer to direct workloads to the hardware they are best suited for.

Promising as this may sound, however, it may not be everyone’s cup of tea. If your infrastructure is homogeneous, built around a single AI chip, it’s questionable whether Run:AI could deliver better performance than the chip’s own native stack. We asked whether there are any benchmarks: could Run:AI be faster than Nvidia, Graphcore, or Habana, for example? It seems that at this point there are no benchmarks that can be shared.

Run:AI founders Omri Geller and Dr. Ronen Dar. Run:AI is in private beta with paying customers and is working with AWS and VMware. General availability is expected in Q4 2019.

Geller, who co-founded Run:AI with Dr. Ronen Dar and Prof. Meir Feder in 2018, said that there are currently several paying customers from the retail, medical, and finance verticals. These customers use Run:AI to speed up their training and simplify their infrastructure.

He went on to add that customers also use the system as an enabler to train big models that they couldn’t train before because the models didn’t fit into a single GPU’s memory: “Our parallelization techniques can bypass these limits. Customers are able to improve their model accuracy when accelerating their training processes and training bigger models”.

Run:AI’s business model is based on subscription and the parameters are a combination of the number of users and the number of experiments. The cost depends on the size and volume of the company, Geller said. Currently Run:AI is in private beta, with general availability expected in 6 months.

Content retrieved from: https://www.zdnet.com/article/take-your-ai-and-run-it-on-the-super-fast-software-stack-of-the-future/.

Graph analytics for the people: No code data migration, visual querying, and free COVID-19 analytics by TigerGraph

Graph databases and analytics are getting ever more accessible and relevant

As we’ve been keeping track of the graph scene for a while now, a couple of things have become apparent. One, graph is here to stay. Two, there is still some way to go to make the benefits of graph databases and analytics widely available and accessible. Add to this a newfound timeliness, as leveraging connections is where this technology shines, and you have the backdrop for today’s announcement by TigerGraph.

Graph is here to stay

Even though graph databases have a history that goes back at least 20 years, it is only in the last couple of years that they have started getting into the limelight. The realization that the way data points are connected can bring more insight, and value, than sheer data volume seems to have hit home. At the same time, graph technology has been making progress, while the limitations of incumbent relational databases when it comes to leveraging connections are now well understood.

This has led to a perfect storm for graph databases, which went from a niche market to the fastest-growing segment in data management in almost no time. Gartner, for example, predicted last year that this space will see 100% compound annual growth through 2022. Every single industry executive we’ve spoken to seems to confirm this: 2019 has been a very good year indeed.

TigerGraph is no exception. A relative newcomer in this space, it emerged from stealth in 2017; before that, however, its people had been working on the platform since 2012. This is starting to pay off, according to TigerGraph VP of Marketing Gaurav Deshpande.

Leveraging connections is where graph databases shine

TigerGraph was one of the first graph database vendors to announce a fully managed cloud service in late 2019. In a call with ZDNet, Deshpande noted that even though the cloud-based version of the platform has only been generally available for a short while, it is seeing rapid uptake.

During the past four months alone, TigerGraph notes, more than 1,000 developers have harnessed the power of graph to build applications on top of TigerGraph Cloud, the company’s graph database-as-a-service. This seems to be in line with the overall trend: data, databases, and users are all going to the cloud.

Still, this is just one piece of the puzzle graph database vendors need to solve. Being on offer in the cloud may take care of the availability part, but what about accessibility? Not everyone is an expert in graph to begin with. Even for those who are, having some equivalent of the well-established technology stack that comes with incumbent relational databases would help.

Wide availability and accessibility: Cloud, no code, visual tools

This is where TigerGraph’s announcement comes into play. The first part of what TigerGraph dubs version 3.0 of its platform does not seem particularly revolutionary, but we get the feeling it will be appreciated by many: the capability to automatically migrate data from relational databases to TigerGraph, without the need to build a data pipeline or create and map to a new graph schema.
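
Conceptually, such a migration maps table rows to vertices and foreign keys to edges. The sketch below illustrates that mapping with Python’s built-in sqlite3 and networkx on an invented two-table schema; it is only a conceptual illustration, not TigerGraph’s migration tool.

```python
# A conceptual illustration of relational-to-graph mapping:
# rows become vertices, foreign keys become edges. Schema is invented.
import sqlite3
import networkx as nx

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 42.0), (11, 1, 7.5), (12, 2, 99.0);
""")

G = nx.DiGraph()
for cid, name in con.execute("SELECT id, name FROM customers"):
    G.add_node(("customer", cid), name=name)                   # rows -> vertices
for oid, cid, total in con.execute("SELECT id, customer_id, total FROM orders"):
    G.add_node(("order", oid), total=total)
    G.add_edge(("customer", cid), ("order", oid), type="PLACED")  # FK -> edge

print(G.number_of_nodes(), G.number_of_edges())  # 5 vertices, 3 edges
```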

As seen in a demo released by TigerGraph, the migration seems pretty painless indeed. Deshpande commented that this was a feature TigerGraph has been working on for a while, and now the time finally came to release it. Initial customer feedback has been pretty positive, too.

Although TigerGraph is not the only graph database vendor to offer some way of importing data, other options often require an intermediate step, i.e. exporting to CSV format. This adds complexity and cost to the process, as opposed to what seems like a pretty smooth import process for TigerGraph 3.0.

The flip side of this, however, is a lack of transparency and control. At this point, there is no way for users to control the process. This means that built-in rules for mapping and schema creation apply. This may be more of a problem than it seems, especially for complex domains.

Clarity in perception and navigation, as well as query performance, depends very much on an appropriate graph data model. Depending on your domain, an out-of-the-box graph data model may or may not be appropriate. Of course, it’s a start. As Deshpande pointed out, users can always intervene to fine-tune their graph data model using TigerGraph’s visual IDE.

Over time, Deshpande said, the ability to control the process will be added. For the time being, however, users need to be aware of this and be ready to intervene as needed. But that’s not all they may want to use TigerGraph’s visual IDE for. Overall, visual environments are a great boost for developer accessibility and productivity, and graph database vendors have been adding those to their arsenals, too.

TigerGraph 3.0, however, goes one step further. In an industry first, to the best of our knowledge, TigerGraph 3.0 introduces visual querying capabilities for its IDE. In other words: users can now explore their graphs, and formulate and execute queries against the database, without actually learning TigerGraph’s query language or writing code.

This patent-pending capability will probably attract some attention, and it goes some way toward mitigating one of the issues with graph databases. While efforts to produce a universally standardized graph query language are underway, no-code querying is an interesting capability in its own right.

Leveraging connections in COVID-19 times

TigerGraph 3.0 introduces more improvements, namely support for distributed environments in its cloud and user-defined indexing. The former means that graph deployments around the globe can now scale up more effectively, while the latter means that users can speed up database performance for specific queries.

Last but not least is an initiative that comes at a time when graph analytics could really help society at large. As the spread of COVID-19 has reached pandemic status, according to the WHO, one of the key aspects of tackling the virus is identifying contacts for every individual who has tested positive.

This essentially comes down to leveraging connections, as the name of the game is to identify people with whom COVID-19 positive cases have been in touch. The idea is to pinpoint potential upstream sources the virus may have been acquired from while keeping an eye on potential downstream contacts to try and contain further contamination.

This is exactly the type of analytics where graph shines. Mastercard, the Bill & Melinda Gates Foundation, and Wellcome have launched an initiative to speed development of and access to therapies for COVID-19. TigerGraph took note and would like to lend a helping hand to this and all other initiatives aimed at stopping the spread of the coronavirus and improving treatment for it worldwide.

For this reason, TigerGraph is offering free Cloud and Enterprise Edition use for applications with massive data or high computation needs. Local, state, and federal agencies, corporations, and non-profits can immediately use the free tier on TigerGraph Cloud to load data and perform advanced analysis.

Graph algorithms may be of help there. For example, Community Detection can identify clusters of virus infection, PageRank can identify super-spreading events, and Shortest Path may help understand the origin and impact of spread in a particular area or community.
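
As a small, self-contained illustration of those three algorithms, the sketch below runs them on an invented contact graph using networkx; a TigerGraph deployment would express the same analyses in its own query language rather than Python, and the contact data here is made up.

```python
# Toy contact-tracing graph illustrating the three algorithms named above.
import networkx as nx

contacts = [("Ann", "Bob"), ("Bob", "Cara"), ("Bob", "Dan"), ("Cara", "Dan"),
            ("Dan", "Eve"), ("Eve", "Fay"), ("Gus", "Hal")]
G = nx.Graph(contacts)

# Community detection: clusters of people in close mutual contact.
communities = nx.algorithms.community.greedy_modularity_communities(G)
print([sorted(c) for c in communities])

# PageRank: highly connected individuals, a rough proxy for super-spreaders.
print(nx.pagerank(G))

# Shortest path: a plausible chain of contact between two cases.
print(nx.shortest_path(G, "Ann", "Fay"))
```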

TigerGraph’s own founding team has roots in China, and some of its executives narrowly escaped being stranded in Europe due to the recently imposed travel ban. Perhaps this served as motivation for TigerGraph, but in any case, at times like these, everyone should chip in as much as they can.

Content retrieved from: https://www.zdnet.com/article/graph-analytics-for-the-people-no-code-data-migration-visual-querying-and-free-covid-19-analytics-by-tigergraph/.

AI applications, chips, deep tech, and geopolitics in 2019: The stakes have never been higher

The state of AI in 2019 report analysis with report author, AI expert, and venture capitalist Nathan Benaich continues. High-profile applications, funding, and the politics of AI

It’s the time of the season for AI reports. As we noted earlier, the last few days saw the publication of not one, but three top-notch reports on the state of AI. All of them were authored by people working at VC firms who keep a close eye on all things AI: from technological breakthroughs to implications for the economy and society at large.

Having covered the key technological breakthroughs already, we extend the discussion on the implications of AI with Nathan Benaich, co-author of the State of AI Report 2019 and founder of Air Street Capital and RAAIS. Benaich co-authored the report with AI angel investor and UCL IIPP visiting professor Ian Hogarth.

Benaich and Hogarth have also drawn on the expertise of prominent figures such as Google AI researcher and Keras deep learning framework lead François Chollet, VC and AI thought leader Kai-Fu Lee, and Facebook AI researcher Sebastian Riedel.

AI applications: RPA and autonomous vehicles

Much of the Q&A with Benaich focused on the geopolitics of AI. That’s not to say Benaich and Hogarth’s report does not cover topics such as talent, infrastructure, or applications — it does, extensively. But with such a full plate, one has to pick.

As far as talent is concerned, there is a consensus among experts: AI talent is highly sought after (and rewarded) and investment in training is on the rise. Nonetheless, the talent shortage in AI continues to be a major bottleneck to the broad adoption of the technology across the industry.

One approach to mitigating this is AutoML, that is, using machine learning to automate an increasing part of the process of applying machine learning, in a sort of recursive fashion. In the report, AutoML is shown to design neural networks from scratch that are better suited than human-designed ones to running on resource-constrained mobile devices, for example.
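
A minimal sketch of the AutoML idea is a random search over network architectures: sample candidate networks, train each briefly, and keep the best. The toy below uses Keras on synthetic data and only illustrates the concept; it is not the neural architecture search method evaluated in the report.

```python
# Toy "AutoML" loop: machine learning (random architecture search) choosing a
# neural network architecture instead of a human. Data is synthetic.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("int32")  # toy binary target

def sample_architecture():
    # Randomly choose depth and layer widths -- the "search space".
    depth = rng.integers(1, 4)
    return [int(rng.choice([16, 32, 64])) for _ in range(depth)]

def build_and_score(widths):
    layers = [tf.keras.Input(shape=(20,))]
    layers += [tf.keras.layers.Dense(w, activation="relu") for w in widths]
    layers += [tf.keras.layers.Dense(1, activation="sigmoid")]
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(X, y, epochs=3, batch_size=64, verbose=0, validation_split=0.2)
    return hist.history["val_accuracy"][-1]   # score = validation accuracy

# Sample a handful of candidate architectures and keep the best-scoring one.
best = max((sample_architecture() for _ in range(5)), key=build_and_score)
print("best architecture found:", best)
```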

The macro picture remains hot. Funds invested in AI grew by almost 80 percent in 2018 compared to 2017, exceeding $27 billion per year, with North America leading the way at 55 percent market share. Some of the application areas this capital has been pouring into, as emphasized in the report, are robotics (mainly in manufacturing and logistics), RPA (Robotic Process Automation), healthcare, demand forecasting, autonomous vehicles, and text analysis.

RPA, which is not related to robotics, is “an overnight enterprise success, 15 years in the making”, as the report states. Benaich noted that industry adoption of RPA appears to be growing at a clip, mostly as a result of the benefits it delivers to enterprises: Reduced operating costs and increased operational nimbleness to compete with new entrants.

RPA companies saw massive funding rounds: UiPath raised $800 million across two rounds in 2018 and one round in 2019, while Automation Anywhere raised $550 million across two rounds in 2018. As mentioned in FirstMark’s report, however, there are reasons to be cynical about RPA: “RPA, at this stage at least, is more about automation than intelligence, more about rules-based solutions than AI.” Benaich agrees.

Another high-profile area of application is autonomous vehicles (AV). As Benaich and Hogarth note, self-driving cars are now a game for multi-billion-dollar balance sheets. They list spending by the likes of Waymo, Uber, Cruise, and Ford to make their case. But despite growth in investment and live AV pilots in California and elsewhere, some players have missed launch dates, while others remain silent.

Benaich and Hogarth point out that while the average Californian drives 14,435 miles per year, only 11 of the 63 companies had driven more than that in 2018. Waymo drove more than one million miles in 2018, nearly three times as much as second-best GM Cruise and 16 times as much as third-best Apple. As for Tesla: it does not report its disengagement metrics to the California DMV.

Allegedly, however, Tesla has more data than any of the other players, giving it a leg up in the race. Tesla also designs its own AI chip to power the compute needed on board. This is another red hot area for innovation, as it is driving the capabilities of AI. We have covered some of the pioneers in this space, such as Graphcore, Habana, and GreenWaves.

AI chips, deep tech, geopolitics: China’s rapid growth

Benaich believes the timing is right to develop novel chips that are purpose-built for training and inference of AI models:

“We think this is true because of industry adoption of AI models for several large-scale use cases, especially in consumer internet. As a result, chip designers have a clear customer to design for. Designing chips, however, is an endeavor that is very capital intensive and requires significant domain experience that can only be acquired over many many years.”

This is also closely linked to geopolitics, as per Benaich’s reasoning. Companies building this kind of “deep” or “core” sector-agnostic technology comprise a tenth of AI startups, but they punch above their weight, attracting a fifth of venture capital investment:

“When it comes to ‘deep tech’ (for example, semiconductors), the US (along with other key countries like South Korea and the UK) remains dominant. This means that China remains heavily dependent on imports for these kinds of technologies. Indeed, China spends seven-times more money on importing semiconductors than it does selling them for export.”

As Ian Hogarth argued in his AI Nationalism essay, “China will certainly try to close this critical trade deficit, and the $140 billion ‘Big Fund’ demonstrates the commitment the government has to narrow the deficit. We also believe that China’s leading technology companies will ramp up their acquisition of deep tech companies from Europe.”

China is making rapid progress in AI, having more or less caught up with the West

Benaich and Hogarth also include predictions in their report. Among their 2018 predictions was a merger/acquisition north of $5 billion that would subsequently be blocked. While this has yet to materialize, the authors still back their predictions. Benaich pointed out that the Chinese technology ecosystem is growing extremely rapidly:

“Of particular note is the ecosystem’s focus on nurturing the growth of AI-first technology companies. By recent counts, China is home to the largest number of AI startups valued over $1 billion. The pace with which these AI startups acquire scale is arguably second to none in the world.

With regards to fundamental research progress, we can consider a) the number of papers accepted into leading academic research conferences, b) the citation count of these papers, and c) the international ranking of universities for related courses such as computer science and engineering.

Looking at the first and second measures, China’s contribution to global AI research output is on an upswing. For the third measure, we can see that US and European universities still account for the overwhelming constituency of the top 20 institutions in global rankings. Having said that, Tsinghua University and Peking University are both in the top 20 for computer science and engineering courses.”

Will Europe, or the UK, be the AI R&D lab of the world?

Benaich said that although China is lagging by some measures, the ecosystem is undoubtedly on an upswing in the right direction with immense resources driving its growth. He also noted there is already a firm decoupling between the consumer internet within China and outside of China: Alibaba, Tencent, and Baidu are orders of magnitude more influential in China than Google, Amazon, or Facebook.

This is why Benaich and Hogarth have dedicated an entire section of their report to China. Another part is dedicated to AI and politics. Since Benaich and Hogarth are both based in London, UK, Benaich’s take on European and British prospects is of particular interest:

“We are in a period of incredible transformation. The economy is changing. Governance is in flux. And the only way we can tackle our toughest societal challenges is with the help of powerful technologies such as AI — workable, safe, ethical AI. That is where Europe’s unique strengths lie, at the fulcrum between China and America’s AI rivalry.”

Europe’s unique strengths lie at the fulcrum between China and America’s AI rivalry, argues Benaich, who also sees a role for post-Brexit UK

Benaich believes the European technology industry has flourished over the past decade, and a new ecosystem with both sophisticated and sustainable financing is emerging:

“This will have a major impact on Europe and Britain’s AI fortunes for years to come. The context is important. At a time of Brexit and a US-China trade war, everyone wonders what Europe’s — and in particular, the UK’s — role will be in the global economy.

Some count it out. Others argue that it will be a leader in ethical business, leveraging the EU’s tough privacy rules implemented last year. But the reality will probably be different: Britain looks set to be the AI R&D lab of the world.

In the past, the main driver was the excellent universities like Oxbridge, Imperial and UCL. They trained the talent that now works at leading US technology companies. But now there’s much more happening. In the last 18 months, US technology companies have made deep inroads into the UK ecosystem to strengthen their AI products.”

The stakes have never been higher

Benaich pointed to Lyft acquiring Blue Vision Labs for 3D map creation, Niantic acquiring Matrix Mill for real-world mobile AR, Facebook acquiring Bloomsbury.AI for natural language expertise, and DeepMind Health folding into the parent company’s healthcare unit.

What’s more, he went on to add, large financing rounds are increasingly available to the best technology companies building intelligent systems in their products. Graphcore secured a $200 million Series D, Darktrace closed a $50 million Series E, and UiPath raised close to $1 billion in three rounds over 12 months.

Naturally, being part of this ecosystem himself, Benaich highlighted that new venture firms built from the ground up for the AI community exist to scout and support exceptional AI talent in Europe. The goal? Building globally competitive companies driven by intelligent systems. Air Street Capital would be a prime example, and it looks like Benaich is on a mission.

In addition to Air Street Capital, he has also founded the Research and Applied AI Summit, which he dubs “a global community of AI entrepreneurs, researchers, and operators who are focused on the science and applications of AI technology.”

Benaich said that over five years, they had attracted founders and leaders from many US technology companies (such as François Chollet from Google Brain and Chris Ré from Stanford, among others) to speak in London for the first time. They have also showcased, early on, founders from Graphcore, SwiftKey, Bloomsbury.AI, Benevolent.AI, and LabGenius who have since achieved significant milestones or exited their companies.

Lastly, Benaich’s non-profit, the RAAIS Foundation, exists to support education and research in AI for the common good. The RAAIS Foundation is the first backer of Open Climate Fix and OpenMined, which work on climate change and privacy-preserving AI, respectively.

The reason they are doing all of this? “The stakes have never been higher.”

Content retrieved from: https://www.zdnet.com/article/ai-applications-chips-deep-tech-and-geopolitics-in-2019-the-stakes-have-never-been-higher/.
