Digital Ocean’s Kubernetes service is now generally available

Like any ambitious cloud infrastructure player, Digital Ocean also recently announced a solution for running Kubernetes clusters on its platform. At KubeCon + CloudNativeCon Europe in Barcelona, the company today announced that Digital Ocean Kubernetes is now generally available.

With this release, the company is also bringing the latest Kubernetes release (1.14) to the platform, and developers who use the service will be able to schedule automatic patch version upgrades, too.

Now that it’s generally available, Digital Ocean is also bringing the service to all of its data centers around the world and introducing a few new features. These include a new guided configuration experience that walks users from provisioning to deploying clusters. The company is also introducing advanced health metrics so developers can see what’s happening in their clusters, including data about pod deployment status, CPU and memory usage, and more.

It’s also launching new open APIs so that third-party tools can more easily integrate support for Digital Ocean Kubernetes into their own solutions.
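To sketch what such an integration could look like, the example below calls DigitalOcean’s v2 REST API to list the Kubernetes clusters on an account. The endpoint path, response shape and token handling are assumptions based on DigitalOcean’s published API conventions rather than details from this announcement.

```python
# Minimal sketch: listing DigitalOcean Kubernetes clusters over the public REST API.
# Assumes the v2 endpoint /v2/kubernetes/clusters and a personal access token with
# read scope; both are stand-ins here, not confirmed by the article.
import os
import requests

API_BASE = "https://api.digitalocean.com/v2"
TOKEN = os.environ["DIGITALOCEAN_TOKEN"]  # placeholder: supply your own token

def list_kubernetes_clusters():
    resp = requests.get(
        f"{API_BASE}/kubernetes/clusters",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("kubernetes_clusters", [])

if __name__ == "__main__":
    for cluster in list_kubernetes_clusters():
        print(cluster["name"], cluster.get("region"), cluster.get("version"))
```

A third-party dashboard or CI tool could build on the same call to discover clusters before deploying to them.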

Soon, the company will also launch a marketplace of 1-click apps for Kubernetes that will make it far easier for its users to deploy applications into a cluster. This feature will be based on the open-source Helm project, which is already the de facto standard for Kubernetes package management.
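Because the upcoming marketplace is built on Helm, a “1-click” install presumably boils down to a Helm release against the target cluster. Here is a minimal sketch of that underlying workflow, assuming Helm 3 is on the PATH and kubeconfig already points at a DigitalOcean Kubernetes cluster; the chart repository and chart name are illustrative examples, not marketplace listings.

```python
# Rough sketch of the Helm workflow a "1-click app" automates: add a chart repo,
# refresh it, then install a release into the cluster the current kubeconfig points at.
# Assumes Helm 3 is installed; repo and chart names below are just examples.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def install_chart(release, chart, namespace="default"):
    run(["helm", "repo", "add", "bitnami", "https://charts.bitnami.com/bitnami"])
    run(["helm", "repo", "update"])
    run(["helm", "install", release, chart,
         "--namespace", namespace, "--create-namespace"])  # --create-namespace is Helm 3

if __name__ == "__main__":
    # Example: deploy WordPress from the Bitnami chart repository.
    install_chart("my-blog", "bitnami/wordpress", namespace="web")
```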

Talk key takeaways from KubeCon 2019 with TechCrunch writers

The Linux Foundation’s annual KubeCon conference is going down at the Fira Gran Via exhibition center in Barcelona, Spain this week and TechCrunch is on the scene covering all the latest announcements.

The KubeCon/CloudNativeCon conference is the world’s largest gathering focused on Kubernetes, DevOps and cloud-native applications. TechCrunch’s Frederic Lardinois and Ron Miller will be on the ground at the event. On Wednesday at 9:00 am PT, Frederic and Ron will share what they saw and what it all means with Extra Crunch members via a conference call.

Tune in to dig into what happened onstage and off and ask Frederic and Ron any and all things Kubernetes, open-source development or dev tools.

To listen to this and all future conference calls, become a member of Extra Crunch. Learn more and try it for free.

Microsoft aims to train and certify 15,000 workers on AI skills by 2022

Microsoft is investing in certification and training for a range of AI-related skills in partnership with education provider General Assembly, the companies announced this morning. The goal is to train some 15,000 people by 2022 in order to increase the pool of AI talent around the world. The training will focus on AI, machine learning, data science, cloud and data engineering and more.

In the new program’s first year, Microsoft will focus on training 2,000 workers to transition to an AI and machine learning role. And over the full three years, it will train an additional 13,000 workers with AI-related skills.

As part of this effort, Microsoft is joining General Assembly’s new AI Standards Board, along with other companies. Over the next six months, the Board will help to define AI skills standards, develop assessments, design a career framework and create credentials for AI skills.

The training developed will also focus on filling the AI jobs currently available where Microsoft technologies are involved. As Microsoft notes, many workers today are not skilled enough for roles involving the use of Azure in aerospace, manufacturing and elsewhere. The training, it says, will focus on serving the needs of its customers who are looking to employ AI talent.

This will also include the creation of an AI Talent Network that will source candidates for long-term employment as well as contract work. General Assembly will assist with this effort by connecting its 22 campuses and the broader Adecco ecosystem to this jobs pipeline. (GA sold to staffing firm Adecco last year for $413 million.)

Microsoft cited the potential for AI’s impact on job creation as a reason behind the program, noting that up to 133 million new roles may be created by 2022 as a result of the new technologies. Of course, …

VMware acquires Bitnami to deliver packaged applications anywhere

VMware announced today that it’s acquiring Bitnami, the packaged application company that was a member of the Y Combinator Winter 2013 class. The companies didn’t share the purchase price.

With Bitnami, the company can now deliver more than 130 popular software packages in a variety of formats, such as Docker containers or virtual machines, an approach that should be attractive for VMware as it transforms into more of a cloud services company.

“Upon close, Bitnami will enable our customers to easily deploy application packages on any cloud — public or hybrid — and in the most optimal format — virtual machine (VM), containers and Kubernetes helm charts. Further, Bitnami will be able to augment our existing efforts to deliver a curated marketplace to VMware customers that offers a rich set of applications and development environments in addition to infrastructure software,” the company wrote in a blog post announcing the deal.

Per usual, Bitnami’s founders see the exit through the prism of being able to build out the platform faster with the help of a much larger company. “Joining forces with VMware means that we will be able to both double-down on the breadth and depth of our current offering and bring Bitnami to even more clouds as well as accelerating our push into the enterprise,” the founders wrote in a blog post on the company website.

Holger Mueller, an analyst at Constellation Research, says the deal fits well with VMware’s overall strategy. “Enterprises want easy, fast ways to deploy packaged applications and providers like Bitnami take the complexity out of this process. So this is a key investment for VMware that wants to position itself not only as the trusted vendor for virtualization across the hybrid cloud, but also as a trusted application delivery vendor,” …

Microsoft open-sources a crucial algorithm behind its Bing Search services

Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return results to users. By making this technology open, the company hopes developers will be able to build similar experiences in other domains where people search through vast troves of data, including retail. In this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and the AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team takes a pre-trained model and encodes that data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the …
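The article stops short of showing SPTAG’s own Python bindings, so the sketch below uses a brute-force stand-in to make the pipeline concrete: encode items as vectors, build an index, then map an incoming query to a vector and return its nearest neighbors. SPTAG’s contribution is replacing the exhaustive scan with its space-partition tree plus neighborhood graph so the same lookup scales to billions of vectors; the toy encoder here is only a placeholder for a pre-trained deep learning model.

```python
# Brute-force nearest-neighbour sketch of the vector-search pipeline described above.
# A real deployment would swap the linear scan for an SPTAG (or similar ANN) index;
# encode() is a toy placeholder for a pre-trained deep learning model.
import hashlib

import numpy as np

def encode(text: str, dim: int = 64) -> np.ndarray:
    # Deterministic pseudo-random unit vector per string, standing in for model output.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class BruteForceIndex:
    def __init__(self, items):
        self.items = list(items)
        self.matrix = np.stack([encode(it) for it in self.items])  # the "vector index"

    def search(self, query: str, k: int = 3):
        scores = self.matrix @ encode(query)  # cosine similarity, since vectors are unit length
        top = np.argsort(-scores)[:k]
        return [(self.items[i], float(scores[i])) for i in top]

if __name__ == "__main__":
    index = BruteForceIndex(["red shoes", "blue running shoes", "garden hose", "laptop bag"])
    print(index.search("sneakers", k=2))
```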