Jeffrey Katzenberg’s streaming service Quibi books $100M in ad sales ahead of launch

Quibi, the short-form video platform founded by Jeffrey Katzenberg, hasn’t even launched but has already booked $100 million in advertising sales, according to a report from The Wall Street Journal this morning. The company, which aims to cater to younger viewers with premium content chopped up into “quick bites,” says its advertisers already include Procter & Gamble, PepsiCo, Anheuser-Busch InBev, Walmart, Progressive and Google.

It still has around $50 million in unsold ad inventory ahead of launch.

It’s hard to imagine how a service like Quibi will compete in a market dominated by paid streamers like Netflix and free services like YouTube — both preferred by a younger demographic. But Quibi has been raising massive amounts of money to take them on. In May, it was reported that Quibi was going after another billion in funding, on top of the billion it had already raised.

Beyond the industry’s big bet on Katzenberg himself, Quibi has booked big-name talent, including Steven Spielberg and Guillermo del Toro, and is filming a show about Snapchat’s founding, which may draw in millennial viewers.

But it sounds like Quibi may also be relying on gimmicks — like Spielberg’s horror series that you can only watch at night (when it’s dark outside). Not to mention the very idea that Quibi thinks it’s invented a new kind of media that falls between today’s short-form and traditional TV-length or movie-length content found elsewhere.

On Quibi, shows are meant to be watched on the go, in segments that run around 7 to 10 minutes. Some of the content will be bigger, more premium productions, while other shows will be closer to what you’d find on cable TV or in lower-cost daily news programming.

The service will launch April 6, 2020, with two tiers. A $4.99 per …

Humanising Autonomy pulls in $5M to help self-driving cars keep an eye on pedestrians

Pretty much everything about making a self-driving car is difficult, but among the most difficult parts is making sure the vehicles know what pedestrians are doing — and what they’re about to do. Humanising Autonomy specializes in this, and hopes to become a ubiquitous part of people-focused computer vision systems worldwide.

The company has raised a $5.3 million seed round from an international group of investors on the strength of its AI system, which it claims outperforms humans and works on images from practically any camera you might find in a car these days.

HA’s tech is a set of machine learning modules trained to identify different pedestrian behaviors — is this person about to step into the street? Are they paying attention? Have they made eye contact with the driver? Are they on the phone? Things like that.
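
HA hasn’t published its module boundaries or model internals, so purely as a loose illustration of the “set of modules” idea, here is a minimal Python sketch in which hypothetical per-frame behavioral cues (every name and weight below is invented) are fused into a single crossing-intent score:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PedestrianCues:
    """Per-frame outputs that separate behavior modules might emit (hypothetical schema)."""
    moving_toward_road: bool
    near_curb: bool
    looking_at_vehicle: bool
    on_phone: bool

def crossing_intent(frames: List[PedestrianCues]) -> float:
    """Fuse modular cues into a rough 'about to step into the street' score.

    The weights are invented for illustration; a production system would
    learn this fusion rather than hand-code it.
    """
    if not frames:
        return 0.0
    latest = frames[-1]
    score = 0.0
    if latest.moving_toward_road:
        score += 0.5
    if latest.near_curb:
        score += 0.2
    if not latest.looking_at_vehicle:  # no eye contact with the driver
        score += 0.2
    if latest.on_phone:  # distracted pedestrian
        score += 0.1
    return min(score, 1.0)
```

One appeal of a modular design like this is that each cue (eye contact, phone use, trajectory) can be trained and evaluated independently before being combined downstream.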

The company credits the robustness of its models to two main things. First, the variety of its data sources.

“Since day one we collected data from any type of source — CCTV cameras, dash cams of all resolutions, but also autonomous vehicle sensors,” said co-founder and CEO Maya Pindeus. “We’ve also built data partnerships and collaborated with different institutions, so we’ve been able to build a robust data set across different cities with different camera types, different resolutions and so on. That’s really benefited the system, so it works in nighttime, rainy Michigan situations, etc.”

Notably, their models rely only on RGB data, forgoing the depth information that might come from lidar, another common sensor type. But Pindeus said that type of data isn’t by any means incompatible; it just isn’t as plentiful or relevant as real-world, visible-light footage.

In particular, HA was careful to acquire and analyze footage of accidents, because these are especially informative cases of AV failure …

Only 72 hours left to save big on passes to Disrupt SF 2019

Seventy-two hours to save. That’s how much time remains on super-early-bird pricing for passes to Disrupt San Francisco 2019, which takes place October 2-4. If you plan to attend TechCrunch’s flagship event dedicated to bold, early-stage startuppers — and why wouldn’t you — you have until June 21 at 11:59 p.m. (PT) to score the best price. Depending on the type of pass you buy, you can save serious cheddar — up to $1,800. Choose the budget-friendly payment plan option during checkout and you can pay for your pass over time. It’s all geared to be as easy on the purse strings as possible, so buy your pass now and save.

Moscone North Convention Center will be home to more than 10,000 attendees from around the world — including startups, exhibitors and media outlets — for three jam-packed days of programming across 14 categories. It’s where you’ll find both the present and future of technology under one roof.

Every Disrupt event features an amazing lineup of speakers. We’re talking interviews and panel discussions with some of the world’s most influential names in tech and investing — and that tradition continues at Disrupt SF 2019. Here’s just one example of the presentations you’ll experience.

Cybersecurity ranks as a major concern that affects everyone — consumers, businesses and governments. We’re thrilled that Jeanette Manfra, assistant director for cybersecurity at the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), will grace the stage. One of the government’s most experienced cybersecurity civil servants, Manfra currently leads the effort to protect and strengthen our nation’s vital infrastructure, including the power grid and water supplies. We can’t wait to hear what she has to say about the government’s cybersecurity efforts.

Experience Startup Alley’s ocean of opportunity. You’ll find …

CMU researchers use computer vision to see around corners

Future autonomous vehicle and other machine intelligence systems might not need line-of-sight to gather incredibly detailed image data: New research from Carnegie Mellon University, the University of Toronto and University College London has devised a technique for “seeing around corners.”

The method uses special sources of light, combined with sensors and computer vision processing, to infer or reconstruct imagery in far greater detail than has been possible previously, without having photographed it or otherwise “viewed” it directly.

There are some limitations — so far, researchers working on the project have only been able to use this technique effectively for “relatively small areas,” according to CMU Robotics Institute Professor Srinivasa Narasimhan.

That limitation could be mitigated by employing this technique alongside others used in the field of non-line-of-sight (or NLOS) computer vision research. Some such techniques are already in use on the market; Tesla’s Autopilot system (and other driver-assist technologies), for instance, makes use of reflected or bounced radar signals to see around the cars immediately in front of the Tesla vehicle.

The technique used in this new study is actually similar to the lidar sensing used in many autonomous vehicles (though Tesla famously eschews laser-based vision systems in its tech stack). CMU and its partner institutions use ultrafast laser light in their system, bouncing it off a wall to illuminate an object hidden around a corner.

Sensors then capture the reflected light when it bounces back, and researchers measure and calculate how long it took for the reflected light to return to the point of origin. Taking a number of measurements, and using information about the target object’s geometry, the team was then able to reconstruct the objects with remarkable accuracy and detail. Their method was so effective that it even works …
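
The excerpt doesn’t reproduce the reconstruction math, but the measurement it rests on is ordinary time-of-flight arithmetic: multiply the round-trip travel time by the speed of light to get the total path length, which constrains where the hidden surface can sit. A toy Python sketch, with purely illustrative numbers:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0  # meters per second

def path_length_m(round_trip_time_s: float) -> float:
    """Total distance a light pulse traveled between emission and detection."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s

# A pulse returning 20 nanoseconds after emission traveled about 6 meters
# along the wall -> hidden object -> wall -> sensor path. Each timing confines
# the hidden surface to an ellipsoid; many timings together pin down its shape.
print(path_length_m(20e-9))  # ~5.996 meters
```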

Text IQ, a machine learning platform for parsing sensitive corporate data, raises $12.6M

Text IQ, a machine learning system that parses and understands sensitive corporate data, has raised $12.6 million in Series A funding led by FirstMark Capital, with participation from Sierra Ventures.

Text IQ started as co-founder Apoorv Agarwal’s Columbia thesis project titled “Social Network Extraction From Text.” The algorithm he built was able to read a novel, like Jane Austen’s “Emma,” for example, and understand the social hierarchy and interactions between characters.
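
As a loose, deliberately crude approximation of that idea (the thesis detected actual interactions between characters; this only counts sentence-level co-occurrence, and the example text is invented), a character graph can be sketched in a few lines of Python:

```python
import re
from collections import Counter
from itertools import combinations
from typing import List

def cooccurrence_graph(text: str, characters: List[str]) -> Counter:
    """Count how often pairs of named characters share a sentence."""
    edges: Counter = Counter()
    for sentence in re.split(r"[.!?]+", text):
        present = [name for name in characters if name in sentence]
        for a, b in combinations(sorted(present), 2):
            edges[(a, b)] += 1
    return edges

sample = "Emma smiled at Knightley. Harriet asked Emma about Elton."
print(cooccurrence_graph(sample, ["Emma", "Harriet", "Knightley", "Elton"]))
# Counter({('Emma', 'Knightley'): 1, ('Elton', 'Emma'): 1, ...})
```

Edge weights in a graph like this hint at who interacts with whom, and from there at the social hierarchy the algorithm was built to recover.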

This people-centric approach to parsing unstructured data eventually became the kernel of Text IQ, which helps corporations find what they’re looking for in a sea of unstructured, and highly sensitive, data.

The platform started as a tool used by corporate legal teams. Lawyers often have to manually look through troves of documents and conversations (text messages, emails, Slack, etc.) to find specific evidence or information. Even using search, these teams spend loads of time and resources looking through the search results, which usually aren’t as accurate as they should be.

“The status quo for this is to use search terms and hire hundreds of humans, if not thousands, to look for things that match their search terms,” said Agarwal. “It’s super expensive, and it can take months to go through millions of documents. And it’s still risky, because they could be missing sensitive information. Compared to the status quo, Text IQ is not only cheaper and faster but, most interestingly, it’s much more accurate.”

Following success with legal teams, Text IQ expanded into HR/compliance, giving companies the ability to retrieve sensitive information about internal compliance issues without a manual search. Because Text IQ understands who a person is relative to the rest of the organization, and learns that organization’s “language,” it can more thoroughly extract what’s relevant to the inquiry from all that unstructured data in Slack, email, …