Artificial intelligence in a post-pandemic world of work and skills

Cedefop has been monitoring the adoption of artificial intelligence and new digital technologies by EU Member States, as these are becoming part of the EU’s new reality in a post-coronavirus world.

With its unique ability to identify and ‘learn’ from data patterns and to develop predictive mappings between variables – machine and deep learning – artificial intelligence (AI) has proved to be an indispensable tool in the fight against the coronavirus pandemic. AI has enabled the deployment of predictive models of potential disease contagion and containment, and has been used for screening and tracking patients.

In addition to health care purposes, AI has been deployed across the globe to improve understanding of the potential consequences of the viral infection for different sectors of the economy. Companies have increasingly relied on machine-learning-enabled systems to reengineer production and delivery in the face of massive disruption in supply chains. Policy-makers have also turned to AI technologies for their great promise in strengthening the quality of remote education delivery, at a time when schools and education systems struggle to remain accessible to learners.

Is coronavirus reinforcing automation?

Not long before the coronavirus outbreak, fears about AI and smart machines resulting in a jobless society were widespread, noted Cedefop expert Konstantinos Pouliakas in the recent symposium ‘AI in the world of work’, under the auspices of the German EU Presidency. While a 2013 University of Oxford study cautioned that about half of all jobs in advanced economies may disappear due to advancing machine learning methods, subsequent studies deconstructing jobs by task composition tended to dispel such fears of extensive job loss. An analysis using data from the first Cedefop European skills and jobs survey showed that the share of EU jobs facing very high risk of being automated by new digital technologies is close to 14%, although about two in five EU jobs still face a high probability of substantial transformation.

The coronavirus crisis has given new rise to concerns about automation in labour markets, with social distancing measures driving companies and societies to adopt new digital and data-driven technologies. Early predictions that Covid-19 will have a positive automation effect may, however, be overstated. Firms’ automation incentives may be partially offset by the lower aggregate demand in economies following the pandemic disruption, while higher uncertainty and credit constraints hold back their investment decisions. Occupations identified as ‘high-risk’ due to coronavirus exposure and social distancing have also been found to correlate weakly with those facing higher automation risk. Many of the occupations and sectors mostly affected by Covid-19 are typically in the service sector (hospitality, leisure, retail) and are heavily reliant on interpersonal skills, which are less susceptible to replacement by AI technologies.

From technical to economic feasibility

A recently published Cedefop report found that such earlier studies focus only on the technical feasibility of replacing some job tasks with machine learning algorithms. They fail to acknowledge that firms’ decisions to automate depend on a combination of factors, including the ‘business case’ for adopting new technologies, their cost, diffusion hurdles, the relative supply and price of skills and labour, uncertainty in investment decisions and shifting social attitudes.

The Cedefop report shows that, when accounting for such factors, firms that were early adopters of new technologies were more likely to experience future employment gains. The average employment decline in the group of occupations deemed ‘fully automatable’ by earlier studies has been only about 2%, which is rather feeble given that we are already a quarter to one half of the way into the timeframe in which massive job losses were predicted.

Strengthening AI in vocational education and training

These findings highlight that AI technologies may help the transition to better-quality jobs and increase demand for skills insulated from automation, such as creativity, leadership, organisational and interpersonal communication skills. Interaction with digital devices is also a key trait of occupations with lower automation risk, all the more significant in the coronavirus era given the growing need for workers to carry out their jobs remotely.

Although not without obstacles, the transition from analogue to digital vocational education and training (VET) systems is progressing steadily in EU Member States, as revealed by a new series of Cedefop thematic insights focused on VET for the future of work. Even before the coronavirus shock, several EU countries had started investing in the development of online and open learning tools and environments. As the need for distance learning increased, more have also been looking into AI technologies as a means of improving personalised learning solutions and open education resources, which can be tailored and adapted to students’ learning abilities. AI tools can also monitor learning difficulties, identify early warning signs of possible student failure, and carry out remote assessment.

The Cedefop thematic insights reports identified several key response areas addressed by EU Member States in their efforts to adapt their VET systems to AI and automation, specifically by:

  • planning for AI: adopting specific AI strategies and revising IVET and CVET strategies, developing multi-stakeholder expert groups and public-private partnerships to map AI capabilities;
  • developing AI-based learning solutions for classrooms and enterprises: innovation labs and other pilot AI projects for knowledge exchange between companies and different stakeholders;
  • learning (about) AI: teaching teachers and the public about AI capabilities via user-friendly online courses;
  • applying AI: using AI methods for the development of new skills classifications or analysis of training curricula and VET programmes based on their match/mismatch to labour market needs;
  • adapting VET systems to AI: considering the introduction of new or revised education and training curricula and programmes (such as robotics, computational thinking, machine learning, data science, cybersecurity, automation engineering);
  • coping with AI: developing continuing VET programmes to support workers affected by automation and structural labour market changes.

As part of its Digitalisation, AI and the future of work project, Cedefop will continue carrying out research and collecting comparative information on the adoption of AI and new digital technologies in EU job markets and VET systems in the post-coronavirus world. Stay tuned for the 2nd wave of the Cedefop European skills and jobs survey, which will focus on the impact of changing digital technologies and automation on the skill requirements, skill mismatches and continuing education and training of EU adult workers.

Taken from: AAAI Alert (https://www.cedefop.europa.eu/en/news-and-press/news/artificial-intelligence-post-pandemic-world-work-and-skills)

Check out examples of AI response areas in EU Member States (+UK).

The Dangers of Automating Social Programs

[Illustration: wheelchair on a crumbling path. Credit: The Verge]

Ask poverty attorney Joanna Green Brown for an example of a client who fell through the cracks and lost social services benefits they may have been eligible for because of a program driven by artificial intelligence (AI), and you will get an earful.

There was the “highly educated and capable” client who had had heart failure and was on a heart and lung transplant wait list. The questions he was presented with in a Social Security benefits application “didn’t encapsulate his issue,” and his child subsequently did not receive benefits.

“It’s almost impossible for an AI system to anticipate issues related to the nuance of timing,” Green Brown says.

Then there’s the client who had to apply for a Medicaid recertification, but misread a question and received a denial a month later. “Suddenly, Medicaid has ended and you’re not getting oxygen delivered. This happens to old people frequently,” she says.

Another client died of cancer that Green Brown says was preventable, but the woman did not know social service programs existed, did not have an education, and did not speak English. “I can’t say it was AI-related,” she notes, “but she didn’t use a computer, so how is she going to get access to services?”

Such cautionary tales illustrate what can happen when systems become automated, the human element is removed, and a person in need lacks a support system to help them navigate the murky waters of applying for government assistance programs like Social Security and Medicaid.

There are so many factors that go into an application or appeals process for social services that many people just give up, Green Brown says. They can also lose benefits when a line of questioning ends in the system without telling their whole story. “The art of actual conversation is what teases out information,” she says. A human can tell something isn’t right simply by observing a person for a few minutes; determining, for example, why they are uncomfortable, and whether it is because they have a hearing problem, or a cognitive or psychological issue.

“The stakes are high when it comes to trying to save time and money versus trying to understand a person’s unique circumstances,” Green Brown says. “Data is great at understanding who the outliers are; it can show fraud and show a person isn’t necessarily getting all benefits they need, but it doesn’t necessarily mean it’s correct information, and it’s not always indicative of eligibility of benefits.”

There are well-documented examples of bias in automated systems used to provide guidelines in sentencing criminals, predicting the likelihood of someone committing a future crime, setting credit scores, and in facial recognition systems. As automated systems relying on AI and machine learning become more prevalent, the trick, of course, is finding a way to ensure they are neutral in their decision-making. Experts have mixed views on whether they can be.

AI-based technologies can undoubtedly play a positive role in helping human services agencies cut costs, significantly reduce labor, and deliver faster and better services. Yet taking the human element out of the equation can be dangerous, according to the 2017 Deloitte report “AI-augmented human services: Using cognitive technologies to transform program delivery.”

“AI can augment the work of caseworkers by automating paperwork, while machine learning can help caseworkers know which cases need urgent attention. But ultimately, humans are the users of AI systems, and these systems should be designed with human needs in mind,” the report states. That means agencies first need to determine the biggest pain points for caseworkers and the individuals and families they serve. The report suggests asking which processes are most complex and whether they can be simplified, and which activities take the most time and whether they can be streamlined.

Use of these systems is in the early stages, but we can expect to see a growing number of government agencies implementing AI systems that can automate social services to reduce costs and speed up delivery of services, says James Hendler, director of the Rensselaer Institute for Data Exploration and Applications and one of the originators of the Semantic Web.

“There’s definitely a drive, as more people need social services, to bring in any kind of computing automation and obviously, AI and machine learning are offering some new opportunities in that space,” Hendler says.

One of the ways an AI system can be beneficial is in instances in which someone seeking benefits needs to access cross-agency information. For example, if someone is trying to determine whether they can get their parents into a government-funded senior living facility, there are myriad questions to answer. “The potential of AI and machine learning is figuring out how to get people to the right places to answer their questions, and it may require going to many places and piecing together information. AI can help you pull it together as one activity.”

One of the main, persistent problems these systems have, however, is inherent bias, because data is input by biased humans, experts say.

Just like “Murphy’s Law,” which states that “anything that can go wrong, will,” Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence, says there’s a Murphy’s Law for AI: “It’s a law of unintended consequences, because a system looks at a vast range of possibilities and will find a very counterintuitive solution to a problem.”

“People struggle with their own biases, whether racist or sexist—or because they’re just plain hungry,” he says. “Research has shown that there are [judicial] sentencing differences based on the time of day.”

Machines fall short in that they have no “common sense,” so if a data error is input, it will continue to apply that error, Etzioni says. Likewise, if there is a pattern in the data that is objectionable because the data is from the past but is being used to create predictive models for the future, the machine will not override it.

“It won’t say, ‘this behavior is racist or sexist and we want to change that’; on the contrary, the behavior of the algorithm is to amplify behaviors found in the data,” he says. “Data codifies past biases.”

Because machine learning systems seek a signal or pattern in the data, “we need to be very careful in the application of these systems,” Etzioni says. “If we are careful, there’s a great potential benefit as well.”
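To make the point concrete, here is a minimal sketch of how a model trained on biased historical decisions reproduces them. The data, the group effect, and the use of scikit-learn are all illustrative assumptions, not drawn from any system discussed in this article.

    # Minimal sketch: a model trained on biased historical decisions
    # reproduces them. All data here is synthetic and illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
    skill = rng.normal(0, 1, n)     # true qualification, same distribution for both

    # Historical approvals: equally qualified members of group B were
    # approved less often -- the past bias baked into the labels.
    p_approve = 1 / (1 + np.exp(-(skill - 1.0 * group)))
    approved = rng.random(n) < p_approve

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, approved)

    for g in (0, 1):
        rate = model.predict(X[group == g]).mean()
        print(f"predicted approval rate, group {g}: {rate:.2f}")
    # The gap between groups mirrors the historical data; the model has
    # no notion that the pattern is objectionable.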

To make AI and machine learning systems work appropriately, many cognitive technologies need to be trained and retrained, according to the Deloitte report. “They improve via deep learning methods as they interact with users. To make the most of their investments in AI, agencies should adopt an agile approach [with software systems], continuously testing and training their cognitive technologies.”

David Madras, a Ph.D. student and machine learning researcher at the University of Toronto (U of T), believes if an algorithm is not certain of something, rather than reach a conclusion, it should have the option to indicate uncertainty and defer to a human.

Madras and colleagues at U of T developed an algorithmic model that includes fairness. The definition of fairness they used for their model is based on “equalized odds,” which they found in a 2016 paper, “Equality of Opportunity in Supervised Learning,” by computer scientists from Google, the University of Chicago, and the University of Texas, Austin. According to that paper, Madras explains, “the model’s false positive and false negative rates should be equal for different groups (for example, divided by race). Intuitively, this means the types of mistakes should be the same for different types of people (there are mistakes that can advantage someone, and mistakes that can disadvantage someone).”
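A minimal sketch of what auditing a model against this equalized-odds definition might look like follows; the function and variable names are illustrative, not taken from the researchers’ code.

    import numpy as np

    def equalized_odds_gap(y_true, y_pred, group):
        """Largest across-group gap in false positive and false negative
        rates; 0.0 means equalized odds holds exactly."""
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        fprs, fnrs = [], []
        for g in np.unique(group):
            m = group == g
            fprs.append(y_pred[m & (y_true == 0)].mean())      # P(pred=1 | y=0, g)
            fnrs.append(1 - y_pred[m & (y_true == 1)].mean())  # P(pred=0 | y=1, g)
        return max(max(fprs) - min(fprs), max(fnrs) - min(fnrs))

A gap near zero means the two kinds of mistakes (those that can advantage someone and those that can disadvantage someone) fall at similar rates on each group, matching the intuition Madras describes.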

The U of T researchers wanted to examine the unintended side effects of machine learning in decision-making systems, since a lot of these models make assumptions that don’t always hold in practice. They felt it was important to consider the possibility that an algorithm could respond “I don’t know” or “pass,” which led them to think about the relationship between a model and its surrounding system.


“There is often an assumption in machine learning that the data is a representative sample, or that we know exactly what objective we want to optimize,” Madras says. That has proven not to be the case in many decision problems.

Madras acknowledges the difficulty of knowing how to add fairness to (or subtract unfairness from) an algorithm. “Firstly, unfairness can creep in at many points in the process, from problem definition, to data collection, to optimization, to user interaction.” Also, he adds, “Nobody has a great single definition of ‘fairness.’ It’s a very complex, context-specific idea [that] doesn’t lend itself easily to one-size-fits-all solutions.”

The definition they chose for their model could just as easily be replaced by another, he notes.

In terms of whether social services systems can be unbiased when the algorithm running them may have built-in biases, Madras says that when models learn from historical data, they will pick up any natural biases, which will be a factor in their decision-making.

“It’s also very difficult to make an algorithm unbiased when it is operating in a highly biased environment; especially when a model is learned from historical data, the tendency is to repeat those patterns in some sense,” Madras says.

Etzioni believes an AI system can be bias-free even when bias is input, although that is not an easy thing to achieve. An original algorithm tries to maximize consistency with data, he says, but that past data may not be the only criteria.

“If we can define a criterion and mathematically describe what it means to be free of bias, we can give that to the machine,” he says. “The challenge becomes describing formally or mathematically what bias means, and secondly, you have to have some adherence to the data. So there’s really a tension between consistency with the data, which is clearly desirable, and being bias-free.”

People are working on ways to support both consistency with the data and freedom from bias, he adds.
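One common way to express that tension, shown here as a simple sketch, is to add a fairness penalty to the usual data-fit loss. The disparity measure below is a simple rate gap (demographic disparity) rather than the equalized-odds criterion above, and the weighting is purely illustrative.

    # Sketch of the tension Etzioni describes: fit the data, but pay a
    # penalty for group disparities. lam trades accuracy for fairness.
    import numpy as np

    def penalized_objective(y_true, y_prob, group, lam=1.0):
        eps = 1e-9
        data_fit = -np.mean(y_true * np.log(y_prob + eps)
                            + (1 - y_true) * np.log(1 - y_prob + eps))
        rates = [y_prob[group == g].mean() for g in np.unique(group)]
        bias_gap = max(rates) - min(rates)   # disparity in predicted rates
        return data_fit + lam * bias_gap     # lam = 0 ignores fairness entirely

Minimizing this objective with a larger lam accepts a worse fit to (possibly biased) historical data in exchange for smaller group disparities, which is exactly the trade-off he names.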

For AI to augment the work of government caseworkers and make social programs more efficient, Etzioni says, the technical progress being made must be coupled with educating people on how to use these programs.

“Part of the problem is when a human just blindly adheres to the recommendations of the system without trying to make sense of them, and the system says, ‘It must be true,’ but if the machine’s analysis is one output and a sophisticated person analyzes it, we find ourselves in the best of both worlds.”

AI, he says, really should stand for “augmented intelligence,” where technology plays a supporting role.

“Humans are better than computers at exploring those grey areas around the edges of problems,” agrees Hendler. “Computers are better at the black-and-white decisions in the middle.”

The issue of transparency of algorithms and bias was discussed at a November 2017 conference held by the Paris-based Organization for Economic Cooperation and Development (OECD). Although several beneficial societal use-cases of AI were mentioned, researchers said the solution lies in addressing system bias from a policy perspective as well as a design perspective.

“Right now, AI is designed so as to optimize a given objective,” the researchers stated. “However, what we should be focusing on is designing AI that delivers results that are in line with peoples’ well-being. By observing human reactions to various outcomes, AI could learn through a technique called ‘cooperative inverse reinforcement learning’ what our preferences are, and then work towards producing results consistent with those preferences.”

AI systems need to be held accountable, says Alexandra Chouldechova, an assistant professor of statistics and public policy at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy.

“Systems fail to achieve their purported goals all the time,” Chouldechova notes. “The questions are: Why? Can it be fixed? Could it have been prevented in the first place?

“By being clear about a system’s intended purpose at the outset, transparent about its development and deployment, and proactive in anticipating its impact, we can hopefully reach a place where there will be fewer adverse unintended consequences.”

For the foreseeable future, Hendler believes humans and computers working together will outperform either one separately. For the partnership to work, a human must be able to understand the decision-making of the AI system, he says.

“We currently teach people to take the data and feed it into AI systems to get an ‘unbiased answer.’ That unbiased answer is used to make predictions and help people find services,” Hendler says. “The problem is, the data coming in has been chosen in various ways, and we don’t educate computer or data scientists how to know the data in your database will model the real world.”

This is certainly not a new problem. Hendler recalls the famous case of Stanislav Petrov, a Soviet lieutenant colonel whose job was to monitor his country’s satellite system. In 1983, the computers sounded an alarm indicating the U.S. had launched nuclear missiles. Instead of launching a counterattack, Petrov felt something was wrong and refused; it turned out to be a computer malfunction. AI scientists, says Hendler, should learn from Petrov.

“The real danger is people over-trusting these ‘unbiased’ AI systems,” he says. “What I’m afraid of is most people don’t understand these issues … and just will trust the system the way they trust other computer systems. If they don’t know these systems have these limitations, they won’t be looking for the alternatives that humans are good at.”

Further Reading

Madras, D., Creager, E., Pitassi, T., and Zemel, R.
Learning Adversarially Fair and Transferable Representations, 17 Feb. 2018, Cornell University Library, https://arxiv.org/abs/1802.06309

Buolamwini, J. and Gebru, T.
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research, 2018, Conference on Fairness, Accountability and Transparency. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

Dovey Fishman, T., Eggers, W.D., and Kishnani, P.
AI-augmented human services: Using cognitive technologies to transform program delivery, Deloitte Insights, 2017, https://www2.deloitte.com/insights/us/en/industry/public-sector/artificial-intelligence-technologies-human-services-programs.html

Zhao, J., Wang, T., Yatskar, M., Ordonez, V., and Chang, K.
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989, Copenhagen, Denmark, Sept. 7–11, 2017. https://pdfs.semanticscholar.org/566f/34fd344607693e490a636cdbf3b92f74f976.pdf

Tan, S., Caruana, R., Hooker, G., and Lou, Y.
Auditing Black-Box Models Using Transparent Model Distillation With Side Information, 17 Oct. 2017, Cornell University Library, https://arxiv.org/abs/1710.06169

O’Neil, C.
Weapons of Math Destruction, Crown, 2016.

Hardt, M., Price, E., and Srebro, N.
Equality of Opportunity in Supervised Learning, 11 Oct. 2016, Cornell University Library, https://arxiv.org/pdf/1610.02413.pdf


Author

Esther Shein is a freelance technology and business writer based in the Boston, MA, USA area.


©2018 ACM  0001-0782/18/10


Will 5G be necessary for self-driving cars?

27 September 2018

[Image: a person reading a book in an autonomous car. Credit: Ericsson. Caption: Will self-driving cars really need 5G to make them work?]

Proponents of 5G say it will offer ultra-fast connections, speedier data downloads, and be able to handle millions more connections than 4G mobile networks can cope with today. One use for 5G is self-driving cars, but will they really need it?

The telecoms industry envisions autonomous cars equipped with hundreds of sensors collecting and receiving information all at once over a network.

It calls this concept “Vehicle-to-everything” (V2X).

To achieve this, the car needs to detect blind spots and avoid collisions with people, animals or other vehicles on the road.

As the car drives, its sensors will pick up information about:

  • weather and road conditions
  • accidents
  • obstacles and objects moving near the car

Once the information is gathered, either an on-board computer will make an instant decision, or the data could be sent into the cloud to be processed, and then a decision would be sent back to the vehicle.

Smarter than humans

Imagine a scenario where Car A is travelling down a highway at 80mph. Suddenly, Car B pulls out in front of Car A.

To avoid an accident, the sensors on both cars would need to talk to each other. As a result, Car A would brake, and Car B would speed up, in order to avoid a collision.

[Image: autonomous cars. Credit: Getty Images. Caption: Engineers want autonomous vehicles to connect to the cloud, as well as each other.]

“We need to look at how long it takes for the message to be transmitted between sensors and then get to the computer in each car, and then how long it takes for the computer to make a decision, and all of this has to be in less time than a human would take to make a decision – 2 milliseconds,” Jane Rygaard, of Finnish tech firm Nokia, tells the BBC.

“We need a network supporting this, and 5G is that network.”
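Some back-of-the-envelope arithmetic shows why the latency budget matters at highway speed; the 4G figure below is a typical round-trip value assumed for illustration, not a number from the article.

    # How far a car travels before a remote decision arrives.
    MPH_TO_MPS = 0.44704
    speed = 80 * MPH_TO_MPS          # about 35.8 m/s

    for label, latency_ms in [("4G, ~50 ms (assumed)", 50), ("5G target, 2 ms", 2)]:
        travelled = speed * latency_ms / 1000
        print(f"{label}: {travelled:.2f} m travelled before the decision lands")
    # ~1.79 m on 4G versus ~0.07 m on 5G: the difference between drifting
    # most of a car width and reacting almost on the spot.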

UK national mapping agency Ordnance Survey agrees: “When you switch a light on, it turns on immediately. That’s what you need with autonomous cars – if something happens, the car needs to stop immediately. That’s why the high frequency 5G signals are required.”

But it’s not just about the car itself – technology firm Ericsson says that in the event of a major disaster, or severe congestion around a football stadium, authorities could send instant alerts to autonomous cars, warning them to use alternative routes instead.

Ericsson has conducted tests in Stockholm, Sweden with car manufacturer Volvo and truck maker Scania, using a counter-terrorism scenario whereby police were able to disable a hijacked connected truck or prevent it from entering certain geo-fenced locations.


Levels of automation

US engineering organisation SAE International has set out six categories of automation for cars:

  • Level Zero: not automated at all
  • Level One: some driver assist features
  • Level Two: car can accelerate and steer by itself, but driver must pay attention
  • Level Three: car can drive by itself on safe road conditions under 37 mph, but driver is still needed
  • Level Four: car can drive completely by itself, but only within a well-mapped area
  • Level Five: car can drive completely by itself, over any terrain, anywhere in the world

Research firm Gartner expects Level Three and Level Four autonomous vehicles to begin appearing in late 2018 in very small numbers, and by 2025, it expects that there will be more than 600,000 autonomous vehicles on the roads worldwide.


Millimetre wave antennas

Ordnance Survey says autonomous vehicles are possible with 5G, but initially, they will only be able to run in a well-mapped geographic area, such as a densely populated city.

The government agency is building a detailed 3D map of the UK that visualises all permanent fixtures like buildings, street signs and bridges, as well as temporary objects like Christmas decorations, cranes and hanging flower baskets – all of which could affect the strength of the 5G signal a car receives as it drives by.

In order for autonomous cars to simultaneously connect to the mobile network, existing 4G mobile antennas on buildings will not be enough – there will need to be lots of smaller millimetre wave antennas, located 200-300m apart from each other.

“For every one mobile base station we have today, you’ll probably need 60 or 70 millimetre wave transmitters and receivers,” explains Richard Woodling, a managing consultant with the Ordnance Survey.
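Woodling’s ratio can be sanity-checked with rough geometry; the macro cell radius below is an assumed urban figure chosen for illustration, not Ordnance Survey data.

    # Rough arithmetic behind the "60 or 70" figure.
    import math

    macro_radius_m = 1200        # assumed urban 4G macro cell radius
    spacing_m = 250              # midpoint of the 200-300 m spacing above

    macro_area = math.pi * macro_radius_m ** 2
    cell_area = spacing_m ** 2   # one mmWave transmitter per grid square
    print(round(macro_area / cell_area))   # ~72 transmitters per macro cell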

[Image: Ford’s self-driving test vehicle on the streets of Miami. Credit: Ford]

It is unlikely that fully autonomous cars will be possible for a long time to come, but Ford is hoping to launch a Level Four car in 2021.

To this end, Ford is mapping the roads and environment in Miami.

It has developed simulation software to try to predict all possible situations that a car might find itself in, so that it can eliminate unsafe outcomes.

But Mr Woodling is sceptical that an autonomous car in a city will be ready so soon.

“I don’t see it happening in my lifetime,” he says. “There’s no way you could put that in London and say we’re ready for everyone to have an autonomous vehicle – we’re a long way away from that.”

Iowa TV Station Uses Drones to Capture Video

By BARB ICKES, Quad-City Times

DAVENPORT, Iowa (AP) — To birds, they are UFOs and not to be trusted.

To drone operators, they are today’s way of doing business — a new tool of many trades.

To much of the world, the futuristic-looking gadgets are delivering a whole new perspective.

But the sky has limits.

When a camera gets its own name, it’s a big deal.

Moline-based WQAD-TV was ahead of the TV field two years ago when its first few photographers started studying for a test they never dreamed they’d be taking. They would have to prove to the Federal Aviation Administration that they knew how to safely fly the station’s expensive new drone. And that meant proving their understanding of physics, weather — even airport codes.

“I just made it a mission to get every photographer and every member of our digital team to be licensed pilots,” news director Alan Baker told the Quad-City Times. “The hardest part about it is taking the exam.

“Everybody got two months to study.”

After the first three photographers passed the reportedly challenging exam, “NED” began to take flight. An acronym for News Eight Drone, the industrial-model aircraft almost instantly became a popular part of the news team, its image even appearing on a news car.

With the capacity to deliver live video from the sky, WQAD’s drone fleet soared.

The station now has 12 of the aircraft — one for every photographer and one to spare. Though “Big NED” remains the largest and most powerful in the fleet, the others, nine “Little NEDs” and one “Phantom,” can go live, too.

“I have the largest number of pilots in our company, which is Tribune Media,” Baker said. “I’m a former photographer myself. It’s a different way to shoot video than we’ve ever had before outside of a helicopter.

“It’s impressive.

“Viewer reaction has been very good. Drone videos we post online have been some of our most popular. One of our first was from the tornado in Ottawa last year. We had a bird’s-eye view of the damage.”

News drones don’t need a tornado touchdown to be called into service. Photographers are finding new ways to employ their aircraft in everyday stories.

“Whenever I was shooting, I was looking for ways to get high — up on a parking garage or a wall or fence; anything,” said Andy McKay, WQAD’s chief photographer. “The drones can go almost anywhere, and we use them in features, breaking news, general news.”

Photographer Jenny Hipskind said she has found low-flying uses for her drone, too.

“Sometimes, a foot above a cornfield gives you something really nice,” she said. “Low shots can be cool, too.

“Ag is a good use, along with breaking news of accidents where we can’t get close and flooding, ice jams and bridge shots.

“Birds will attack them sometimes, though. They think you’re a threat.”

Many professional production companies have replaced film cameras with drones, said Doug Froehlich, a Creative Services Producer at WQAD. The station also uses them to shoot video for commercials for clients, long-form video productions and for their own promotions.

The video is desirable because of its unique access and its quality, he said.

“NED, for instance, is extremely stable, even in high wind,” he said. “It’s like a tripod in the sky.”

The big drone requires two operators — one to run the drone controls and one to run the camera. But the smaller ones, which make up the bulk of the fleet, require just one operator. In the event that more than one drone is up, internal sensors prevent them from colliding. And the moment the pilot lets go of the controls, the drone stops flying and “parks” in mid-air.

“They add a whole new dimension to what we’re shooting in news,” photographer Stephanie Mattan said. “When I shot the new bridge at Sabula and got some video of the car ferry, it gave viewers a whole new perspective of that project.”

During the implosion of the old bridge at Sabula/Savanna, NED shot live while two other drones shot from different angles.

But drones are not above the law, and all licensed operators are subject to FAA rules regarding access to airspace. Those rules are undergoing changes that aim to make it easier for pilots to fly more freely in airspace that is protected because of proximity to airports.

To the FAA, a drone is known as an unmanned aircraft system, or UAS.

And there are two types of operators: licensed pilots and hobbyists.

In addition to passing an exam, licensed pilots must follow a whole host of UAS rules established by the FAA. And it is incumbent upon the operator to know the rules.

“Hobbyists get to do whatever they want,” Hipskind said. “The pros have all these rules.”

Rule number one: Airspace within a 5-mile radius of an airport is off limits, unless the pilot has an FAA-issued waiver.

Froehlich at WQAD has such a waiver, permitting him to fly in a half-mile radius of downtown Bettendorf. He sought the waiver so the station could get video of Interstate 74 bridge construction, even though the area is within the no-fly zone designation of the Quad-City International Airport.

It took him four months to get the FAA to approve it.

“The waiver process is long and slow,” said Baker, the news director. “It’s cumbersome. We’re looking forward to some relief.”

And it recently arrived.

As of Sept. 14, WQAD drone operators were able to get real-time approval on their requests to enter restricted airspace. Froehlich said he asked for permission to send up his drone just west of the airport on the day the system went live, and he received permission to do so “in less than 30 seconds.”

The Low Altitude Authorization and Notification Capability — LAANC — is a partnership between the FAA and private industry to support unmanned aircraft system access to previously restricted airspace. The new system took about six months to test and approve, and the FAA will continue to monitor its success.

Pilots simply submit an access request through a cell-phone app, which then goes to the local air traffic control tower. Requests are checked against airspace data, including temporary flight restrictions.

“Local authorities have established areas around the airport that are safe to fly for drone operations and qualify for automatic authorization,” an FAA spokesman said. “The local air traffic control facility creates gridded maps called UAS Facility Maps that define a maximum height for which an operation could be considered safe for automatic authorization. Also, as drone pilots plan their flights, they are reminded of restrictions in the area and notifications they should be aware of.”
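The described flow lends itself to a simple authorization check. The sketch below is hypothetical: the grid, altitude caps and function names are invented to illustrate how a UAS Facility Map lookup might work, and do not represent the FAA’s actual implementation.

    # Hypothetical LAANC-style check against a gridded UAS Facility Map.
    FACILITY_MAP = {            # (grid_row, grid_col) -> max approvable feet AGL
        (0, 0): 0,              # runway approach: no automatic approval
        (0, 1): 100,
        (1, 0): 200,
        (1, 1): 400,
    }

    def auto_authorize(cell, requested_alt_ft, tfr_active=False):
        """True if the request qualifies for automatic authorization."""
        if tfr_active:          # temporary flight restriction in effect
            return False
        return requested_alt_ft <= FACILITY_MAP.get(cell, 0)

    print(auto_authorize((1, 0), 150))   # True: under the 200 ft cap
    print(auto_authorize((0, 0), 50))    # False: zero-altitude cell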

Meanwhile, licensed pilots continue to have other rules to contend with, including a maximum altitude of 400 feet and maintaining visual sight of their drones at all times. Their aircraft cannot exceed 55 pounds and must have a ground speed of less than 100 miles an hour.

Other rules are common-sense based, like using caution when photographing a fire from above.

“We don’t want to get over the top of a fire where there’s too much heat, or we can upset firefighters,” McKay said. “We’ve met with Medic EMS and the Illinois State Police, so they know what we’re trying to do.

“Emergency responders are busy. They shouldn’t have to worry about what’s in the sky.”

In some cases, law enforcement can make drone rules on the fly. For instance, when police were being led to the remains of Iowa college student Mollie Tibbetts in a farm field in August, Baker said, they issued a temporary no-fly zone in the area.

Television news isn’t the only industry being impacted by unmanned aircraft systems.

Drones are widely used in real-estate listings and in nearly every level of law enforcement.

Brent Bult is regional sales manager for Vizzi Media Solutions in Clive, which does considerable real estate photography business in the Quad Cities.

He earned his drone operator’s license two years ago and said aerial photography for real estate listings has been taking off ever since.

“We have seen a big increase in the number of agents and their customers expecting aerial photography and video as the standard for their listings,” he said. “I would say 90 percent of what we do is outdoor photography for residential real estate. It’s very big.”

Vizzi offers a number of photo packages, beginning with three aerial shots for $100. Most drone photos are taken from the front and rear of a home, but homeowners frequently request specific shots, such as a back-yard swimming pool or a neighboring golf course or water view.

Bult said he has many agents with Ruhl & Ruhl, Mel Foster and Keller-Williams in the Quad Cities, so he frequently travels here.

“We’re seeing a big, big increase in volume and revenue in the Quad Cities,” he said. “Part of what we do is to educate local Realtors. Some of them are buying their own drones, but they aren’t getting licensed. They don’t realize the rules — that you can’t make money, collect that commission, as a hobbyist operator.

“Agents know their business is very competitive and, today, you have to have aerials with your listings. Homeowners expect it.”

Some housing markets aren’t quite so hot on aerials, though. Homes listed under $100,000 or those in foreclosure typically don’t generate drone traffic, Bult said.

Though he was unfamiliar with the FAA’s LAANC initiative, Bult said the system sounds like a promising way to avoid drone delays.

“If I’m in a no-fly zone or close to restricted airspace, my drone lets me know,” he said. “If I’m actually in the space, it won’t fly. That new FAA system sounds wonderful.”

Bettendorf Police Chief Keith Kimball’s department was on a mission in July to locate a specific drone. A 1-year-old was badly injured at Crow Creek Park after being struck by an out-of-control drone. The operator, a teenager, ultimately came forward.

The 19-year-old was cited and fined for operating a drone in a city park, which requires city permission in Bettendorf.

But this is not to say police are anti-drone.

“At this time, we do not have a drone or use a drone but, as the technology advances, I could see us getting one in the future,” Kimball said.

He then provided the following list of uses for drones in law enforcement: traffic accident investigation and reconstruction; search and rescue; active shooter response; SWAT/tactical operations; surveillance and crowd monitoring; crime scene analysis; hazardous materials incidents; bomb threats; surveying damage from natural disasters.

When a child fell into the Mississippi River from a dock at Schwiebert Riverfront Park in Rock Island on July 24, area firefighters searched for days by boat. An out-of-town volunteer search team came to Rock Island a week later and suggested that a drone be sent up to shoot video along the shoreline as an effective way to search for remains.

But the strategy didn’t take off before the 2-year-old’s body was found a couple days later near Muscatine.

Members of the boy’s family lamented that he might have been found much earlier if only local law enforcement had a way to search the river from the sky.

___

Information from: Quad-City Times, http://www.qctimes.com

An AP Member Exchange shared by the Quad-City Times.

Copyright 2018 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


Hey, Alexa, Are Voice Commands The Future Of The Customer Service Experience?

Micah Solomon

Voice-based customer service (Alexa, Siri, Google Duplex and Google Assistant, auditory chatbots): Is this the future of customer service, customer support, and the overall customer experience? Certainly some big players think that this form of interaction, which is sometimes known as VUX, short for Voice-Based User Experience, or VUI, for Voice User Interface, is the coming wave.

Amazon has made a splash, of course, with everything Alexa-related. Siri and her sisters (Nuance’s Nina, which you can white-label for your brand; the personable Dom for Domino’s, and so forth) are continually growing in reach. [Disclosure: I was an early investor in one of the technologies now deployed in Siri.] AI-powered chatbots that respond to voice queries are becoming more ubiquitous. And Google Duplex, a not-yet-ready-for-prime-time version of Google Assistant that can do outbound calling, has recently been turning heads and blowing minds.

The only trick with all of this is that by thinking of voice commands and voice interactions as “the thing of the future,” there’s a risk of making it anything but.

Let me explain. The reason that technology and consumer companies (and, often, consumers themselves) are so pumped about voice? Because voice commands, queries, and interactions seem natural. Other forms of input, like typing – whether with ten fingers on a full-sized keyboard or, worse, two thumbs on a phone – are presumed to be less so.

[Image: Nick Fox, VP of Assistant and Search for Google, talks about the Duplex program. Credit: AP Photo/Marcio Jose Sanchez]

But is voice interaction natural in all contexts? Sure, talking with Alexa as you move about the house in your jammies makes sense. Likewise in your hotel room via Ivy, the hospitality industry’s Watson-powered assistant from Go Moment that you’ll find in more than 300,000 guestrooms in some 1400 hotels. (Crucially, voice commands can also be a godsend for customers with manual dexterity disabilities or low vision. Although, as Christopher Wilkinson, Director of Product Design at Devbridge Group, points out, to get this right creates “an additional need for affordances in how voice-enabled devices are equipped to respond to varying types of audio input and output. Companies should consider allowing people to control the pacing of the response, the volume of the response, the time given to provide instruction, and the ability to adjust previous instructions as ways to make voice technology more accessible.”)

On the other hand, talking with a voice assistant or voice-driven chatbot when you’re in an open-concept workspace, airplane seat, restaurant, or public restroom (you know who you are)? That’s not such a natural fit.


The future of the customer experience, in other words, belongs to those companies and brands that adapt to the desires of the customer. And the customer is going to have varying desires depending on context. The future’s not only not one-size-fits-all, it’s not even one-size-fits-one-customer. The channel of customer support that works best for a customer in the morning over breakfast isn’t necessarily the same as what will work for them on a crowded subway heading home that afternoon.

Bold360, for example, is an AI-informed customer support platform that, alongside other channels and capabilities, offers the ability for customers to contact a brand via voice and get appropriate responses based on a version of natural language processing that Bold360 calls Natural Language Understanding. [Disclosure: I have done consultant work for Bold360.] Yet Ryan Lester, Director of Engagement Technologies at LogMeIn, the maker of Bold360, tells me he doesn’t like to see the industry rush wildly toward voice without considering the integration and alternatives necessary to avoid turning customers off. “Although voice will undoubtedly become a key channel in the future, as companies begin to explore it, they need to do so in a holistic way. Voice can’t be a siloed channel; it needs to be integrated into the experience seamlessly, be conversational and contextual, be easily shifted to other messaging channels and allow for easy access to a human agent when required.”

Raj Singh, President and CEO of Go Moment, the creator of Ivy, makes a broader point: “One of the most important factors in making these technologies successful is to make sure they ‘know what they don’t know’—that they’re designed to bow out either when the speech detection isn’t sufficiently solid or the concepts being articulated call for human intervention–for example, in our hospitality context, a guest who is upset with service and really needs care and comfort from a hotel manager rather than a simulated human interacting with them over a speaker.”

Lester offers a final caution: “Like all forward-friendly technologies, the risk of overreach is a real one, and can result in customer frustration or even rebellion. In the case of voice, the particular risk with clunky deployment is that the novelty aspect will dominate, and the experience will ultimately wear thin.”

 

micah@micahsolomon.com – www.micahsolomon.com – (484) 343-5881. Micah Solomon is an author, keynote speaker, trainer, consultant and influencer specializing in customer service, customer experience, company culture and hospitality.

Amdocs’ AI-powered SmartBot is making customer service in comms & media much more personalised

Amdocs debuts first bot designed and pre-integrated for the communications and media industries

Communications software giant Amdocs recently launched its SmartBot, a machine learning-based bot that enables digital service providers to deliver customer care, sales and marketing engagements through highly personalized self-service interactions that are simple, quick and helpful.

Roni Dvir, Product Marketing Manager at Amdocs Digital Intelligence, answers our questions about the SmartBot below.

What is the Amdocs SmartBot?

Amdocs SmartBot is the first bot that is intelligence-driven, pre-integrated with back-end, mission-critical systems and uniquely designed for the communication and media industry. These capabilities enable digital service providers (DSPs) to transform self-service engagements by making them highly personalized and contextual, as well as simple, efficient and effective.

Amdocs SmartBot leverages best-of-breed artificial intelligence (AI) and natural language processing (NLP) capabilities to engage customers in intuitive, personalized, contextual and intelligent conversations. Infused with 35 years of Amdocs domain expertise and pre-trained on business processes and telecom-specific intents, it provides intelligence-enabled bot-to-human customer experiences to meet consumers’ precise and immediate needs.

Amdocs SmartBot leverages aia, Amdocs’ intelligence platform, to inject intelligence into every customer engagement, including next best offer/action (NBO/NBA) recommendations based on the customer profile, past behaviors and current context, all in alignment with the organization’s marketing and customer care objectives. It understands context to execute transactions on behalf of users and expedites resolution through proactive service.

The integration with aia is a major industry differentiator as it boosts Amdocs SmartBot’s predictive capabilities and enables it to proactively deliver personalized engagements as the customer’s context changes throughout their journey.

  • Channel agnostic: Amdocs SmartBot helps digital service providers transform their customer experience on mobile, web and social messaging platforms. This omni-channel experience includes text channels such as Facebook Messenger, Kik, Skype, Slack, SMS, Telegram and Twitter, as well as voice assistant channels like IVR and Amazon Alexa. Amdocs SmartBot also has the capability to seamlessly shift the bot interaction to a human agent in the digital service provider’s support center when needed.
  • Deeply integrated with a digital service provider’s core information systems such as CRM, order management and catalog, Amdocs SmartBot has a 360-degree view of the customer and the context of the interaction, which enables digital service providers to provide their customers with an end-to-end experience, with the potential to grow care-to-commerce revenue opportunities by making more relevant predictive care and promotional offers to customers.

How does it help media and entertainment companies increase customer satisfaction and drive customer experience?

The trend towards using chatbots as the medium of choice for customer engagements is well underway. Still, the most widespread ones in use today are those that answer only simple questions, such as basic product, price or bill information and high-level support requests. These are “rule-based” chatbots, which are programmed to understand predefined commands that are specific to the processes related to the request at hand. Ironically, while adequately functional, such chatbots can ultimately wind up having the opposite effect to that which both the consumer and the service provider seek: once a conversation veers slightly from predefined scripts, things can get complicated – since the chatbot is ill-equipped to handle dynamic dialogues.

Service providers who want to add chatbots to their customer engagement arsenal will want them to have capabilities that live up to their customer experience promise. To achieve this, they will need to overcome a number of strategic challenges:

  • Overcome the chasm that exists between virtual and live agents: When the typical, functional type of chatbot arrives at an impasse and doesn’t “know” how to answer a certain question or provide relevant information, the customer experience is placed at risk. To be effective, chatbots need to (a) be able to handle many more of the types of content that, traditionally, only live agents handle; and (b) be seamlessly integrated with live agent channels, with smooth handoff that is transparent to the customer.
  • Ensure chatbots don’t miss out on care-to-commerce opportunities: Service providers who leverage chatbots for simple support information-driven interactions will miss out on unique revenue-generating opportunities. However, if a chatbot can “know” what a customer likes, needs and wants – and make relevant and timely marketing offers accordingly – this would significantly impact the top line.
  • Ensure the chatbot “understands” telecom-specific intents: Even if the chatbot can learn from every customer interaction and converse in a more human way, if it lacks industry domain knowledge, it will still lack the capability to accurately understand what the customer wants and needs.
  • Integrate the chatbot with mission-critical business systems: A chatbot that is not integrated with systems such as billing, order management, CRM and so on will not have access to a full 360-degree profile of the customer. Without knowing what was purchased in the past, or historical and current usage patterns and status, for example, it will not be able to provide relevant information and support, or predict what the customer may need next.
  • Meet digital consumers’ expectations for personalized conversations: The simple engagements currently prevalent with chatbots do not satisfy digital consumers’ needs for personalized and contextual conversations: they do not address their specific needs, their unique journey with the brand or the types of support best suited to them.
  • Ensure the bot learns from every engagement: Forthcoming customer interactions and experiences must be finely attuned to each individual customer, and delivered with great accuracy.

What are the benefits to digital service providers in terms of its ability to match up to their back office systems and infrastructure?

Pre-integration with back-end, mission-critical systems provides a single source of truth for customer data, products and promotions, and order management, which is critical for ensuring consistent and personalized experiences. Amdocs SmartBot is flexible, working with both Amdocs and non-Amdocs BSS back-end systems.

Systems of record (e.g. billing and CRM systems) are updated with every chatbot interaction to support subsequent human-assisted interactions. Also, root-cause analysis of customer interactions is used to improve customer experiences:

  • Specifically, this means analyzing customer interactions in order to uncover the drivers behind increased spend, positive sentiment, and referrals.
  • Machine learning is helping the chatbot to learn which activities are most likely to improve first contact resolution rates, average handle times, and customer satisfaction.
  • Combined with artificial intelligence, Amdocs SmartBot can use this learning to tailor each new customer interaction and achieve desired objectives.

Why was Microsoft such a key component in bringing the platform to market?

Cognitive Services

Amdocs has partnered with Microsoft, whose LUIS (Language Understanding Intelligent Service) serves as the core NLP engine on top of which the CSP-specific intents and the bot application are built.

The NLP engine is responsible for providing the conversational capability for the bot to understand the customer. In order to provide such information to the bot, the NLP engine will:

    • Discover Intent: Using a set of defined Telco intents and examples, the NLP algorithms will return a list of possible customer intents with accuracy ranking. When receiving this list, the bot application will turn to the intent with the highest score if it exceeds a certain threshold (see the sketch after this list).
      • Telco Intents: Once the NLP engine identifies the phrase and the utterances by the customer (whether via text or voice), it is essential to associate it to a Telco-specific intent. For example, when a customer says, “I have lost my device”, defining an intent for lost device and associating the intent to the dialog of “suspend customer services” is a key function of the bot. The ability to do this is accomplished by extending the horizontal NLP capabilities with industry domain and knowledge. Amdocs delivers out of the box intents, which can be further extended and added based on specific service provider requirements. The intents are built such that they can either be consumed by a bot application or by any other incoming form of customer interaction.
    • Extract Entity: The NLP is capable of identifying certain parts of the customer sentences as entities. The entities are isolated and sent to the bot application in the context of the identified intent. The bot will use the entities as attributes that are needed to execute the dialog. Examples of simple entities are name, date and country. So if a customer is flying on a trip, the destination and dates will serve the bot in the roaming plan activation flow.

With deep understanding of the CSP world, Amdocs defines a variety of specialized entities: device, plan, bundle and more. Utilizing the different entity types in the NLP service, we define groups and hierarchies to provide deeper understanding of each entity.

For example: when defining an entity of ‘device’, we can know that the Samsung S8 is a device – for some intents this information is enough (e.g. intent: ‘need help with network definitions’, entity: device = Samsung S8) – but we can also create groups of device types by category, include the Samsung S8 in the entity of ‘top tier devices’, and apply a different dialog approach when handling a sales inquiry for a device in this group. So it’s all about knowing the industry and utilizing the tools based on this knowledge.

    • Determine sentiment: For each customer sentence, the NLP service returns a sentiment score. The bot framework can use this score at decision points and share it with other systems.
    • Detect language: For each customer sentence, the NLP service returns the detected language, which can be used in a multilingual implementation.
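Taken together, a bot application consumes these NLP outputs roughly as follows. This is a minimal sketch assuming the LUIS v2 REST response format; the app ID, subscription key, confidence threshold and intent names are hypothetical placeholders rather than Amdocs’ actual configuration:

    # Sketch of a bot application consuming the NLP engine's output:
    # discover the top intent, apply a confidence threshold, and pass the
    # extracted entities to the matching dialog.
    import requests

    LUIS_URL = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>"
    HEADERS = {"Ocp-Apim-Subscription-Key": "<subscription-key>"}
    INTENT_THRESHOLD = 0.7  # below this, ask the customer to rephrase

    def analyze(utterance):
        """Send one customer utterance to the NLP engine and return its analysis."""
        response = requests.get(LUIS_URL, headers=HEADERS, params={"q": utterance})
        response.raise_for_status()
        return response.json()

    def route(utterance):
        result = analyze(utterance)
        intent = result.get("topScoringIntent", {})
        # Entities arrive isolated from the sentence, keyed here by type.
        entities = {e["type"]: e["entity"] for e in result.get("entities", [])}

        if intent.get("score", 0.0) < INTENT_THRESHOLD:
            return "Sorry, could you rephrase that?"

        if intent["intent"] == "LostDevice":
            # e.g. "I have lost my device" -> suspend-customer-services dialog
            return "Starting the suspend-services dialog..."
        if intent["intent"] == "ActivateRoaming":
            destination = entities.get("destination", "your destination")
            return "Starting the roaming-plan activation flow for " + destination
        return "Handling intent: " + intent["intent"]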

Are there new revenue opportunities for service providers?

With Amdocs SmartBot, service providers can grow revenue by automatically and intelligently identifying upsell and cross-sell opportunities and making relevant, personalized offers.

Because it is deeply integrated with a digital service provider’s core information systems, such as CRM, order management and the product catalog, the SmartBot can act on its 360-degree view of the customer and the context of the interaction to surface these care-to-commerce opportunities, making predictive care and promotional offers at the most relevant point in the conversation.

TENSORFLOW MACHINE LEARNING ON THE AMAZON DEEP LEARNING AMI

Lab Overview

TensorFlow is a popular framework used for machine learning. The Amazon Deep Learning AMI comes bundled with everything you need to start using TensorFlow from development through to production. In this Lab, you will develop, visualize, serve, and consume a TensorFlow machine learning model using the Amazon Deep Learning AMI.

Lab Objectives

Upon completion of this Lab you will be able to:

  • Create machine learning models in TensorFlow
  • Visualize TensorFlow graphs and the learning process in TensorBoard
  • Serve trained TensorFlow models with TensorFlow Serving
  • Create clients that consume served TensorFlow models, all with the Amazon Deep Learning AMI
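As a taste of what the Lab covers, the following is a minimal sketch of the kind of model you might build: a TensorFlow 1.x linear regression whose graph and training loss can be inspected in TensorBoard. The toy data, log directory and hyperparameters are illustrative assumptions, not the Lab’s exact model:

    # Minimal TensorFlow 1.x sketch: a linear model whose graph and training
    # loss can be visualized in TensorBoard.
    import numpy as np
    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None], name="x")
    y = tf.placeholder(tf.float32, shape=[None], name="y")

    w = tf.Variable(0.0, name="weight")
    b = tf.Variable(0.0, name="bias")
    prediction = w * x + b

    loss = tf.reduce_mean(tf.square(prediction - y), name="loss")
    tf.summary.scalar("loss", loss)
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
    summaries = tf.summary.merge_all()

    # Toy data drawn from y = 3x + 1 plus noise.
    xs = np.random.rand(100).astype(np.float32)
    ys = 3.0 * xs + 1.0 + np.random.normal(scale=0.05, size=100).astype(np.float32)

    with tf.Session() as sess:
        writer = tf.summary.FileWriter("/tmp/tf_lab_logs", sess.graph)
        sess.run(tf.global_variables_initializer())
        for step in range(200):
            _, summary = sess.run([train_op, summaries], feed_dict={x: xs, y: ys})
            writer.add_summary(summary, step)
        writer.close()

Running tensorboard --logdir /tmp/tf_lab_logs afterwards serves the graph and the loss curve in a local browser.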

Lab Prerequisites

You should be familiar with:

  • Working at the Linux command line
  • The Python programming language

Some knowledge of linear algebra (basic vector and matrix operations) and a basic understanding of neural networks are beneficial, but not required.

Lab Environment

Before completing the Lab instructions, the environment will look as follows: (diagram of the starting environment)

After completing the Lab instructions, the environment should look similar to: (diagram of the completed environment)
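For the serving and consuming objectives, a client can query a model hosted by TensorFlow Serving over gRPC. The sketch below assumes TensorFlow Serving is listening on its default port 8500 and that a model was exported under the hypothetical name "linear" with an input tensor named "x":

    # Sketch of a TensorFlow Serving gRPC client; model and tensor names
    # are hypothetical.
    import grpc
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel("localhost:8500")
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "linear"
    request.model_spec.signature_name = "serving_default"
    request.inputs["x"].CopyFrom(tf.make_tensor_proto([1.0, 2.0], shape=[2]))

    result = stub.Predict(request, 10.0)  # 10-second timeout
    print(result.outputs)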

Astronaut Scott Tingle controls DLR robot Justin from space

05 Mar 2018 | Source: German Aerospace Centre (DLR)

Collaboration between space and Earth

  • Test run at the DLR site in Oberpfaffenhofen: artificial intelligence for the cooperation of robot and astronaut
  • Experiment with more challenging tasks for the human-machine team in preparation for planetary exploration missions
  • Focus: Space, Artificial Intelligence, Robotics, Human Spaceflight

The operator orbits Earth at an altitude of 400 kilometres, while the assistant works on the ground. During the ‘SUPVIS Justin’ experiment, those who send and receive the commands are in a long-distance relationship. Aboard the International Space Station (ISS) on 2 March 2018, United States astronaut Scott Tingle selected the required commands on a tablet, while the robot Justin of the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) performed the necessary work on a solar panel in a terrestrial laboratory in Oberpfaffenhofen, Germany, as instructed. Engineers at the DLR Institute of Robotics and Mechatronics had fitted their robot with the necessary artificial intelligence so that it could perform these subtasks autonomously, without detailed individual commands. “The robot is clever, but the astronaut is always in control,” says Neal Lii, the DLR project manager. In August 2017, the experiment was successfully carried out for the first time as part of the METERON project (Multi-Purpose End-to-End Robotic Operation Network), together with the European Space Agency (ESA). In the second test run, the tasks have now become more demanding for both the robot and the astronaut.

Robot with artificial intelligence

For the experiment, the robot Justin was relocated to Mars as a worker – visually at least – in order to inspect and maintain solar panels as autonomously as possible – task by task – and to provide the orbiting astronaut with constant feedback for the next stages of work. “Artificial intelligence allows the robot to perform many tasks independently, making us less susceptible to communication delays that would make continuous control more difficult at such a great distance,” Lii explains. “And we also reduce the workload of the astronaut, who can transfer tasks to the robot.” To do this, however, astronauts and robots must cooperate seamlessly and also complement one another.

Human-machine team

To begin with, the scientists had the human-machine team handle a few standard tasks, which had already been practised in advance on the ground and also performed by Justin from the ISS. But subsequent assignments went well beyond mechanical tasks. The solar panels were covered with dust, a problem that astronauts and robots on a planetary mission to Mars, for example, would have to overcome. The panels were also not optimally oriented towards the sunlight. For solar panels powering a Martian colony, this would soon result in an ever weaker energy supply.

Tingle, who was viewing the working environment on the Red Planet through Justin’s eyes on his tablet, quickly realised that Justin needed to clean the panels and remove the dust. And he also had to realign the solar components. To do this, he could choose from a range of abstract commands on the tablet. “Our team closely observed how the astronaut accomplished these tasks, without being aware of these problems in advance and without any knowledge of the robot’s new capabilities,” says DLR engineer Daniel Leidner. The new tasks also posed a challenge for Justin. Instead of simply reporting whether he had fulfilled a requirement or not, as in the initial test run in August 2017, this time he and his operator had to ‘estimate’ the extent to which he had cleaned the panels, for example. In the next series of experiments in summer 2018, the German ESA astronaut Alexander Gerst will take command of Justin – and the tasks will again become slightly more complicated than before because Justin will have to select a component on behalf of the astronaut and install it on the solar panels.

The astronaut’s partner

“This is a significant step closer to a manned planetary mission with robotic support,” says Alin Albu-Schäffer, head of the DLR Institute of Robotics and Mechatronics. In such a future, an astronaut would orbit a celestial body, from where he or she would command and control a team of robots fitted with artificial intelligence on its surface. “The astronaut would therefore not be exposed to the risk of landing, and we could use more robotic assistants to build and maintain infrastructure, for example, with limited human resources.” In this scenario, the robot would no longer simply be the extended arm of the astronaut: “It would be more like a partner on the ground.”

Source: http://www.research-in-germany.org/news/2018/3/2018-03-05_Astronaut_Scott_Tingle_controls_DLR_robot_Justin_from_space

Google’s Hinton Outlines New AI Advance That Requires Less Data

By Alastair Sharp

TORONTO (Reuters) – Google’s Geoffrey Hinton, an artificial intelligence pioneer, on Thursday outlined an advance in the technology that improves the rate at which computers correctly identify images while relying on less data.

Hinton, an academic whose previous work on artificial neural networks is considered foundational to the commercialization of machine learning, detailed the approach, known as capsule networks, in two research papers posted anonymously on academic websites last week.

The approach could mean computers learn to identify a photograph of a face taken from a different angle from those it had in its bank of known images. It could also be applied to speech and video recognition.

“This is a much more robust way of identifying objects,” Hinton told attendees at the Go North technology summit hosted by Alphabet Inc’s Google, detailing proof of a thesis he had first theorized in 1979.

In the work with Google researchers Sara Sabour and Nicholas Frosst, individual capsules – small groups of virtual neurons – were instructed to identify parts of a larger whole and the fixed relationships between them.

The system then confirmed whether those same features were present in images the system had never seen before.

Artificial neural networks mimic the behavior of neurons to enable computers to operate more like the human brain.

Hinton said early testing of the technique had produced half the errors of current image recognition techniques.

The bundling of neurons working together to determine both whether a feature is present and its characteristics also means the system should require less data to make its predictions.

“The hope is that maybe we might require less data to learn good classifiers of objects, because they have this ability of generalizing to unseen perspectives or configurations of images,” said Hugo Larochelle, who heads Google Brain’s research efforts in Montreal.

“That’s a big problem right now that machine learning and deep learning needs to address, these methods right now require a lot of data to work,” he said.
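The bundling Larochelle describes shows up concretely in the squashing non-linearity of the capsule-networks paper (‘Dynamic Routing Between Capsules’, Sabour, Frosst and Hinton, 2017): a capsule outputs a vector whose orientation encodes the feature’s characteristics and whose length, compressed into the range [0, 1), encodes the probability that the feature is present. A minimal NumPy sketch:

    import numpy as np

    def squash(s, eps=1e-8):
        """Capsule squashing: preserve the vector's orientation (the
        feature's characteristics) while compressing its length into
        [0, 1) (the probability the feature is present)."""
        norm_sq = np.sum(s ** 2)
        return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)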

Hinton likened the advance to work on speech recognition using neural networks that two of his students developed in 2009, which improved on existing technology and was incorporated into the Android operating system in 2012.

Still, he cautioned it was early days.

“This is just a theory,” he said. “It worked quite impressively on a small dataset” but now needs to be tested on larger datasets, he added.

Peer review of the findings is expected in December.

(Reporting by Alastair Sharp; Editing by Andrew Hay)

Copyright 2017 Thomson Reuters.

Source: https://www.usnews.com/news/technology/articles/2017-11-02/googles-hinton-outlines-new-ai-advance-that-requires-less-data