Tech News

Unlocking the Value of Values with AI

Investigating AI's role in human digital experiences

In recent years, multiple research reports have confirmed that Australia is falling behind the rest of the world when it comes to AI adoption. Research by Deloitte found that Australia sits below the global average both in the proportion of companies with a comprehensive AI strategy (35% globally) and in the percentage of companies that are seasoned AI adopters (17%, compared to 21% globally).

This has been turned on its head now, as we grapple with the pandemic and businesses turn to AI and robotics to help compensate for lost revenue and layoffs. However, it is likely not the technology itself that will make the difference in the aftermath, but rather the knowledge and creativity of the humans who use it, particularly as we look to ensure that AI is deployed responsibly.

In an attempt to quantify the potential of AI as applied to the web, WP Engine commissioned an international study conducted by the researchers at The University of London and Vanson Bourne that explored the present and near future of AI.

Through our research, and from conversations with experts, we’ve listed a few recommended solutions below for you to gain a competitive edge through the deployment of AI-enabled tools.

Open your algorithms

AI-driven platforms are increasingly being used to make long-lasting decisions, such as who gets a job, a loan, or a university place. There is thus rising concern among users about how algorithms make these decisions: 93% of Australian consumers feel it is very important for organisations to be transparent about how businesses are using their data to create personalised digital experiences.

The combined storm of COVID-19, economic downturn, and global unease is set to accelerate these conversations. AI is already being tested to identify COVID-19 patients who will need intensive care before their condition deteriorates. Hospitals in China are using AI to diagnose pneumonia in COVID-19 cases, potentially saving countless lives. This data is being openly shared with scientists around the world to help accelerate the development of a vaccine.

Businesses should look to the open source community, like the one in which WordPress has flourished, which provides huge benefits in building transparency and trust into the relationship between consumers and system designers. This doesn’t mean giving away your IP or the code your teams have invested time and money into; it means explaining to your customers how it works and even offering them a chance to be involved.

Opening up AI algorithms to a similar level of transparency as open source communities like WordPress have achieved in website development strengthens trust among a consumer base, and it makes visible how the system is arriving at its conclusions and recommendations.

Don’t be evil

Our research found that 57% of Australian IT decision-makers believe their organisations use AI to make a positive impact on the world—but what does this look like in practice, given that AI-based services are increasingly owned by a small subset of global organisations? Google’s AI initiative, Project Maven, which was harnessed by the U.S. military, makes it clear that AI services can potentially be used as weapons. Initial discussions around that partnership raised the question: should your AI ethics extend beyond your own internal use cases and evaluation to those of your clients’ use cases and applications?

COVID-19 has brought up privacy concerns through facial recognition software. Russia and China are using AI facial recognition and heat sensors to track people with fevers. This could be very helpful in stemming the spread but also conjures deep-rooted issues around data collection and online storage, user consent, and mass surveillance.

In this report in particular, we see agencies increasingly using AI to create meaningful and engaging digital experiences. As they become more involved in implementing AI solutions, we’re likely to see agencies develop entire offerings, as foundational as corporate narratives and brand audits, that help corporations clarify their underlying set of values and safe-use restrictions to guide AI implementations.

Beware historical data sets

Many organisations face significant challenges with legacy systems and historical data structures. Here, AI can genuinely help: algorithms can sift through legacy stores and surface the data that is most frequently used and most worth keeping.

The issue of legacy data recently came to the forefront when Amazon’s HR algorithm, trained on 10 years of historical hiring practices to screen new candidates, revealed a bias against female candidates. Amazon quickly shut down the tool. While it seems counterintuitive, this experience should have been viewed as an opportunity. Revealing patterns of bias in historical data when attempting to use that data to formulate future-focused actions can inform strategic decision making, potentially changing how you operate based on flaws and patterns developed over time.
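The kind of audit that exposed Amazon’s problem can be run directly on the historical data, before any model is trained on it. Here is a minimal sketch of the idea, using entirely hypothetical hiring records and the common “four-fifths” selection-rate heuristic (both are our assumptions for illustration, not anything Amazon disclosed):

```python
# Audit hypothetical historical hiring records for group-level bias
# before using them as training data.
from collections import Counter

# Hypothetical records: (group, was the candidate hired?)
records = [
    ("F", False), ("F", False), ("F", True), ("F", False),
    ("M", True), ("M", False), ("M", True), ("M", True),
]

def selection_rates(records):
    """Hiring rate per group: hires / applicants."""
    applied = Counter(group for group, _ in records)
    hired = Counter(group for group, was_hired in records if was_hired)
    return {group: hired[group] / applied[group] for group in applied}

rates = selection_rates(records)
print(rates)  # → {'F': 0.25, 'M': 0.75}

# Four-fifths heuristic: flag any group whose selection rate falls
# below 80% of the highest group's rate.
best = max(rates.values())
flags = {group: rate / best for group, rate in rates.items() if rate / best < 0.8}
print(flags)  # groups falling below the 80% threshold
```

A check this simple won’t catch every form of bias, but it demonstrates the point of the paragraph above: the disparity is visible in the data itself, and finding it should be treated as useful intelligence rather than an embarrassment.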

With a black swan event like COVID-19, machine learning models, which work by identifying patterns in historical training data, yielded results that in some cases were completely invalid. This happened at a UK supermarket, where humans needed to step in and put a brake on online ordering so the company could catch up with a surge in demand that resembled a DDoS attack.
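One pragmatic safeguard against this failure mode is an out-of-distribution check: flag when live inputs drift far outside the range of the historical data a model was trained on, so humans know not to trust the forecasts blindly. A minimal sketch, using hypothetical daily order volumes and a simple z-score threshold (both assumptions for illustration):

```python
# Flag live values that sit far outside the historical training distribution.
import statistics

# Hypothetical daily order counts from "normal" pre-crisis operation.
historical_daily_orders = [980, 1020, 995, 1010, 1005, 990, 1000]

def is_out_of_distribution(value, history, z_threshold=4.0):
    """True if `value` is more than z_threshold standard deviations
    from the historical mean, i.e. the model's training data says
    nothing about conditions like these."""
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    return abs(value - mean) / std > z_threshold

print(is_out_of_distribution(1015, historical_daily_orders))  # ordinary day
print(is_out_of_distribution(5200, historical_daily_orders))  # pandemic-style surge
```

In practice the threshold and the baseline window would need tuning, but the design point stands: the system should know when it is outside its own experience and escalate to a human rather than act on an invalid prediction.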

These examples reinforce the need for organisations to interrogate bias both from within their own teams and in their historical datasets (91.5% of Australian IT decision-makers agreed this is important). It is particularly relevant for supervised machine learning, where the humans labelling and curating the training data have a direct opportunity to change the way things are done. It is also a chance to build diverse and inclusive teams that bring varied voices to the table and help the organisation manage better through the crisis.

Use only what you need

Consumers do not want organisations tracking data that they don’t have any use for.

In the COVID-19 crisis, China mobilised its mass surveillance tools, from drones to CCTV cameras, to monitor quarantined people and track the spread of the coronavirus, as have other nations. But privacy experts raised concerns about how governments are using the data, how long they will keep it, and who has access to it.

Even amid this debate, being clear about data collection and use—current or future—is essential. Purposefulness and intention in design are central: data should not be collected or applied without a reasonable use in mind.

Ultimately, what I hope is clear from the study is that it is not the technology itself that will make the difference in business, in government, in this crisis or the next. Rather, it’s the knowledge, creativity and values of the humans who use it.


Mark Randall

Mark is a 20-year veteran of the technology space and now leads the MarTech company WP Engine.