UX AI Transformation: Automation: Redefining User Experiences
This post covers one of four transformations and is part of a larger blog series titled 'From Hype to Reality: A Practical Guide to AI-driven UX in SaaS'. You can find the first post of the series here.
AI Automation in SaaS: Redefining End-User Experiences
From elevators to ATMs, technology has long automated our experiences. But in recent years, SaaS (Software as a Service) has taken automation to a whole new level, thanks to AI.
Everywhere you look, there's AI streamlining tasks:
E-commerce: Shopify's Generative AI redesigns websites based on sales data.
Customer Service: Salesforce uses AI to classify cases and predict customer needs.
Marketing: HubSpot's AI optimizes content creation and customer segmentation.
Sales: LinkedIn Sales Navigator automates lead scoring and provides insights.
HR: Workday automates talent acquisition and payroll processing.
IT Operations: ServiceNow uses AI to categorize and respond to incidents.
Project Management: Asana's AI prioritizes tasks and schedules meetings.
The Impact of AI Automation: Shifting User Perceptions and Concerns
AI automation is changing the game, making experiences not only more efficient and personalized but also more satisfying for users. This research report states that 'there is a direct impact between the automation of tasks carried out by organizations and the satisfaction perceived by the user'.
But with the perks come worries: Will automation cost jobs? Lead to loss of control? Degrade skills? And what about privacy and bias? These concerns aren't baseless; there are plenty of examples of bias in AI. Workday's recruitment tools are the subject of an ongoing court case, Amazon had to shut down its biased AI recruitment tool, and Google had to apologize for what it described as "inaccuracies in some historical image generation depictions" with its Gemini AI tool. And we have all experienced skill degradation, whether it's doing sums without a calculator or navigating a city without Google Maps.
Trust in the black box of automation is shaky. With little or no visibility into how AI decisions and results are made, people see them as less fair than decisions made by humans (as highlighted in the research 'In AI we trust? Perceptions about automated decision-making by artificial intelligence').
Designing with Automation: Addressing User Trust and Control
Here's the challenge: AI needs to explain itself. 'One of the critical and pervasive design issues of AI systems is their explainability… into AI's functions and decisions', as stated in Question-Driven Design Process for Explainable AI User Experiences.
We need Explainable AI (XAI) that users can understand and trust. Designing XAI means treating it as an interaction problem, not just a technical one.
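The idea that explanation belongs in the interface itself can be sketched with a toy example: instead of showing only an opaque score, the system surfaces each factor's contribution so users can see why they got that result. This is a minimal sketch, not a real XAI library; the lead-scoring features and weights below are entirely hypothetical.

```python
def score_with_explanation(features, weights):
    """Return a score plus a per-feature breakdown users can inspect."""
    # Per-feature contribution for a simple linear scoring model.
    contributions = {name: features[name] * weights[name] for name in weights}
    total = sum(contributions.values())
    # List the most influential factors first, so the user sees
    # what drove the decision rather than just the final number.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Score: {total:.2f}"]
    lines += [f"  {name}: {value:+.2f}" for name, value in ranked]
    return total, "\n".join(lines)

# Hypothetical lead-scoring inputs.
features = {"email_opens": 12, "demo_requested": 1, "days_inactive": 30}
weights = {"email_opens": 0.5, "demo_requested": 4.0, "days_inactive": -0.2}

total, explanation = score_with_explanation(features, weights)
print(explanation)
```

Real models are rarely this transparent, which is exactly why XAI techniques exist; but the design principle is the same: the explanation is a first-class part of the user experience, not a technical afterthought.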
We have to keep users in the driver's seat. Just as a navigation app guides you but doesn't decide where you go, AI should empower users without taking away their control.