Two Ways to Create Custom Translated Messaging for HTML Forms

HTML forms come with built-in ways to validate form inputs and other controls against predefined rules such as making an input required, setting min and max constraints on range sliders, or establishing a pattern on an email input to check for proper formatting. Native HTML and browsers give us a lot of “free” features that don’t require fancy scripts to validate form submissions.

And if something doesn’t properly validate? We get “free” error messaging to display to the person using the form.

These are usually good enough to get the job done, but we may need to override these messages if we need more specific error content — especially if we need to handle translated content across browsers. Here’s how that works.

The Constraints API

The Constraints API is used to override the default HTML form validation messages and allows us to define our own error messages. Chris Ferdinandi even covered it here on CSS-Tricks in great detail.

In short, the Constraints API is designed to provide control over input elements. The API can be called at individual input elements or directly from the form element.

For example, let’s say this simple form input is what we’re working with:

<form id="myForm">
  <label for="fullName">Full Name</label>
  <input type="text" id="fullName" name="fullName" placeholder="Enter your full name" required>
  <button id="btn" type="submit">Submit</button>
</form>

We can set our own error message by grabbing the <input> element and calling the setCustomValidity() method on it before passing it a custom message:

const fullNameInput = document.getElementById("fullName");
fullNameInput.setCustomValidity("This is a custom error message");

When the submit button is clicked, the specified message will show up in place of the default one.

A form field labeled "Full Name" with an input box to enter the name, a "Submit" button, and a displayed custom error message: "This is a custom error message" with a warning icon.

Translating custom form validation messages

One major use case for customizing error messages is to better handle internationalization. There are two main ways we can approach this. Other approaches exist, but the two covered here are, I believe, the most straightforward of the bunch.

Method 1: Leverage the browser’s language setting

The first method is using the browser language setting. We can get the language setting from the browser and then check whether or not we support that language. If we support the language, then we can return the translated message. And if we do not support that specific language, we provide a fallback response.

Continuing with the HTML from before, we’ll create a translations object to hold our supported languages (within the script tags). In this case, the object supports English, Swahili, and Arabic.

const translations = {
  en: {
    required: "Please fill this",
    email: "Please enter a valid email address",
  },
  sw: {
    required: "Sehemu hii inahitajika",
    email: "Tafadhali ingiza anwani sahihi ya barua pepe",
  },
  ar: {
    required: "هذه الخانة مطلوبه",
    email: "يرجى إدخال عنوان بريد إلكتروني صالح",
  }
};
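One practical wrinkle with a translations object like this: if a language is missing a key, the UI would display undefined. A small lookup helper with a per-key English fallback guards against that. This is a sketch with a trimmed-down messages object, and the t() helper is hypothetical, not part of the original code:

```javascript
// Hypothetical helper (the `t` name is mine, not from the original):
// look up a message key, falling back to the English copy when a
// translation is missing, so the UI never renders `undefined`.
const messages = {
  en: { required: "Please fill this", email: "Please enter a valid email address" },
  sw: { required: "Sehemu hii inahitajika" }, // no `email` key on purpose
};

const t = (lang, key) =>
  (messages[lang] && messages[lang][key]) || messages.en[key];

console.log(t("sw", "required")); // "Sehemu hii inahitajika"
console.log(t("sw", "email"));    // "Please enter a valid email address" (key fallback)
console.log(t("de", "required")); // "Please fill this" (unsupported language)
```

The short-circuit `||` keeps the helper to one expression, which is plenty for flat message objects like these.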

Next, we need to extract the object’s labels and match them against the browser’s language.

// the supported translations object
const supportedLangs = Object.keys(translations);
const getUserLang = () => {
  // split to get the first part; the browser usually reports something like "en-US"
  const browserLang = navigator.language.split("-")[0];
  return supportedLangs.includes(browserLang) ? browserLang : "en";
};

// translated error messages
const errorMsgs = translations[getUserLang()];
// form element
const form = document.getElementById("myForm");
// button element
const btn = document.getElementById("btn");
// name input
const fullNameInput = document.getElementById("fullName");
// wrapper for error messaging
const errorSpan = document.getElementById("error-span");

// when the button is clicked…
btn.addEventListener("click", function (event) {
  // if the name input is empty…
  if (!fullNameInput.value) {
    // …throw an error
    fullNameInput.setCustomValidity(errorMsgs.required);
    // set an .error class on the input for styling
    fullNameInput.classList.add("error");
  } else {
    // clear the custom message so the form can submit normally
    fullNameInput.setCustomValidity("");
    fullNameInput.classList.remove("error");
  }
});

Here the getUserLang() function does the comparison and returns the supported browser language or a fallback in English. Run the example and the custom error message should display when the button is clicked.
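Because getUserLang() reads from navigator, it’s hard to exercise outside a browser. The same matching logic can be pulled into a pure function; this is a sketch (the resolveLang name and signature are mine) that behaves the same way:

```javascript
// Pure version of the language-matching logic, testable without a browser.
// `navigatorLanguage` stands in for navigator.language (e.g. "en-US").
const resolveLang = (navigatorLanguage, supportedLangs, fallback = "en") => {
  const base = navigatorLanguage.split("-")[0];
  return supportedLangs.includes(base) ? base : fallback;
};

console.log(resolveLang("sw-KE", ["en", "sw", "ar"])); // "sw"
console.log(resolveLang("en-US", ["en", "sw", "ar"])); // "en"
console.log(resolveLang("fr-FR", ["en", "sw", "ar"])); // "en" (fallback)
```

Injecting the language string as a parameter instead of reading navigator.language directly is what makes the function unit-testable.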

A form field labeled "Full Name" with a placeholder text "Enter your full name" is outlined in red with an error message saying "Please fill this" below it. There is also a "Submit" button next to the field.

Method 2: Setting a preferred language in local storage

A second way to go about this is with user-defined language settings in localStorage. In other words, we ask the person to first select their preferred language from a <select> element containing selectable <option> tags. Once a selection is made, we save their preference to localStorage so we can reference it.

<label for="languageSelect">Choose Language:</label>
<select id="languageSelect">
  <option value="en">English</option>
  <option value="sw">Swahili</option>
  <option value="ar">Arabic</option>
</select>

<form id="myForm">
  <label for="fullName">Full Name</label>
  <input type="text" id="fullName" name="fullName" placeholder="Enter your full name" required>
  <span id="error-span"></span>
  <button id="btn" type="submit">Submit</button>
</form>

With the <select> in place, we can create a script that checks localStorage and uses the saved preference to return a translated custom validation message:

// the <select> element
const languageSelect = document.getElementById("languageSelect");
// the <form> element
const form = document.getElementById("myForm");
// the button element
const btn = document.getElementById("btn");
// the name input
const fullNameInput = document.getElementById("fullName");
const errorSpan = document.getElementById("error-span");
// translated custom messages
const translations = {
  en: {
    required: "Please fill this",
    email: "Please enter a valid email address",
  },
  sw: {
    required: "Sehemu hii inahitajika",
    email: "Tafadhali ingiza anwani sahihi ya barua pepe",
  },
  ar: {
    required: "هذه الخانة مطلوبه",
    email: "يرجى إدخال عنوان بريد إلكتروني صالح",
  }
};
// the supported translations object
const supportedLangs = Object.keys(translations);
// get the language preferences from localStorage
const getUserLang = () => {
  const savedLang = localStorage.getItem("preferredLanguage");
  if (savedLang) return savedLang;

  // provide a fallback message
  const browserLang = navigator.language.split('-')[0];
  return supportedLangs.includes(browserLang) ? browserLang : 'en';
};

// set initial language
languageSelect.value = getUserLang();

// update local storage when user selects a new language
languageSelect.addEventListener("change", () => {
  localStorage.setItem("preferredLanguage", languageSelect.value);
});
// on button click
btn.addEventListener("click", function (event) {
  // grab the translations for the selected language
  const errorMsgs = translations[languageSelect.value];
  // ...and if there is no value in the name input
  if (!fullNameInput.value) {
    // ...trigger the translated custom validation message
    fullNameInput.setCustomValidity(errorMsgs.required);
    // set an .error class on the input for styling
    fullNameInput.classList.add("error");
  } else {
    // clear the custom message so the form can submit normally
    fullNameInput.setCustomValidity("");
    fullNameInput.classList.remove("error");
  }
});

The script sets the &lt;select&gt; element’s initial value from the saved preference (falling back to the browser language), writes each new selection to localStorage on the change event fired by the &lt;select&gt; element, and then reads that preference back when validating the form. The browser-language fallback is kept throughout, so first-time visitors still get a sensible default.
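The resolution order described above (saved preference first, then browser language, then English) can also be expressed as one pure function with an injected storage object, which makes it runnable outside the browser. A sketch with a hypothetical getPreferredLang() helper; the extra check that the saved value is still supported is my addition, guarding against stale localStorage entries:

```javascript
// Resolution order: saved preference → browser language → "en".
// `storage` is injected so the logic runs without window.localStorage.
const getPreferredLang = (storage, navigatorLanguage, supportedLangs) => {
  const saved = storage.getItem("preferredLanguage");
  // extra guard (not in the original): ignore a saved value we no longer support
  if (saved && supportedLangs.includes(saved)) return saved;
  const base = navigatorLanguage.split("-")[0];
  return supportedLangs.includes(base) ? base : "en";
};

// a minimal stand-in for window.localStorage
const fakeStorage = (data = {}) => ({ getItem: (key) => data[key] ?? null });

console.log(getPreferredLang(fakeStorage({ preferredLanguage: "ar" }), "en-US", ["en", "sw", "ar"])); // "ar"
console.log(getPreferredLang(fakeStorage(), "sw-KE", ["en", "sw", "ar"])); // "sw"
console.log(getPreferredLang(fakeStorage(), "fr-FR", ["en", "sw", "ar"])); // "en"
```

In the real page you’d call it as `getPreferredLang(localStorage, navigator.language, supportedLangs)`.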

A web form with a language selector set to Arabic, a text field for "Full Name," a "Submit" button, and an error message in Arabic that translates to "This field is required."

If we open up DevTools, we’ll see that the person’s preferred value is available in localStorage when a language preference is selected.

A screenshot of the Application tab in the Chrome DevTools interface. It shows the Storage section with "Local storage" for "http://127.0.0.1:5500" highlighted, and a key-value pair where the key is "preferredLanguage" and the value is "ar".

Wrapping up

And with that, we’re done! I hope this quick little tip helps out. I know I wish I had it a while back when I was figuring out how to use the Constraints API. It’s one of those things on the web you know is possible, but exactly how can be tough to find.

Two Ways to Create Custom Translated Messaging for HTML Forms originally published on CSS-Tricks, which is part of the DigitalOcean family.


Why Anticipatory Design Isn’t Working For Businesses

Consider the early days of the internet, when websites like NBC News and Amazon cluttered their pages with flashing banners and labyrinthine menus. In the early 2000s, Steve Krug’s book Don’t Make Me Think arrived like a lighthouse in a storm, advocating for simplicity and user-centric design.

Today’s digital world is flooded with choices, information, and data, which is both exciting and overwhelming. Unlike Krug’s time, today, the problem isn’t interaction complexity but opacity. AI-powered solutions often lack transparency and explainability, raising concerns about user trust and accountability. The era of click-and-command is fading, giving way to a more seamless and intelligent relationship between humans and machines.

Expanding on Krug’s Call for Clarity: The Pillars of Anticipatory Design

Krug’s emphasis on clarity in design is more relevant than ever. In anticipatory design, clarity is not just about simplicity or ease of use — it’s about transparency and accountability. These two pillars are crucial but often missing as businesses navigate this new paradigm. Users today find themselves in a digital landscape that is not only confusing but increasingly intrusive. AI predicts their desires based on past behavior but rarely explains how these predictions are made, leading to growing mistrust.

Transparency is the foundation of clarity. It involves openly communicating how AI-driven decisions are made, what data is being collected, and how it is being used to anticipate needs. By demystifying these processes, designers can alleviate user concerns about privacy and control, thereby building trust.

Accountability complements transparency by ensuring that anticipatory systems are designed with ethical considerations in mind. This means creating mechanisms for users to understand, question, and override automated decisions if needed. When users feel that the system is accountable to them, their trust in the technology — and the brand — deepens.

What Makes a Service Anticipatory?

Imagine AI as a waiter at a restaurant. Without AI, the waiter waits for you to interact and place your order. But with anticipatory design powered by AI and ML, the waiter can analyze your past orders (historical data) and current behavior (contextual data), perhaps noticing that you always start with a glass of sparkling water, and have it ready before you ask.

This proactive approach has evolved since the late 1990s, with early examples like Amazon’s recommendation engine and TiVo’s predictive recording. These pioneering services demonstrated the potential of predictive analytics and ML to create personalized, seamless user experiences.

Amazon’s Recommendation Engine (Late 1990s)

Amazon was a pioneer in using data to predict and suggest products to customers, setting the standard for personalized experiences in e-commerce.

TiVo (1999)

TiVo’s ability to learn users’ viewing habits and automatically record shows marked an early step toward predictive, personalized entertainment.

Netflix’s Recommendation System (2006)

In 2006, Netflix began offering personalized movie recommendations based on user ratings and viewing history, helping to popularize the idea of anticipatory design in the digital entertainment space.

How Businesses Can Achieve Anticipatory Design

Designing for anticipation is designing for a future that is not here yet but has already started moving toward us.

Designing for anticipation involves more than reacting to current trends; it requires businesses to plan strategically for future user needs. Two critical concepts in this process are forecasting and backcasting.

  • Forecasting analyzes past trends and data to predict future outcomes, helping businesses anticipate user needs.
  • Backcasting starts with a desired future outcome and works backward to identify the steps needed to achieve that goal.

Think of it like planning a dream vacation. Forecasting would involve looking at your past trips to guess where you might go next. But backcasting lets you pick your ideal destination first, then plan the perfect itinerary to get you there.

Forecasting: A Core Concept for Future-Oriented Design

This method helps in planning and decision-making based on probable future scenarios. Consider Netflix, which uses forecasting to analyze viewers’ past viewing habits and predict what they might want to watch next. By leveraging data from millions of users, Netflix can anticipate individual preferences and serve personalized recommendations that keep users engaged and satisfied.

Backcasting: Planning From the Desired Future

Backcasting takes a different approach. Instead of using data to predict the future, it starts with defining a desired future outcome — a clear user intent. The process then works backward to identify the steps needed to achieve that goal. This goal-oriented approach crafts an experience that actively guides users toward their desired future state.

For instance, a financial planning app might start with a user’s long-term financial goal, such as saving for retirement, and then design an experience that guides the user through each step necessary to reach that goal, from budgeting tips to investment recommendations.

Integrating Forecasting and Backcasting In Anticipatory Design

The true power of anticipatory design emerges when businesses efficiently integrate both forecasting and backcasting into their design processes.

For example, Tesla’s approach to electric vehicles exemplifies this integration. By forecasting market trends and user preferences, Tesla can introduce features that appeal to users today. Simultaneously, by backcasting from a vision of a sustainable future, Tesla designs its vehicles and infrastructure to guide society toward a world where electric cars are the norm and carbon emissions are significantly reduced.

Over-Promising and Under-Delivering: The Pitfalls of Anticipatory Design

As businesses increasingly adopt anticipatory design, the integration of forecasting and backcasting becomes essential. Forecasting allows businesses to predict and respond to immediate user needs, while backcasting ensures these responses align with long-term goals. Despite its potential, anticipatory design often fails in execution, leaving few examples of success.

Over the past decade, I’ve observed and documented the rise and fall of several ambitious anticipatory design ventures. Among them, three — Digit, LifeBEAM Vi Sense Headphones, and Mint — highlight the challenges of this approach.

Digit: Struggling with Contextual Understanding

Digit aimed to simplify personal finance with algorithms that automatically saved money based on user spending. However, the service often missed the mark, lacking the contextual awareness necessary to accurately assess users’ real-time financial situations. This led to unexpected withdrawals, frustrating users, especially those living paycheck to paycheck. The result was a breakdown in trust, with the service feeling more intrusive than supportive.

LifeBEAM Vi Sense Headphones: Complexity and User Experience Challenges

LifeBEAM Vi Sense Headphones was marketed as an AI-driven fitness coach, promising personalized guidance during workouts. In practice, the AI struggled to deliver tailored coaching, offering generic and unresponsive advice. As a result, users found the experience difficult to navigate, ultimately limiting the product’s appeal and effectiveness. This disconnection between the promised personalized experience and the actual user experience left many disappointed.

Mint: Misalignment with User Goals

Mint aimed to empower users to manage their finances by providing automated budgeting tools and financial advice. While the service had the potential to anticipate user needs, users often found that the suggestions were not tailored to their unique financial situations, resulting in generic advice that did not align with their personal goals.

The lack of personalized, actionable steps led to a mismatch between user expectations and service delivery. This misalignment caused some users to disengage, feeling that Mint was not fully attuned to their unique financial journeys.

The Risks of Over-promising and Under-Delivering

The stories of Digit, LifeBEAM Vi Sense, and Mint underscore a common pitfall: over-promising and under-delivering. These services focused too much on predictive power and not enough on user experience. When anticipatory systems fail to consider individual nuances, they breed frustration rather than satisfaction, highlighting the importance of aligning design with human experience.

Digit’s approach to automated savings, for instance, became problematic when users found its decisions opaque and unpredictable. Similarly, LifeBEAM’s Vi Sense headphones struggled to meet diverse user needs, while Mint’s rigid tools failed to offer the personalized insights users expected. These examples illustrate the delicate balance anticipatory design must strike between proactive assistance and user control.

Failure to Evolve with User Needs

Many anticipatory services rely heavily on data-driven forecasting, but predictions can fall short without understanding the broader user context. Mint initially provided value with basic budgeting tools but failed to evolve with users’ growing needs for more sophisticated financial advice. Digit, too, struggled to adapt to different financial habits, leading to dissatisfaction and limited success.

Complexity and Usability Issues

Balancing the complexity of predictive systems with usability and transparency is a key challenge in anticipatory design.

When systems become overly complex, as seen with LifeBEAM Vi Sense headphones, users may find them difficult to navigate or control, compromising trust and engagement. Mint’s generic recommendations, born from a failure to align immediate user needs with long-term goals, further illustrate the risks of complexity without clarity.

Privacy and Trust Issues

Trust is critical in anticipatory design, particularly in services handling sensitive data like finance or health. Digit and Mint both encountered trust issues as users grew skeptical of how decisions were made and whether these services truly had their best interests in mind. Without clear communication and control, even the most sophisticated systems risk alienating users.

Inadequate Handling of Edge Cases and Unpredictable Scenarios

While forecasting and backcasting work well for common scenarios, they can struggle with edge cases or unpredictable user behaviors. If an anticipatory service can’t handle these effectively, it risks delivering a poor user experience and, in the worst case, harming the user.

LifeBEAM Vi Sense headphones struggled when users deviated from expected fitness routines, offering a one-size-fits-all experience that failed to adapt to individual needs. This highlights the importance of allowing users control, even when a system proactively assists them.

Designing for Anticipatory Experiences
Anticipatory design should empower users to achieve their goals, not just automate tasks.

We can follow a layered approach to plan a service that evolves according to user actions and their explicit, ever-evolving intent.

But how do we design for intent without misaligning anticipation and user control or mismatching user expectations and service delivery?

At the core of this approach is intent — the primary purpose or goal that the design must achieve. Surrounding this are workflows, which represent the structured tasks to achieve the intent. Finally, algorithms analyze user data and optimize these workflows.

For instance, Thrive (see the image below), a digital wellness platform, aligns algorithms and workflows with the core intent of improving well-being. By anticipating user needs and offering personalized programs, Thrive helps users achieve sustained behavior change.

It perfectly exemplifies the three-layered concentric representation for achieving behavior change through anticipatory design:

1. Innermost layer: Intent

Improve overall well-being: Thrive’s core intent is to help users achieve a healthier and more fulfilling life. This encompasses aspects like managing stress, improving sleep quality, and boosting energy levels.

2. Middle layer: Workflows

Personalized programs and support: Thrive uses user data (sleep patterns, activity levels, mood) to create programs tailored to their specific needs and goals. These programs involve various workflows, such as:

  • Guided meditations and breathing exercises to manage stress and anxiety.
  • Personalized sleep routines aimed at improving sleep quality.
  • Educational content and coaching tips to promote healthy habits and lifestyle changes.

3. Outermost layer: Algorithms

Data analysis and personalized recommendations: Thrive utilizes algorithms to analyze user data and generate actionable insights. These algorithms perform tasks like the following:

  • Identify patterns in sleep, activity, and mood to understand user challenges.
  • Predict user behavior to recommend interventions that address potential issues.
  • Optimize program recommendations based on user progress and data analysis.

By aligning algorithms and workflows with the core intent of improving well-being, Thrive provides a personalized and proactive approach to behavior change. Here’s how it benefits users:

  • Sustained behavior change: Personalized programs and ongoing support empower users to develop healthy habits for the long term.
  • Data-driven insights: User data analysis helps users gain valuable insights into their well-being and identify areas for improvement.
  • Proactive support: Anticipates potential issues and recommends interventions before problems arise.

The Future of Anticipatory Design: Combining Anticipation with Foresight

Anticipatory design is inherently future-oriented, making it both appealing and challenging. To succeed, businesses must combine anticipation — predicting future needs — with foresight, a systematic approach to analyzing and preparing for future changes.

Foresight involves considering alternative future scenarios and making informed decisions to navigate toward desired outcomes. For example, Digit and Mint struggled because they didn’t adequately handle edge cases or unpredictable scenarios, a failure in their foresight strategy (see an image below).

As mentioned, while forecasting and backcasting work well for common scenarios, they can struggle with edge cases or unpredictable user behaviors. If anticipatory design relegates foresight to second place, the business will fail to account for and prepare for emerging trends and disruptive changes. Strategic foresight helps companies prepare for the future and develop strategies to address possible challenges and opportunities.

The foresight process generally involves interrelated activities, including data research, trend analysis, scenario planning, and impact assessment. The ultimate goal is to gain a broader and deeper understanding of the future in order to make more informed, strategic decisions in the design process and to foresee possible frictions and pitfalls in the user experience.

Actionable Insights for Designers

  • Enhance contextual awareness
    Work with data scientists and engineers to ensure that anticipatory systems can understand and respond to the full context of user needs, not just historical data. Plan for pitfalls so you can design safety measures that keep the user in control of the system.
  • Maintain user control
    Provide users with options to customize or override automated decisions, ensuring they feel in control of their experiences.
  • Align short-term predictions with long-term goals
    Use forecasting and backcasting to create a balanced approach that meets immediate needs while guiding users toward their long-term objectives.

Proposing an Anticipatory Design Framework

Predicting the future is no easy task. However, design can borrow foresight techniques to imagine, anticipate, and shape a future where technology seamlessly integrates with users’ evolving needs. To effectively implement anticipatory design, it’s essential to balance human control with AI automation. Here’s a 3-step approach to integrating future thinking into your workflow:

  1. Anticipate Directions of Change
    Identify major trends shaping the future.
  2. Imagine Alternative Scenarios
    Explore potential futures to guide impactful design decisions.
  3. Shape Our Choices
    Leverage these scenarios to align design with user needs and long-term goals.

This proposed framework (see an image above) aims to integrate forecasting and backcasting while emphasizing user intent, transparency, and continuous improvement, ensuring that businesses create experiences that are both predictive and deeply aligned with user needs.

Step 1: Anticipate Directions of Change

Objective: Identify the major trends and forces shaping the future landscape.

Components:

1. Understand the User’s Intent

  • User Research: Conduct in-depth user research through interviews, surveys, and observations to uncover user goals, motivations, pain points, and long-term aspirations or Jobs-to-be-Done (JTBD). This foundational step helps clearly define the user’s intent.
  • Persona Development: Develop detailed user personas that represent the target audience, including their long-term goals and desired outcomes. Prioritize understanding how the service can adapt in real-time to changing user needs, offering recommendations, or taking actions aligned with the persona’s current context.

2. Forecasting: Predicting Near-Term User Needs

  • Data Collection and Analysis: Collaborate closely with data scientists and data engineers to analyze historical data (past interactions), user behavior, and external factors. This collaboration ensures that predictive analytics enhance overall user experience, allowing designers to better understand the implications of data on user behaviors.
  • Predictive Modeling: Implement continuous learning algorithms that refine predictions over time. Regularly assess how these models evolve, adapting to users’ changing needs and circumstances.
  • Explore the Delphi Method: This is a structured communication technique that gathers expert opinions to reach a consensus on future developments. It’s particularly useful for exploring complex issues with uncertain outcomes. Use the Delphi Method to gather insights from industry experts, user researchers, and stakeholders about future user needs and the best strategies to meet those needs. The consensus achieved can help in clearly defining the long-term goals and desired outcomes.

Activities:

  • Conduct interviews and workshops with experts using the Delphi Method to validate key trends.
  • Analyze data and trends to forecast future directions.

Step 2: Imagine Alternative Scenarios

Objective: Explore a range of potential futures based on these changing directions.

Components:

1. Scenario Planning

  • Scenario Development: Create detailed, plausible future scenarios based on various external factors, such as technological advancements, social trends, and economic changes. Develop multiple scenarios that represent different possible user contexts and their impact on user needs.
  • Scenario Analysis: From these scenarios, you can outline the long-term goals that users might have in each scenario and design services that anticipate and address these needs. Assess how these scenarios impact user needs and experiences.

2. Backcasting: Designing from the Desired Future

  • Define Desired Outcomes: Clearly outline the long-term goals or future states that users aim to achieve. Use backcasting to reduce cognitive load by designing a service that anticipates future needs, streamlining user interactions, and minimizing decision-making efforts.
    • Use Visioning Planning: This is a creative process that involves imagining the ideal future state you want to achieve. It helps in setting clear, long-term goals by focusing on the desired outcomes rather than current constraints. Facilitate workshops or brainstorming sessions with stakeholders to co-create a vision of the future. Define what success looks like from the user’s perspective and use this vision to guide the backcasting process.
  • Identify Steps to Reach Goals: Reverse-engineer the user journey by starting from the desired future state and working backward. Identify the necessary steps and milestones and ensure these are communicated transparently to users, allowing them control over their experience.
  • Create Roadmaps: Develop detailed roadmaps that outline the sequence of actions needed to transition from the current state to the desired future state. These roadmaps should anticipate obstacles, respect privacy, and avoid manipulative behaviors, empowering users rather than overwhelming them.

Activities:

  • Develop and analyze alternative scenarios to explore various potential futures.
  • Use backcasting to create actionable roadmaps from these scenarios, ensuring they align with long-term goals.

Step 3: Shape Our Choices

Objective: Leverage these scenarios to spark new ideas and guide impactful design decisions.

Components:

1. Integrate into the Human-Centered Design Process

  • Iterative Design with Forecasting and Backcasting: Embed insights from forecasting and backcasting into every stage of the design process. Use these insights to inform user research, prototype development, and usability testing, ensuring that solutions address both predicted future needs and desired outcomes. Continuously refine designs based on user feedback.
  • Agile Methodologies: Adopt agile development practices to remain flexible and responsive. Ensure that the service continuously learns from user interactions and feedback, refining its predictions and improving its ability to anticipate needs.

2. Implement and Monitor: Ensuring Ongoing Relevance

  • User Feedback Loops: Establish continuous feedback mechanisms to refine predictive models and workflows. Use this feedback to adjust forecasts and backcasted plans as necessary, keeping the design aligned with evolving user expectations.
  • Automation Tools: Collaborate with data scientists and engineers to deploy automation tools that execute workflows and monitor progress toward goals. These tools should adapt based on new data, evolving alongside user behavior and emerging trends.
  • Performance Metrics: Define key performance indicators (KPIs) to measure the effectiveness, accuracy, and quality of the anticipatory experience. Regularly review these metrics to ensure that the system remains aligned with intended outcomes.
  • Continuous Improvement: Maintain a cycle of continuous improvement where the system learns from each interaction, refining its predictions and recommendations over time to stay relevant and useful.
    • Use Trend Analysis: Identify and analyze patterns in data over time to predict future developments, such as where user behaviors, technologies, and market conditions are heading. Use the emerging trends you find to inform desired outcomes, highlighting what users might require or expect from a service as those trends evolve.

Activities:

  • Implement design solutions based on scenario insights and iterate based on user feedback.
  • Regularly review and adjust designs using performance metrics and continuous improvement practices.
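The trend analysis mentioned in the components above boils down to fitting a line to historical observations and projecting it forward. As a rough sketch (the metric and numbers here are made up for illustration), a least-squares trend over a time series looks like this:

```javascript
// Least-squares linear trend over evenly spaced observations:
// returns the per-period slope and a simple n-periods-ahead forecast.
function linearTrend(values) {
  const n = values.length;
  const xMean = (n - 1) / 2;
  const yMean = values.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - xMean) * (values[i] - yMean);
    den += (i - xMean) ** 2;
  }
  const slope = num / den;
  const intercept = yMean - slope * xMean;
  return {
    slope,
    forecast: (periodsAhead) => intercept + slope * (n - 1 + periodsAhead),
  };
}

// Hypothetical data: weekly active users of a feature.
const trend = linearTrend([100, 110, 120, 130]);
console.log(trend.slope);       // 10 users gained per week
console.log(trend.forecast(2)); // 150 projected two weeks out
```

A real forecasting setup would use far richer models, but even a simple slope like this is enough to flag whether a behavior is growing or fading, which is what feeds the desired outcomes described above.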

Conclusion: Navigating the Future of Anticipatory Design

Anticipatory design holds immense potential to revolutionize user experiences by predicting and fulfilling needs before they are even articulated. However, as seen in the examples discussed, the gap between expectation and execution can lead to user dissatisfaction and erode trust.

To navigate the future of anticipatory design successfully, businesses must prioritize transparency, accountability, and user empowerment. By enhancing contextual awareness, maintaining user control, and aligning short-term predictions with long-term goals, companies can create experiences that are not only innovative but also deeply resonant with their users’ needs.

Moreover, combining anticipation with foresight allows businesses to prepare for a range of future scenarios, ensuring that their designs remain relevant and effective even as circumstances change. The proposed 3-step framework — anticipating directions of change, imagining alternative scenarios, and shaping our choices — provides a practical roadmap for integrating these principles into the design process.

As we move forward, the challenge will be to balance the power of AI with the human need for clarity, control, and trust. By doing so, businesses can fulfill the promise of anticipatory design, creating products and services that are not only efficient and personalized but also ethical and user-centric.

In the end,

The success of anticipatory design will depend on its ability to enhance, rather than replace, the human experience.

It is a tool to empower users, not to dictate their choices. When done right, anticipatory design can lead to a future where technology seamlessly integrates with our lives, making everyday experiences simpler, more intuitive, and ultimately more satisfying.



from Articles on Smashing Magazine — For Web Designers And Developers https://ift.tt/7Wt509u

CSSWG Minutes Telecon (2024-08-21)


View Transitions are one of the most awesome features CSS has shipped in recent times. Its title is self-explanatory: transitions between views are possible with just CSS, even across pages of the same origin! What’s more interesting is its subtext, since there is no need to create a complex SPA with routing just to get those eye-catching transitions between pages.

What also makes View Transitions amazing is how quickly the feature has gone from its first public draft back in October 2022 to shipping in browsers and even in some production contexts like Airbnb — something that doesn’t happen to every feature coming to CSS, so it shows how rightfully hyped it is.

That said, the API is still new, so it’s bound to have some edge cases or bugs being solved as they come. An interesting way to keep up with the latest developments on CSS features like View Transitions is directly from the CSSWG Telecon Minutes (you can subscribe to them at W3C.org).


View Transitions were the primary focus at the August 21 meeting, which had a long agenda to address. It started with a light bug in Chrome regarding the navigation descriptor, used in every cross-document view transition to opt in to a view transition.

@view-transition {
  navigation: auto | none;
}

Currently, the specs define navigation as an enum type (a set of predefined types), but Blink takes it as a CSSOMString (any string). While this was initially filed as a bug, it’s interesting to see the conversation it sparked in the GitHub issue:

Actually I think this is debatable, we don’t currently have at rules that use enums in that way, and usually CSSOM doesn’t try to be fully type-safe in this way. e.g. if we add new navigation types and some browsers don’t support them, this would interpret them as invalid rules rather than rules with empty navigation.

The last statement may not look exciting, but it opens the possibility of new navigation types beyond auto and none, so think about what a different type of view transition could do.

And then onto the CSSWG Minutes:

emilio: Is it useful to differentiate between missing auto or none?

noamr: Yes, very important for forward compat. If one browser adds another type that others don’t have yet, then we want to see that there’s a difference between none or invalid

emilio: But then you get auto behavior?

noamr: No, the unknown value is not read for purpose of nav. It’s a vt role without navigation descriptor and no initial value Similar to having invalid rule

So in future implementations, an invalid navigation descriptor will be ignored, but exactly how is still under debate:

ntim: How is it different from navigation none?

noamr: Auto vs invalid and then auto vs none. None would supersede auto; it has a meaning to not do a nav while invalid is a no-op.

ntim: So none cancels the nav from the prev doc?

noamr: Yes

The none value is intended to cancel any view transition from a previous document, while an invalid or empty string will be ignored. In the end, it was resolved to return an empty string when the descriptor is missing or invalid.

RESOLVED: navigation is a CSSOMString, it returns an empty string when navigation descriptor is missing or invalid
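If I’m reading the resolution right, the serialization behaves like this little normalizer (a model of the resolved behavior, not a real API):

```javascript
// Model of the resolved CSSOM behavior for the navigation descriptor:
// known values round-trip, anything missing or invalid serializes to "".
const KNOWN_NAVIGATION_TYPES = new Set(['auto', 'none']);

function serializeNavigation(descriptor) {
  return KNOWN_NAVIGATION_TYPES.has(descriptor) ? descriptor : '';
}

console.log(serializeNavigation('auto'));    // "auto"
console.log(serializeNavigation('none'));    // "none"
console.log(serializeNavigation('swipe'));   // "" (hypothetical future type)
console.log(serializeNavigation(undefined)); // "" (descriptor missing)
```

The forward-compat upside is in the last two lines: as I understand the minutes, a browser that doesn’t know a newer navigation type still exposes the rule, just with an empty navigation, instead of dropping it as invalid.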

Onto the next item on the agenda: the view-transition-group property and whether its values should have an order of precedence. Not to be confused with the pseudo-element of the same name (::view-transition-group), the view-transition-group property was previously resolved to be added at some point in the future. As of right now, the tree of pseudo-elements created by view transitions is flattened:

::view-transition
├─ ::view-transition-group(name-1)
│  └─ ::view-transition-image-pair(name-1)
│     ├─ ::view-transition-old(name-1)
│     └─ ::view-transition-new(name-1)
├─ ::view-transition-group(name-2)
│  └─ ::view-transition-image-pair(name-2)
│     ├─ ::view-transition-old(name-2)
│     └─ ::view-transition-new(name-2)
│ /* and so on... */

However, we may want to nest transition groups into each other for more complex transitions, resulting in a tree with ::view-transition-group pseudo-elements nested inside other ::view-transition-group pseudo-elements, like the following:

::view-transition
├─ ::view-transition-group(container-a)
│  ├─ ::view-transition-group(name-1)
│  └─ ::view-transition-group(name-2)
└─ ::view-transition-group(container-b)
    ├─ ::view-transition-group(name-1)
    └─ ::view-transition-group(name-2)

So the view-transition-group property was born, or to be precise, it will be at some point in time. It might look something close to the following syntax, if I’m following along correctly:

view-transition-group: normal | <ident> | nearest | contain;

  • normal is contained by the root ::view-transition (current behavior).
  • <ident> is contained by an element with a matching view-transition-name.
  • nearest is contained by its nearest ancestor with a view-transition-name.
  • contain contains all of its descendants without changing the element’s position in the tree.

The values seem simple, but they can conflict with each other. Imagine the following nested structure:

A  /* view-transition-name: foo */
└─ B /* view-transition-group: contain */
   └─ C /* view-transition-group: foo */

Here, B wants to contain C, but C explicitly says it wants to be contained by A. So, which wins?

vmpstr: Regarding nesting with view-transition-group, it takes keywords or ident. Contain says that all of the view-transition descendants are nested. Ident says same thing but also element itself will nest on the thing with that ident. Question is what happens if an element has a view-transition-group with a custom ident and also has an ancestor set to contain – where do we nest this? the contain one or the one with the ident? noam and I agree that ident should probably win, seems more specific.

<khush>: +1

The conversation continued on whether there should be a contain keyword that wins over <ident>:

emilio: Agree that this seems desirable. Is there any use case for actually enforcing the containment? Do we need a strong contain? I don’t think so?

astearns: Somewhere along the line of adding a new keyword such as contain-idents?

<fantasai>: “contain-all”

emilio: Yeah, like sth to contain everything but needs a use case

But for now, it was settled that <ident> takes precedence over contain:

PROPOSED RESOLUTION: idents take precedence over contain in view-transition-group

astearns: objections or concerns or questions?

<fantasai>: just as they do for <ident> values. (which also apply containment, but only to ‘normal’ elements)

RESOLVED: idents take precedence over contain in view-transition-group
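If I’m following the resolution correctly, the precedence for the earlier A/B/C structure can be modeled as a toy tree walk (the node shape and function here are hypothetical, nothing like a real API):

```javascript
// Toy model of the resolved precedence for the proposed
// view-transition-group property. Node shape is made up:
// { name, viewTransitionName, viewTransitionGroup, parent }
const KEYWORDS = new Set(['normal', 'nearest', 'contain']);

function containingGroup(node) {
  const value = node.viewTransitionGroup;
  // An <ident> wins over any ancestor's `contain`: the element nests
  // in the nearest ancestor whose view-transition-name matches.
  if (value && !KEYWORDS.has(value)) {
    for (let a = node.parent; a; a = a.parent) {
      if (a.viewTransitionName === value) return a;
    }
  }
  // Otherwise, an ancestor with `contain` captures its descendants.
  for (let a = node.parent; a; a = a.parent) {
    if (a.viewTransitionGroup === 'contain') return a;
  }
  return null; // contained by the root ::view-transition
}

// The A → B → C structure from the example above:
const A = { name: 'A', viewTransitionName: 'foo', parent: null };
const B = { name: 'B', viewTransitionGroup: 'contain', parent: A };
const C = { name: 'C', viewTransitionGroup: 'foo', parent: B };

console.log(containingGroup(C).name); // "A": the ident beats B's contain
```

Remove C’s ident and the walk falls through to B’s contain, which matches the resolved behavior.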

Lastly, the main course of the discussion: whether or not some properties should be captured as styles instead of as a snapshot. Right now, view transitions work by taking a snapshot of the “old” view and transitioning to the “new” page. However, not everything is baked into the snapshot; some relevant properties are saved so they can be animated more carefully.

From the spec:

However, properties like mix-blend-mode which define how the element draws when it is embedded can’t be applied to its image. Such properties are applied to the element’s corresponding ::view-transition-group() pseudo-element, which is meant to generate a box equivalent to the element.

In short, some properties that depend on the element’s container are applied to the ::view-transition-group rather than ::view-transition-image-pair(). Since, in the future, we could nest groups inside groups, how we capture those properties has a lot more nuance.

noamr: Biggest issue we want to discuss today, how we capture and display nested components but also applies to non-nested view transition elements derived from the nested conversation. When we nest groups, some CSS properties that were previously not that important to capture are now very important because otherwise it looks broken. Two groups: tree effects like opacity, mask, clip-path, filters, perspective, these apply to entire tree; borders and border-radius because once you have a hierarchy of groups, and you have overflow then the overflow affects the origin where you draw the borders and shadows these also paint after backgrounds

noamr: We see three options.

  1. Change everything by default and don’t just capture snapshot but add more things that get captured as ?? instead of a flat snapshot (opacity, filter, transform, bg borders). Will change things because these styles are part of the group but have changed things before (but this is different as it changes observable computed style)
  2. Add new property view-transition-style or view-transition-capture-mode. Fan of the first as it reminds me of transform-style.
  3. To have this new property but give it auto value. If group contains other groups when you get the new mode so users using nesting get the new mode but can have a property to change the behavior If people want the old crossfade behavior they can always do so by regular DOM nesting

Regarding the first option about changing how all view transitions capture properties by default:

bramus: Yes, this would be breaking, but it would break in a good way. Regarding the name of the property, one of the values proposed is cross-fade, which is a value I wouldn’t recommend because authors can change the animation, e.g. to scale-up/ scale-down, etc. I would suggest a different name for the property, view-transition-capture-mode: flat | layered

Of course, changing how view transitions work is a dilemma to really think about:

noamr: There is some sentiment to 1 but I feel people need to think about this more?

astearns: Could resolve on option 1 and have blink try it out to see how much breakage there is and if its manageable then we’re good and come back to this. Would be resolving one 1 unless it’s not possible. I’d rather not define a new capture mode without a switch

…so the best course of action was to gather more data and decide:

khush: When we prototype we’ll find edge cases. We will take those back to the WG in that case. Want to get this right

noamr: It involves a lot of CSS props. Some of them are captured and not painted, while others are painted. The ones specifically would all be specified

After some more discussion, it was resolved to come back with compat data from browsers. You can read the full minutes at W3C.org; I bet there are a lot of interesting things I missed, so I encourage you to read them.

RESOLVED: Change the capture mode for all view-transitions and specify how each property is affected by this capture mode change

RESOLVED: Describe categorization of properties in the Module Interactions sections of each spec

RESOLVED: Blink will experiment and come back with changes needed if there are compat concerns


CSSWG Minutes Telecon (2024-08-21) originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.



from CSS-Tricks https://ift.tt/Cr9YvdL

Introducing <shelly-wc>


I created a little library at work to make those “skeleton screens” that I’m not sure anyone likes. […] We named it skellyCSS because… skeletons and CSS, I guess. We still aren’t even really using it very much, but it was fun to do and it was the first node package I made myself (for the most part).

Regardless of whether or not anyone “likes” skeleton screens, they do come up and have their use cases. And they’re probably not something you want to rebuild time and again. Great use for a web component, I’d say! Maybe Ryan can get Uncle Dave to add it to his Awesome Standalones list. 😉

The other reason I’m sharing this link is that Ryan draws attention to the Web Components De-Mystified course that Scott Jehl recently published, something worth checking out of course, but that I needed a reminder for myself.


Introducing <shelly-wc> originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.



from CSS-Tricks https://ift.tt/yMOhIXK

Useful Tools for Creating AVIF Images


AVIF (AV1 Image File Format) is a modern image file format for storing images that offers a much more significant file-size reduction when compared to other formats like JPG, PNG, and WebP. Version 1.0.0 of the AVIF specification was finalized in February 2019 and released to the public by the Alliance for Open Media.

You can save around 50% of your file size when compared to JPG and 20% compared to WebP, while still maintaining image quality.

In this article, you will learn about some browser-based tools and command-line tools for creating AVIF images.

Why use AVIF over JPG, PNG, WebP, and GIF?

  • Lossless and lossy compression
  • JPEG suffers from awful banding
  • WebP is much better, but there’s still noticeable blockiness compared to AVIF
  • Multiple color spaces
  • 8-, 10-, and 12-bit color depth

Caveats

Jake Archibald wrote an article a few years back on this new image format that also helps us identify some disadvantages of compressing images. Normally, you should look out for these two things when compressing to AVIF:

  1. If a user looks at the image in the context of the page, and it strikes them as ugly due to compression, then that level of compression is not acceptable. But, one tiny notch above that boundary is fine.
  2. It’s okay for the image to lose noticeable detail compared to the original unless that detail is significant to the context of the image.

See also: Addy Osmani at Smashing Magazine goes in-depth on using AVIF and WebP.

Data on support for the avif feature across the major browsers from caniuse.com

Browser Solutions

Squoosh

Screenshot of Squoosh.

Squoosh is a popular image compression web app that allows you to convert images in numerous formats to other widely used compressed formats, including AVIF.

Features
  • File-size limit: 4MB
  • Image optimization settings (located on the right side)
  • Download controls – this includes seeing the size of the resulting file and the percentage reduction from the original image
  • Free to use

Cloudinary

Cloudinary’s free image-to-AVIF converter is another image tool that doesn’t require any code. All you need to do is upload your selected images (PNG, JPG, GIF, etc.) and it returns compressed versions of them. Its API has even more features beyond creating AVIF images, like image enhancement and generative fill for images.

I’m pretty sure you’re here because you’re looking for a free and fast converter. So, the browser solution should do.

Features

  • No stated file size limit
  • Free to use

You can find answers to common questions in the Cloudinary AVIF converter FAQ.

Command Line Solutions

avif-cli

avif-cli by lovell lets you take your images (PNG, JPEG, etc.) stored in a folder and convert them to AVIF images at your specified quality.

Here are the requirements and what you need to do:

  • Node.js 12.13.0+

Install the package:

npm install avif

Run the command in your terminal:

npx avif --input="./imgs/*" --output="./output/" --verbose

  • ./imgs/* – represents the location of all your image files
  • ./output/ – represents the location of your output folder
Features
  • Free to use
  • Speed of conversion can be set

You can find out about more commands via the avif-cli GitHub page.

sharp

sharp is another useful tool for converting large images in common formats to smaller, web-friendly AVIF images.

Here are the requirements and what you need to do:

  • Node.js 12.13.0+

Install the package:

npm install sharp

Create a JavaScript file named sharp-example.js and copy this code:

const sharp = require('sharp')

const convertToAVIF = () => {
  // `palette` is a PNG-only option in sharp; `quality` (1-100) is what
  // controls AVIF compression
  sharp('path_to_image')
    .toFormat('avif', { quality: 50 })
    .toFile('path_to_output_image')
}

convertToAVIF()

Where path_to_image represents the path to your image with its name and extension, i.e.:

./imgs/example.jpg

And path_to_output_image represents the path you want your image to be stored with its name and new extension, i.e.:

/sharp-compressed/compressed-example.avif

Run the command in your terminal:

node sharp-example.js

And there! You should have a compressed AVIF file in your output location!

Features
  • Free to use
  • Images can be rotated, blurred, resized, cropped, scaled, and more using sharp

See also: Stanley Ulili’s article on How To Process Images in Node.js With Sharp.

Conclusion

AVIF is a technology that front-end developers should consider for their projects. These tools allow you to convert your existing JPEG and PNG images to AVIF format. But as with adopting any new tool in your workflow, the benefits and downsides will need to be properly evaluated in accordance with your particular use case.

I hope you enjoyed reading this article as much as I enjoyed writing it. Thank you so much for your time and I hope you have a great day ahead!


Useful Tools for Creating AVIF Images originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.



from CSS-Tricks https://ift.tt/bLlHG8x

Basic keyboard shortcut support for focused links


Eric gifting us with his research on all the various things that anchors (not links) do when they are in :focus.

Turns out, there’s a lot!

That’s an understatement! This is an incredible amount of work, even if Eric calls it “dry as a toast sandwich.” Boring ain’t always a bad thing. Let me simply drop in a pen that Dave put together pulling all of Eric’s findings into a table organized to compare the different behaviors between operating systems — and additional tables for each specific platform — because I think it helps frame Eric’s points.

That really is a lot! But why on Earth go through the trouble of documenting all of this?

All of the previously documented behavior needs to be built in JavaScript, since we need to go the synthetic link route. It also means that it is code we need to set aside time and resources to maintain.

That also assumes that is even possible to recreate every expected feature in JavaScript, which is not true. It also leaves out the mental gymnastics required to make a business case for prioritizing engineering efforts to re-make each feature.

There’s the rub! These are the behaviors you’re gonna need to mimic and maintain if veering away from semantic, native web elements. So what Eric is generously providing is perhaps an ultimate argument against adopting frameworks — or rolling some custom system — that purposely abstract the accessible parts of the web, often in favor of DX.
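To make that burden concrete, here’s a toy model of a single slice of the behavior matrix: what a plain left-click on a link should do under various modifier keys. The behavior names are my own shorthand, and real links handle far more than this (middle-click, Enter, context menus, drag, focus):

```javascript
// One slice of native link behavior a synthetic link would have to
// re-implement and maintain: modifier-key handling for a left-click.
function clickAction({ metaKey = false, ctrlKey = false, shiftKey = false, altKey = false, platform = 'mac' } = {}) {
  // "Open in new tab" uses Cmd on macOS and Ctrl elsewhere.
  const newTabModifier = platform === 'mac' ? metaKey : ctrlKey;
  if (altKey) return 'download'; // Alt/Option-click downloads the target
  if (newTabModifier) return 'open-in-new-tab';
  if (shiftKey) return 'open-in-new-window';
  return 'navigate';
}

console.log(clickAction({ platform: 'mac' }));                    // "navigate"
console.log(clickAction({ platform: 'mac', metaKey: true }));     // "open-in-new-tab"
console.log(clickAction({ platform: 'windows', ctrlKey: true })); // "open-in-new-tab"
console.log(clickAction({ shiftKey: true }));                     // "open-in-new-window"
```

And that’s before touching assistive technology, user agent quirks, or anything else in Eric’s tables.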

As with anything, there’s more than meets the eye to all this. Eric’s got an exhaustive list at the end there that calls out all the various limitations of his research. Most of those notes sound to me like there are many, many other platforms, edge cases, user agent variations, assistive technologies, and considerations that could also be taken into account, meaning we could be responsible for a much longer list of behaviors than what’s already there.

And yes, this sweatshirt is incredible. Indeed.


Basic keyboard shortcut support for focused links originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.



from CSS-Tricks https://ift.tt/YLyXexn

Callbacks on Web Components?


A gem from Chris Ferdinandi that details how to use custom events to hook into Web Components. More importantly, Chris dutifully explains why custom events are a better fit than, say, callback functions.

With a typical JavaScript library, you pass callbacks in as part of the instantiate process. […] Because Web Components self-instantiate, though, there’s no easy way to do that.

There’s a way to use callback functions, just not an “easy” way to go about it.

JavaScript provides developers with a way to emit custom events that developers can listen for with the Element.addEventListener() method.

We can use custom events to let developers hook into the code that we write and run more code in response to when things happen. They provide a really flexible way to extend the functionality of a library or code base.

Don’t miss the nugget about canceling custom events!
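That canceling nugget deserves a quick sketch. With the standard EventTarget API, dispatchEvent() returns false when a listener calls preventDefault() on a cancelable event, so a component can bail out of its default action. The component and event names below are made up for illustration:

```javascript
// Minimal model of a component emitting a cancelable custom event.
// (A real Web Component would extend HTMLElement and dispatch a
// CustomEvent with `bubbles` and `detail`; plain EventTarget keeps
// this runnable anywhere, including Node.)
class Toggle extends EventTarget {
  open() {
    const event = new Event('toggle-open', { cancelable: true });
    // dispatchEvent() returns false if preventDefault() was called.
    if (!this.dispatchEvent(event)) return 'canceled';
    return 'opened';
  }
}

const toggle = new Toggle();
toggle.addEventListener('toggle-open', (event) => {
  event.preventDefault(); // a consumer vetoes the default action
});

console.log(toggle.open()); // "canceled"
```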


Callbacks on Web Components? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.



from CSS-Tricks https://ift.tt/whICV5D

The Intersection of Speed and Proximity


You ever find yourself in bumper-to-bumper traffic? I did this morning on the way to work (read: whatever cafe I fancy). There’s a pattern to it, right? Stop, go, stop, go, stop… it’s almost rhythmic and harmonious in the most annoying of ways. Everyone in line follows the dance, led by some car upfront, each subsequent vehicle pressed right up to the rear of the next for the luxury of moving a few feet further before the next step.

A closeup of three lanes of tight traffic from behind.
Photo by Jakob Jin

Have you tried breaking the pattern? Instead of playing shadow to the car in front of me this morning, I allowed space between us. I’d gradually raise my right foot off the brake pedal and depress the gas pedal only once my neighboring car gained a little momentum. At that point, my car begins to crawl. And continue crawling. I rarely had to tap the brakes at all once I got going. In effect, I had sacrificed proximity for a smoother ride. I may not be traveling the “fastest” in line, but I was certainly gliding along with a lot less friction.

I find that many things in life are like that. Getting closest to anything comes with a cost, be it financial or consequence. Want the VIP ticket to a concert you’re stoked as heck about? Pony up some extra cash. Want the full story rather than a headline? Just enter your email address. Want up-to-the-second information in your stock ticker? Hand over some account information. Want access to all of today’s televised baseball games? Pick up an ESPN+ subscription.

Proximity and speed are the commodities, the products so to speak. Closer and faster are what’s being sold.

You may have run into the “law of diminishing returns” in some intro-level economics class you took in high school or college. It’s the basis for a large swath of economic theory but in essence, is the “too much of a good thing” principle. It’s what AMPM commercials have been preaching this whole time.

I’m embedding the clip instead of linking it up because it clearly illustrates the “problem” of having too many of what you want (or need). Dude resorted to asking two teens to reach into his front pocket for his wallet because his hands were full, creeper. But buy on, the commercial says, because the implication is that there’s never too much of a good thing, even if it ends in a not-so-great situation chockfull of friction.

The one and only thing I took away from physics in college — besides gravitational acceleration being 9.8 m/s² — is that there’s no way to have bigger, cheaper, and faster at the same time. You can take two, but all three cannot play together. For example, you can have a spaceship that’s faster and cheaper, but chances are that it ain’t gonna be bigger than a typical spaceship. If you were to aim for bigger, it’d be a lot less cheap, not only for the extra size but also to make the dang heavy thing go as fast as possible. It’s a good rule in life. I don’t have proof of it, but I’d wager Mick Jagger lives by it, or at least did at one time.

Speed. Proximity. Faster and slower. Closer and further. I’m not going to draw any parallels to web development, UX design, or any other front-end thing. They’re already there.


The Intersection of Speed and Proximity originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.



from CSS-Tricks https://ift.tt/oQHBawf