Chris’ Corner: Monaspace


I’m a sucker for a new coding font. I generally don’t think what coding font you use affects productivity in any significant way (unless it’s distracting) so farting around and switching it up is just a fun little thing to do. Like pushing around the furniture in your room or taking a different bike route to work.

So count me in for playing with GitHub’s new set of monospace coding fonts: Monaspace.

(Before I get too far without mentioning it, you can use them on CodePen just by visiting your editor settings and picking one.)

The fact that there are five differently stylized versions alone is pretty noteworthy, but there are a variety of other features that are pretty darn cool. Texture Healing feels like a made-up term, but what it does is pretty easy to understand and clearly useful.

It basically makes it so wide letters don’t have to scrunch themselves and narrow letters don’t have to exaggerate themselves. They say:

Texture healing works by finding each pair of adjacent characters where one wants more space, and one has too much. Narrow characters are swapped for ones that cede some of their whitespace, and wider characters are swapped for ones that extend to the very edge of their box.

Which is done with the OpenType feature “contextual alternates”. It’s a little bit like a ligature, where certain character combinations swap out for a new glyph that works best for the pair, only, ya know, slightly different.
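If you want to poke at it on the web, contextual alternates are exposed through standard CSS. Here’s a minimal sketch, assuming you’ve loaded one of the Monaspace families yourself under this name and that texture healing rides on the usual “calt” feature (that’s my reading of the docs, so double-check):

code {
  font-family: 'Monaspace Neon', monospace;
  /* 'calt' is the OpenType tag for contextual alternates; as I understand it, this is what toggles texture healing */
  font-feature-settings: 'calt' 1;
}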

Monaspace has ligatures too. Ligatures are pretty controversial in coding fonts. For instance, some people really hate that != might turn into a single ≠ glyph, as that’s a deviation from the literal syntax of the language you may be writing. They aren’t my favorite, but it’s not a firmly held opinion, and it’s easy enough to just not use them. (They aren’t enabled on CodePen, as it seems like it might be confusing for some beginners.)

They come as a variable font, which I love because variable fonts rule. And it’s not a phoned-in variable font with one measly axis; it’s weight, and width, and slant, which is wonderful.

The download comes with both variable and non-variable versions. If you don’t plan to use the variable-ness, the variable fonts are an order of magnitude larger, so probably use the non-variable versions. If it’s not going on the web, it probably doesn’t matter. Also, curious that they only shipped .woff and not .woff2?

My very favorite idea they have around this “superfamily” is the idea of mixing and matching them for different reasons in the same UI.

The idea of having an italic monospace is starting to be more and more common. Gotta shout out the OG Operator for breaking that ground. Setting comments in italics is still just a super cool idea to me. Now we could use Radon for that, an entirely different typeface designed for it. In the image above, they suggest JSDoc-style comments using Xenon and GitHub Copilot using Krypton. I love it. Ship it. (I don’t think it’s possible in VS Code yet, but considering Microsoft makes both, you’d think it’s coming.)

To make that possible, essentially, you need syntax highlighting tools to provide tokens/classes that make it easy to identify different aspects of code. It’s certainly possible, and probably not even that hard, but it could be a tricky thing to roll out with so many third-party themes out there.
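On the web, at least, you can get a taste of it today anywhere the highlighter exposes a comment class. A rough sketch, assuming a Prism-style .token.comment class and that you’ve loaded Radon yourself:

/* Assumes a Prism-style highlighter that wraps comments in .token.comment */
.token.comment {
  font-family: 'Monaspace Radon', monospace;
  font-style: italic;
}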

But anyway, how great is this?


If you’re into playing with different coding fonts, well, make sure to explore the fact that we have over 20 of them on CodePen to switch between. But of course, there are far more than that out in the wild. A great read about the landscape of “playful and fun” coding fonts is Doug Wilson’s Coding with Character. Doug dug (ha) up this great example of IBM “SELECTRIC” TYPE SAMPLES (literally different typewriter fonts you could swap between by replacing a metal ball in the typewriter):

So cool! I love that this new wave of coding fonts is kind of a callback to what was happening here, whether everyone recognizes it or not. The first one Doug mentions is Operator, which I used for years and years after it came out and still love. The second is Comic Code:

My first thought: This has to be a joke, right?! Comic Sans has a bad reputation and was never meant to be used for coding—but what if…? That is what crazy mastermind Toshi Omagari seemed to ask.

He says, “Comic Code is a monospaced adaptation of the most over-hated typeface.” I haven’t asked, but I feel his thought process may have been something like this GIF.

Believe it or not, I think it actually works and certainly brings a smile—or at least a smirk—to your face.

Totally agree! It works!


While I was in our code base putting in the Monaspace fonts, of course I couldn’t resist doing general code cleanup. The Monaspace fonts only ship in .woff, so the @font-face in CSS is pretty much as simple as:

@font-face {
  font-family: 'Font Name';
  src: url('./fonts/font.woff') format('woff');
}

You can get away with shipping only woff2 these days, so for the fonts we have in that format, that’s exactly what I’m doing. The simplicity just feels great, since this used to be such a complex bit of code with loads of formats.

Then I recently learned from an Ollie Williams blog post that you don’t have to put the format as a string anymore, keywords work, so:

@font-face {
  font-family: 'Font Name';
  src: url('./fonts/font.woff2') format(woff2);
}

Gotta appreciate those little things 😍. Check out Ollie’s post for other cleanups with @font-face, including how you specify variable fonts, color fonts, and other font “tech”.
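For example, one of those cleanups is declaring variable fonts and font tech right in the src line. My understanding is it looks roughly like this, though check Ollie’s post for the details:

@font-face {
  font-family: 'Font Name';
  src: url('./fonts/font.woff2') format(woff2) tech(variations);
  font-weight: 100 900; /* the weight range the variable font supports */
}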

WooCommerce Updated to Address Data Tracking Issue

On May 28, 2024, Woo's engineering team discovered an issue within WooCommerce (versions 7.8 and above) that caused the unintentional collection of specific visitor data by Automattic, Woo's parent company.  This issue only pertained to WooCommerce stores that had data tracking enabled and did not have their store connected to Jetpack.

Navigating the PAM Landscape: Overcoming Deployment Barriers for Modern Security


Privileged access management (PAM) is critical for securing sensitive systems and data, especially with remote work's expanded attack surface. However, recent research by Keeper Security reveals significant barriers still inhibit broad PAM adoption. Cost and complexity top the list of challenges.

A survey of 400 IT and security leaders found that 58% have not deployed PAM because it was too expensive, and 56% attempted a PAM deployment but failed to fully implement it due to excessive complexity. This indicates an appetite for robust PAM, but solutions remain out of reach for many.

Limited Conversations With Distributed Systems


By the way, ChatGPT suggested the title: The Art of Balancing Control and Accessibility

Background

Houston Airport had a really big problem: passengers complained about the time it took for luggage to arrive at the terminal building after the airplane had landed. The airport invested millions to solve this pain point. They improved the process, hired more people, and introduced new technology, and they eventually succeeded in reducing the wait time to 7 minutes. However, passengers still complained. The airport realized they had reached a point where optimizing the process was no longer paying off, so they did something different: they reframed the problem.

By reframing the problem, they discovered that the real issue was not the time it took to get the luggage to the terminal building; it was the time the passengers spent waiting for the luggage. The airport decided to park the airplanes further away from the terminal building. It now took passengers longer to walk to baggage claim, which cut down the time they spent waiting for their luggage, and voila! Complaints dropped drastically.

Teach Your LLM to Always Answer With Facts Not Fiction


Large Language Models are advanced AI systems that can answer a wide range of questions. Although they provide informative responses on topics they know, they are not always accurate on unfamiliar topics. This phenomenon is known as hallucination.

What Is Hallucination?

Before we look at an example of an LLM hallucination, let's consider a definition of the term "hallucination" as described by Wikipedia:

Remediating Incidents With the GitGuardian API [Cheat Sheet Included]

When a hardcoded secret is detected in your source code, you can rely on GitGuardian to help you prioritize, investigate, and remediate the incident. When most people think of the GitGuardian platform, they picture the dashboard.
The GitGuardian Dashboard

From this view, you can quickly see high-level incident information that can help you triage your incidents, assign them to workspace members, and begin the process of fixing the issue. The team has put a lot of thought and effort into making this a very user-friendly interface that customers can quickly learn and leverage when dealing with secrets sprawl.

The GitGuardian API

Some teams might prefer to leverage the power of the GitGuardian platform without using the dashboard directly. This is entirely possible thanks to the powerful GitGuardian API, which is available to all customers. With the API, you can interact with incidents, teams, workspace members, and audit logs, or even implement your own secrets scanning.
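For example, listing open secret incidents is a single authenticated GET request. Here's a minimal sketch using Node's built-in fetch; the endpoint, query parameter, and header reflect my reading of GitGuardian's public API docs, and GITGUARDIAN_API_KEY is just a placeholder for your own token, so verify both against the current documentation:

// Sketch only: list open ("triggered") secret incidents via the GitGuardian API.
const listIncidents = async () => {
  const response = await fetch('https://api.gitguardian.com/v1/incidents/secrets?status=TRIGGERED', {
    headers: { Authorization: `Token ${process.env.GITGUARDIAN_API_KEY}` },
  });
  return response.json(); // incident data (check the docs for the exact response shape)
};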

Video is not the main content of the page


Hello everyone, I am not able to fix this issue in Search Console: "Video is not the main content of the page" — but actually it is!

When you enter my product page, the first thing that appears is the video itself (hosted on the Imgur platform)!

I even added additional Schema video tags, but that seems to be useless.

Here is the code I am using:

P.S.: The $image4 variable can sometimes contain a video link or an image link.

<?php if (!empty($image4)): ?>
    <div class="swiper-slide">
        <div class="dz-media" itemscope itemtype="http://schema.org/VideoObject">
            <?php
            if (pathinfo($image4, PATHINFO_EXTENSION) === 'mp4') {
                // Display a video if the link points to an MP4 file
                echo '<meta itemprop="thumbnailUrl" content="' . $image1 . '">';
                echo '<meta itemprop="contentUrl" content="' . $image4 . '">';
                echo '<video controls width="100%" height="auto" poster="' . $image1 . '" itemprop="video">';
                echo '<source src="' . $image4 . '" type="video/mp4">';
                echo 'Your browser does not support the video tag.';
                echo '</video>';
            } elseif (in_array(pathinfo($image4, PATHINFO_EXTENSION), array('png', 'jpg', 'jpeg'))) {
                // Display an image if the link points to a PNG, JPG, or JPEG file
                echo '<img src="' . $image4 . '" alt="' . $product_tags . '">';
            }
            ?>
            <meta itemprop="name" content="<?php echo htmlspecialchars($product_name); ?>">
            <meta itemprop="description" content="<?php echo htmlspecialchars($product_description); ?>">
            <meta itemprop="uploadDate" content="<?php echo date('c', strtotime($upload_date)); ?>">
        </div>
    </div>
<?php endif; ?>

Please, if anyone knows how to fix this problem, I'd appreciate the help.

here is a link to a product page on my website
https://www.kupisi.mk/product/-/---/48

@Dani please remove the backlinks if possible, so my site doesn't get indexed :D

Does Google’s Disavow-Tool still work – or does it hurt?


Back in the day, Google's Disavow Tool could be useful for disavowing spammy backlinks that competitors created to hurt a website's rankings.

Negative SEO and spam-link attacks were (and are) real and can cause serious damage to businesses relying on Google's Organic Search as a traffic source.

Of course, spammers and blackhat SEOs also made heavy use of this tool to simply "recover" their sites from penalties they had caused themselves by buying toxic backlinks.

My theory is that the Disavow Tool was/is used by blackhats way more frequently than by whitehat SEOs, and that Google therefore might have slightly changed how it functions.

For example, many backlink sellers advertise like this to calm their potential clients: "In case of consequences, there's Google's Disavow Tool, so no worries."

I directly confronted John Mueller (Search Advocate at Google) with this - you can see his response in the attached image. He obviously dodged my question, but sometimes no answer can be an answer too, right?

There's a very interesting case study on Reddit about this, and I have a lot more info, which I will try to put into an article on DaniWeb this weekend.

I have found a lot of answers for many questions, but one still remains...

Does the Disavow Tool still help to recover a website's lost rankings caused by spammy links?

Or does it nowadays hurt the rankings of a site even more, because Google leaves it as a kind of "honeypot" for blackhatters who try to recover from the penalties they caused themselves?

If anybody has something to share about this, it would be very much appreciated.

Thanks for your attention,
Chris

Scaling Success: Key Insights And Practical Takeaways


Building successful web products at scale is a multifaceted challenge that demands a combination of technical expertise, strategic decision-making, and a growth-oriented mindset. In Success at Scale, I dive into case studies from some of the web’s most renowned products, uncovering the strategies and philosophies that propelled them to the forefront of their industries.

Here you will find some of the insights I’ve gleaned from these success stories, part of an ongoing effort to build a roadmap for teams striving to achieve scalable success in the ever-evolving digital landscape.

Cultivating A Mindset For Scaling Success

The foundation of scaling success lies in fostering the right mindset within your team. The case studies in Success at Scale highlight several critical mindsets that permeate the culture of successful organizations.

User-Centricity

Successful teams prioritize the user experience above all else.

They invest in understanding their users’ needs, behaviors, and pain points and relentlessly strive to deliver value. Instagram’s performance optimization journey exemplifies this mindset, focusing on improving perceived speed and reducing user frustration, leading to significant gains in engagement and retention.

By placing the user at the center of every decision, Instagram was able to identify and prioritize the most impactful optimizations, such as preloading critical resources and leveraging adaptive loading strategies. This user-centric approach allowed them to deliver a seamless and delightful experience to their vast user base, even as their platform grew in complexity.

Data-Driven Decision Making

Scaling success relies on data, not assumptions.

Teams must embrace a data-driven approach, leveraging metrics and analytics to guide their decisions and measure impact. Shopify’s UI performance improvements showcase the power of data-driven optimization, using detailed profiling and user data to prioritize efforts and drive meaningful results.

By analyzing user interactions, identifying performance bottlenecks, and continuously monitoring key metrics, Shopify was able to make informed decisions that directly improved the user experience. This data-driven mindset allowed them to allocate resources effectively, focusing on the areas that yielded the greatest impact on performance and user satisfaction.

Continuous Improvement

Scaling is an ongoing process, not a one-time achievement.

Successful teams foster a culture of continuous improvement, constantly seeking opportunities to optimize and refine their products. Smashing Magazine’s case study on enhancing Core Web Vitals demonstrates the impact of iterative enhancements, leading to significant performance gains and improved user satisfaction.

By regularly assessing their performance metrics, identifying areas for improvement, and implementing incremental optimizations, Smashing Magazine was able to continuously elevate the user experience. This mindset of continuous improvement ensures that the product remains fast, reliable, and responsive to user needs, even as it scales in complexity and user base.

Collaboration And Inclusivity

Silos hinder scalability.

High-performing teams promote collaboration and inclusivity, ensuring that diverse perspectives are valued and leveraged. The Understood’s accessibility journey highlights the power of cross-functional collaboration, with designers, developers, and accessibility experts working together to create inclusive experiences for all users.

By fostering open communication, knowledge sharing, and a shared commitment to accessibility, The Understood was able to embed inclusive design practices throughout its development process. This collaborative and inclusive approach not only resulted in a more accessible product but also cultivated a culture of empathy and user-centricity that permeated all aspects of their work.

Making Strategic Decisions for Scalability

Beyond cultivating the right mindset, scaling success requires making strategic decisions that lay the foundation for sustainable growth.

Technology Choices

Selecting the right technologies and frameworks can significantly impact scalability. Factors like performance, maintainability, and developer experience should be carefully considered. Notion’s migration to Next.js exemplifies the importance of choosing a technology stack that aligns with long-term scalability goals.

By adopting Next.js, Notion was able to leverage its performance optimizations, such as server-side rendering and efficient code splitting, to deliver fast and responsive pages. Additionally, the developer-friendly ecosystem of Next.js and its strong community support enabled Notion’s team to focus on building features and optimizing the user experience rather than grappling with low-level infrastructure concerns. This strategic technology choice laid the foundation for Notion’s scalable and maintainable architecture.

Ship Only The Code A User Needs, When They Need It

This best practice is so important when we want to ensure that pages load fast without over-eagerly delivering JavaScript a user may not need at that time. For example, Instagram made a concerted effort to improve the web performance of instagram.com, resulting in a nearly 50% cumulative improvement in feed page load time. A key area of focus has been shipping less JavaScript code to users, particularly on the critical rendering path.

The Instagram team found that the uncompressed size of JavaScript is more important for performance than the compressed size, as larger uncompressed bundles take more time to parse and execute on the client, especially on mobile devices. Two optimizations they implemented to reduce JS parse/execute time were inline requires (only executing code when it’s first used vs. eagerly on initial load) and serving ES2017+ code to modern browsers to avoid transpilation overhead. Inline requires improved Time-to-Interactive metrics by 12%, and the ES2017+ bundle was 5.7% smaller and 3% faster than the transpiled version.

While good progress has been made, the Instagram team acknowledges there are still many opportunities for further optimization. Potential areas to explore could include the following:

  • Improved code-splitting, moving more logic off the critical path,
  • Optimizing scrolling performance,
  • Adapting to varying network conditions,
  • Modularizing their Redux state management.

Continued efforts will be needed to keep instagram.com performing well as new features are added and the product grows in complexity.

Accessibility Integration

Accessibility should be an integral part of the product development process, not an afterthought.

Wix’s comprehensive approach to accessibility, encompassing keyboard navigation, screen reader support, and infrastructure for future development, showcases the importance of building inclusivity into the product’s core.

By considering accessibility requirements from the initial design stages and involving accessibility experts throughout the development process, Wix was able to create a platform that empowered its users to build accessible websites. This holistic approach to accessibility not only benefited end-users but also positioned Wix as a leader in inclusive web design, attracting a wider user base and fostering a culture of empathy and inclusivity within the organization.

Developer Experience Investment

Investing in a positive developer experience is essential for attracting and retaining talent, fostering productivity, and accelerating development.

Apideck’s case study in the book highlights the impact of a great developer experience on community building and product velocity.

By providing well-documented APIs, intuitive SDKs, and comprehensive developer resources, Apideck was able to cultivate a thriving developer community. This investment in developer experience not only made it easier for developers to integrate with Apideck’s platform but also fostered a sense of collaboration and knowledge sharing within the community. As a result, Apideck was able to accelerate product development, leverage community contributions, and continuously improve its offering based on developer feedback.

Leveraging Performance Optimization Techniques

Achieving optimal performance is a critical aspect of scaling success. The case studies in Success at Scale showcase various performance optimization techniques that have proven effective.

Progressive Enhancement and Graceful Degradation

Building resilient web experiences that perform well across a range of devices and network conditions requires a progressive enhancement approach. Pinafore’s case study in Success at Scale highlights the benefits of ensuring core functionality remains accessible even in low-bandwidth or JavaScript-constrained environments.

By leveraging server-side rendering and delivering a usable experience even when JavaScript fails to load, Pinafore demonstrates the importance of progressive enhancement. This approach not only improves performance and resilience but also ensures that the application remains accessible to a wider range of users, including those with older devices or limited connectivity. By gracefully degrading functionality in constrained environments, Pinafore provides a reliable and inclusive experience for all users.

Adaptive Loading Strategies

The book’s case study on Tinder highlights the power of sophisticated adaptive loading strategies. By dynamically adjusting the content and resources delivered based on the user’s device capabilities and network conditions, Tinder ensures a seamless experience across a wide range of devices and connectivity scenarios. Tinder’s adaptive loading approach involves techniques like dynamic code splitting, conditional resource loading, and real-time network quality detection. This allows the application to optimize the delivery of critical resources, prioritize essential content, and minimize the impact of poor network conditions on the user experience.

By adapting to the user’s context, Tinder delivers a fast and responsive experience, even in challenging environments.

Efficient Resource Management

Effective management of resources, such as images and third-party scripts, can significantly impact performance. eBay’s journey showcases the importance of optimizing image delivery, leveraging techniques like lazy loading and responsive images to reduce page weight and improve load times.

By implementing lazy loading, eBay ensures that images are only loaded when they are likely to be viewed by the user, reducing initial page load time and conserving bandwidth. Additionally, by serving appropriately sized images based on the user’s device and screen size, eBay minimizes the transfer of unnecessary data and improves the overall loading performance. These resource management optimizations, combined with other techniques like caching and CDN utilization, enable eBay to deliver a fast and efficient experience to its global user base.

Continuous Performance Monitoring

Regularly monitoring and analyzing performance metrics is crucial for identifying bottlenecks and opportunities for optimization. The case study on Yahoo! Japan News demonstrates the impact of continuous performance monitoring, using tools like Lighthouse and real user monitoring to identify and address performance issues proactively.

By establishing a performance monitoring infrastructure, Yahoo! Japan News gains visibility into the real-world performance experienced by their users. This data-driven approach allows them to identify performance regression, pinpoint specific areas for improvement, and measure the impact of their optimizations. Continuous monitoring also enables Yahoo! Japan News to set performance baselines, track progress over time, and ensure that performance remains a top priority as the application evolves.

Embracing Accessibility and Inclusive Design

Creating inclusive web experiences that cater to diverse user needs is not only an ethical imperative but also a critical factor in scaling success. The case studies in Success at Scale emphasize the importance of accessibility and inclusive design.

Comprehensive Accessibility Testing

Ensuring accessibility requires a combination of automated testing tools and manual evaluation. LinkedIn’s approach to automated accessibility testing demonstrates the value of integrating accessibility checks into the development workflow, catching potential issues early, and reducing the reliance on manual testing alone.

By leveraging tools like Deque’s axe and integrating accessibility tests into their continuous integration pipeline, LinkedIn can identify and address accessibility issues before they reach production. This proactive approach to accessibility testing not only improves the overall accessibility of the platform but also reduces the cost and effort associated with retroactive fixes. However, LinkedIn also recognizes the importance of manual testing and user feedback in uncovering complex accessibility issues that automated tools may miss. By combining automated checks with manual evaluation, LinkedIn ensures a comprehensive approach to accessibility testing.

Inclusive Design Practices

Designing with accessibility in mind from the outset leads to more inclusive and usable products. Success at Scale’s case study on Intercom about creating an accessible messenger highlights the importance of considering diverse user needs, such as keyboard navigation and screen reader compatibility, throughout the design process.

By embracing inclusive design principles, Intercom ensures that their messenger is usable by a wide range of users, including those with visual, motor, or cognitive impairments. This involves considering factors such as color contrast, font legibility, focus management, and clear labeling of interactive elements. By designing with empathy and understanding the diverse needs of their users, Intercom creates a messenger experience that is intuitive, accessible, and inclusive. This approach not only benefits users with disabilities but also leads to a more user-friendly and resilient product overall.

User Research And Feedback

Engaging with users with disabilities and incorporating their feedback is essential for creating truly inclusive experiences. The Understood’s journey emphasizes the value of user research and collaboration with accessibility experts to identify and address accessibility barriers effectively.

By conducting usability studies with users who have diverse abilities and working closely with accessibility consultants, The Understood gains invaluable insights into the real-world challenges faced by their users. This user-centered approach allows them to identify pain points, gather feedback on proposed solutions, and iteratively improve the accessibility of their platform.

By involving users with disabilities throughout the design and development process, The Understood ensures that their products not only meet accessibility standards but also provide a meaningful and inclusive experience for all users.

Accessibility As A Shared Responsibility

Promoting accessibility as a shared responsibility across the organization fosters a culture of inclusivity. Shopify’s case study underscores the importance of educating and empowering teams to prioritize accessibility, recognizing it as a fundamental aspect of the user experience rather than a mere technical checkbox.

By providing accessibility training, guidelines, and resources to designers, developers, and content creators, Shopify ensures that accessibility is considered at every stage of the product development lifecycle. This shared responsibility approach helps to build accessibility into the core of Shopify’s products and fosters a culture of inclusivity and empathy. By making accessibility everyone’s responsibility, Shopify not only improves the usability of their platform but also sets an example for the wider industry on the importance of inclusive design.

Fostering A Culture of Collaboration And Knowledge Sharing

Scaling success requires a culture that promotes collaboration, knowledge sharing, and continuous learning. The case studies in Success at Scale highlight the impact of effective collaboration and knowledge management practices.

Cross-Functional Collaboration

Breaking down silos and fostering cross-functional collaboration accelerates problem-solving and innovation. Airbnb’s design system journey showcases the power of collaboration between design and engineering teams, leading to a cohesive and scalable design language across web and mobile platforms.

By establishing a shared language and a set of reusable components, Airbnb’s design system enables designers and developers to work together more efficiently. Regular collaboration sessions, such as design critiques and code reviews, help to align both teams and ensure that the design system evolves in a way that meets the needs of all stakeholders. This cross-functional approach not only improves the consistency and quality of the user experience but also accelerates the development process by reducing duplication of effort and promoting code reuse.

Knowledge Sharing And Documentation

Capturing and sharing knowledge across the organization is crucial for maintaining consistency and enabling the efficient onboarding of new team members. Stripe’s investment in internal frameworks and documentation exemplifies the value of creating a shared understanding and facilitating knowledge transfer.

By maintaining comprehensive documentation, code examples, and best practices, Stripe ensures that developers can quickly grasp the intricacies of their internal tools and frameworks. This documentation-driven culture not only reduces the learning curve for new hires but also promotes consistency and adherence to established patterns and practices. Regular knowledge-sharing sessions, such as tech talks and lunch-and-learns, further reinforce this culture of learning and collaboration, enabling team members to learn from each other’s experiences and stay up-to-date with the latest developments.

Communities Of Practice

Establishing communities of practice around specific domains, such as accessibility or performance, promotes knowledge sharing and continuous improvement. Shopify’s accessibility guild demonstrates the impact of creating a dedicated space for experts and advocates to collaborate, share best practices, and drive accessibility initiatives forward.

By bringing together individuals passionate about accessibility from across the organization, Shopify’s accessibility guild fosters a sense of community and collective ownership. Regular meetings, workshops, and hackathons provide opportunities for members to share their knowledge, discuss challenges, and collaborate on solutions. This community-driven approach not only accelerates the adoption of accessibility best practices but also helps to build a culture of inclusivity and empathy throughout the organization.

Leveraging Open Source And External Expertise

Collaborating with the wider developer community and leveraging open-source solutions can accelerate development and provide valuable insights. Pinafore’s journey highlights the benefits of engaging with accessibility experts and incorporating their feedback to create a more inclusive and accessible web experience.

By actively seeking input from the accessibility community and leveraging open-source accessibility tools and libraries, Pinafore was able to identify and address accessibility issues more effectively. This collaborative approach not only improved the accessibility of the application but also contributed back to the wider community by sharing their learnings and experiences. By embracing open-source collaboration and learning from external experts, teams can accelerate their own accessibility efforts and contribute to the collective knowledge of the industry.

The Path To Sustainable Success

Achieving scalable success in the web development landscape requires a multifaceted approach that encompasses the right mindset, strategic decision-making, and continuous learning. The Success at Scale book provides a comprehensive exploration of these elements, offering deep insights and practical guidance for teams at all stages of their scaling journey.

By cultivating a user-centric, data-driven, and inclusive mindset, teams can prioritize the needs of their users and make informed decisions that drive meaningful results. Adopting a culture of continuous improvement and collaboration ensures that teams are always striving to optimize and refine their products, leveraging the collective knowledge and expertise of their members.

Making strategic technology choices, such as selecting performance-oriented frameworks and investing in developer experience, lays the foundation for scalable and maintainable architectures. Implementing performance optimization techniques, such as adaptive loading, efficient resource management, and continuous monitoring, helps teams deliver fast and responsive experiences to their users.

Embracing accessibility and inclusive design practices not only ensures that products are usable by a wide range of users but also fosters a culture of empathy and user-centricity. By incorporating accessibility testing, inclusive design principles, and user feedback into the development process, teams can create products that are both technically sound and meaningfully inclusive.

Fostering a culture of collaboration, knowledge sharing, and continuous learning is essential for scaling success. By breaking down silos, promoting cross-functional collaboration, and investing in documentation and communities of practice, teams can accelerate problem-solving, drive innovation, and build a shared understanding of their products and practices.

The case studies featured in Success at Scale serve as powerful examples of how these principles and strategies can be applied in real-world contexts. By learning from the successes and challenges of industry leaders, teams can gain valuable insights and inspiration for their own scaling journeys.

As you embark on your path to scaling success, remember that it is an ongoing process of iteration, learning, and adaptation. Embrace the mindsets and strategies outlined in this article, dive deeper into the learnings from the Success at Scale book, and continually refine your approach based on the unique needs of your users and the evolving landscape of web development.

Conclusion

Scaling successful web products requires a holistic approach that combines technical excellence, strategic decision-making, and a growth-oriented mindset. By learning from the experiences of industry leaders, as showcased in the Success at Scale book, teams can gain valuable insights and practical guidance on their journey towards sustainable success.

Cultivating a user-centric, data-driven, and inclusive mindset lays the foundation for scalability. By prioritizing the needs of users, making informed decisions based on data, and fostering a culture of continuous improvement and collaboration, teams can create products that deliver meaningful value and drive long-term growth.

Making strategic decisions around technology choices, performance optimization, accessibility integration, and developer experience investment sets the stage for scalable and maintainable architectures. By leveraging proven optimization techniques, embracing inclusive design practices, and investing in the tools and processes that empower developers, teams can build products that are fast and resilient.

Through ongoing collaboration, knowledge sharing, and a commitment to learning, teams can navigate the complexities of scaling success and create products that make a lasting impact in the digital landscape.

We’re Trying Out Something New

In an effort to conserve resources here at Smashing, we’re trying something new with Success at Scale. The printed book is 304 pages, and we make an expanded PDF version available to everyone who purchases a print book. This accomplishes a few good things:

  • We will use less paper and materials because we are making a smaller printed book;
  • We’ll use fewer resources in general to print, ship, and store the books, leading to a smaller carbon footprint; and
  • Keeping the book at a more manageable size means we can continue to offer free shipping on all Smashing orders!

Smashing Books have always been printed with materials from FSC Certified forests. We are committed to finding new ways to conserve resources while still bringing you the best possible reading experience.

Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Heather and Steven are two of these people. Have you checked out their books already?

Understanding Privacy

Everything you need to know to put your users first and make a better web.

Get Print + eBook

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

Get Print + eBook

Interface Design Checklists

100 practical cards for common interface design challenges.

Get Print + eBook

The Potential of Vision AI for Industrial Deployments


Vision AI not only replicates human vision but can also go beyond it, offering highly accurate accounts of environmental features that are not readily visible to the human eye. However, while edge AI has been around for a while, enhancing edge capabilities with computer vision is still a novelty. Those who have ventured into improving production processes, safety, and quality with the help of vision AI, however, are already reaping the benefits.

When equipped with vision AI, industrial enterprises can take full control of their assets on the edge and build a truly collaborative foundation for a multitude of use cases. This will allow them to tackle the challenges of a dynamic setting that includes many unknowns.

Top YouTube Channels for Blender Tutorials: From Beginners to Advanced


Blender is a powerful and free 3D creation suite that has gained immense popularity among artists, designers, and animators. To help you navigate the vast array of tutorials available, we’ve compiled a list of the top YouTube channels that offer the best Blender tutorials for all skill levels.

Blender Guru


Blender Guru, hosted by Andrew Price, is a must-follow channel for anyone serious about mastering Blender. Known for his engaging and in-depth tutorials, Andrew covers everything from basic modeling to advanced rendering techniques. His “Blender Beginner Tutorial Series” is particularly popular and a great starting point for newcomers.

Source

CG Cookie


CG Cookie is a well-established name in the Blender community, offering high-quality tutorials on a wide range of topics. Their YouTube channel is an extension of their educational platform, featuring tutorials that cater to both beginners and advanced users. CG Cookie’s tutorials are known for their clarity and depth.

Source

Blender Official


Blender Official is the primary channel managed by the Blender Foundation itself. It features tutorials, news, and updates straight from the developers, making it an essential resource for staying up-to-date with the latest Blender features and enhancements.

Source

CG Geek


If you’re looking for detailed and fun tutorials, CG Geek is the channel for you. Run by Steve Lund, this channel offers a mix of beginner and advanced tutorials, including impressive 3D modeling and animation projects. Steve’s tutorials are well-explained and perfect for those who want to take their skills to the next level.

Source

Blender Studio


Blender Studio, also managed by the Blender Foundation, offers behind-the-scenes content, project breakdowns, and advanced tutorials used in professional productions. This channel is perfect for users looking to see how Blender is used in high-end projects.

Source

Grant Abbitt


Grant Abbitt’s channel is a treasure trove of tutorials that range from beginner to advanced levels. Grant has a friendly teaching style and covers a wide range of topics, including character modeling, sculpting, and animation. His step-by-step guides are comprehensive and easy to follow.

Source

CrossMind Studio


CrossMind Studio provides high-quality tutorials focused on various aspects of Blender, including modeling, texturing, and animation. The tutorials are detailed and well-structured, catering to both beginners and experienced users.

Source

Ducky 3D


Ducky 3D focuses on creating stunning visual effects and abstract art with Blender. This channel is ideal for those interested in the artistic side of Blender, offering tutorials that are both creative and technically detailed. Ducky 3D’s tutorials are easy to follow, making complex effects achievable for users at all levels.

Source

Polyfjord


Polyfjord offers creative and visually stunning tutorials on Blender. The channel focuses on unique and artistic projects, making it a great resource for those looking to expand their creative skills with Blender.

Source

CG Boost


CG Boost is known for its clear and well-produced tutorials on Blender. The channel covers a wide range of topics, including character creation, environment modeling, and rendering. CG Boost also runs regular art challenges that inspire and motivate its community.

Source

Josh Gambrell


Josh Gambrell specializes in hard surface modeling tutorials. His channel is perfect for users interested in creating detailed mechanical and industrial models. Josh’s tutorials are thorough and easy to follow, making complex techniques accessible.

Source

SouthernShotty


SouthernShotty provides engaging and entertaining tutorials on Blender, often focusing on creating stylized 3D art. The channel’s relaxed and humorous approach makes learning Blender fun and accessible, especially for beginners.

Source

Ryan King Art


Ryan King Art offers a mix of beginner and intermediate tutorials on Blender. The channel covers a variety of topics, including modeling, sculpting, and texturing. Ryan’s friendly teaching style makes learning Blender enjoyable and straightforward.

Source

CGMatter


CGMatter focuses on quick, efficient tutorials that cover a wide range of Blender features and techniques. The channel is ideal for users who want to learn new tricks and tips in a short amount of time.

Source

Derek Elliott


Derek Elliott’s channel offers high-quality tutorials with a focus on product visualization and animation. Derek’s tutorials are professional and detailed, making them suitable for users looking to create polished, commercial-quality renders.

Source

Blender Secrets


Source

BlenderDiplom


Source

Ian Letarte


Source

Chocofur


Source

Sardi Pax


Source

The post Top YouTube Channels for Blender Tutorials: From Beginners to Advanced appeared first on CSS Author.

How To Create A Weekly Google Analytics Report That Posts To Slack


Google Analytics is great, but not everyone in your organization will be granted access. In many places I’ve worked, it was on a kind of “need to know” basis.

In this article, I’m gonna flip that on its head and show you how I wrote a GitHub Action that queries Google Analytics, generates a top ten list of the most frequently viewed pages on my site from the last seven days, and compares them to the previous seven days to tell me which pages have increased in views, which have decreased, which have stayed the same, and which are new to the list.

The report is then nicely formatted with icon indicators and posted to a public Slack channel every Friday at 10 AM.

Not only would this surfaced data be useful for folks who might need it, but it also provides an easy way to copy and paste or screenshot the report and add it to a slide for the weekly company/department meeting.

Here’s what the finished report looks like in Slack, and below, you’ll find a link to the GitHub Repository.

GitHub

To use this repository, follow the steps outlined in the README.

https://github.com/PaulieScanlon/smashing-weekly-analytics

Prerequisites

To build this workflow, you’ll need admin access to your Google Analytics and Slack Accounts and administrator privileges for GitHub Actions and Secrets for a GitHub repository.

Customizing the Report and Action

Naturally, all of the code can be changed to suit your requirements, and in the following sections, I’ll explain the areas you’ll likely want to take a look at.

Customizing the GitHub Action

The file name of the Action, weekly-analytics-report.yml, isn’t seen anywhere other than in the code/repo, but naturally, change it to whatever you like; you won’t break anything.

The name and jobs: names detailed below are seen in the GitHub UI and Workflow logs.

The cron syntax determines when the Action will run. Schedules use POSIX cron syntax, so by changing the numbers, you can control exactly when that happens.

You could also change the secrets variable names; just make sure you update them in your repository Settings.

# .github/workflows/weekly-analytics-report.yml

name: Weekly Analytics Report

on:
  schedule:
    - cron: '0 10 * * 5' # Runs every Friday at 10 AM UTC
  workflow_dispatch: # Allows manual triggering

jobs:
  analytics-report:
    runs-on: ubuntu-latest

    env:
      SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
      GA4_PROPERTY_ID: ${{ secrets.GA4_PROPERTY_ID }}
      GOOGLE_APPLICATION_CREDENTIALS_BASE64: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS_BASE64 }}

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20.x'

      - name: Install dependencies
        run: npm install

      - name: Run the JavaScript script
        run: node src/services/weekly-analytics.js

Customizing the Google Analytics Report

The Google Analytics API request I’m using is set to pull the fullPageUrl and pageTitle for the totalUsers in the last seven days, with a second request doing the same for the previous seven days. It then aggregates the totals and limits the responses to 10.

You can use Google’s GA4 Query Explorer to construct your own query, then replace the requests.

// src/services/weekly-analytics.js#L75

const [thisWeek] = await analyticsDataClient.runReport({
  property: `properties/${process.env.GA4_PROPERTY_ID}`,
  dateRanges: [
    {
      startDate: '7daysAgo',
      endDate: 'today',
    },
  ],
  dimensions: [
    {
      name: 'fullPageUrl',
    },
    {
      name: 'pageTitle',
    },
  ],
  metrics: [
    {
      name: 'totalUsers',
    },
  ],
  limit: reportLimit,
  metricAggregations: ['MAXIMUM'],
});
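The runReport call resolves to a response whose rows hold dimensionValues and metricValues arrays (each entry has a value property). The script flattens those rows into the {url, title, count} shape used below; here’s a minimal sketch of that step using a helper name of my own, not the repo’s:

// Hypothetical helper: flatten GA4 report rows into { url, title, count } objects.
const mapRows = (response) =>
  (response.rows || []).map((row) => ({
    url: row.dimensionValues[0].value, // fullPageUrl
    title: row.dimensionValues[1].value, // pageTitle
    count: row.metricValues[0].value, // totalUsers (returned as a string)
  }));

// e.g. const thisWeekResults = mapRows(thisWeek);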

Creating the Comparisons

There are two functions to determine which page views have increased, decreased, stayed the same, or are new.

The first is a simple reduce function that builds an object mapping each URL to its count.

const lastWeekMap = lastWeekResults.reduce((items, item) => {
  const { url, count } = item;
  items[url] = count;
  return items;
}, {});

The second maps over the results from this week and compares them to last week.

// Generate the report for this week
const report = thisWeekResults.map((item, index) => {
  const { url, title, count } = item;
  const lastWeekCount = lastWeekMap[url];
  const status = determineStatus(count, lastWeekCount);

  return {
    position: (index + 1).toString().padStart(2, '0'), // Format the position with a leading zero if it's less than 10
    url,
    title,
    count: { thisWeek: count, lastWeek: lastWeekCount || '0' }, // Ensure lastWeekCount is displayed as '0' if not found
    status,
  };
});

The final function is used to determine the status of each.

// Function to determine the status
const determineStatus = (count, lastWeekCount) => {
  const thisCount = Number(count);
  const previousCount = Number(lastWeekCount);

  if (lastWeekCount === undefined || lastWeekCount === '0') {
    return NEW;
  }

  if (thisCount > previousCount) {
    return HIGHER;
  }

  if (thisCount < previousCount) {
    return LOWER;
  }

  return SAME;
};

I’ve purposely left the code fairly verbose, so it’ll be easier for you to add console.log to each of the functions to see what they return.

Customizing the Slack Message

The Slack message config I’m using creates a heading with an emoji, a divider, and a paragraph explaining what the message is.

Below that, I’m using context blocks to construct the report by iterating over the comparisons and returning objects containing Slack-specific message syntax, which includes an icon, a count, the name of the page, and a link to each item.

You can use Slack’s Block Kit Builder to construct your own message format.

// src/services/weekly-analytics.js#L151

const slackList = report.map((item, index) => {
  const {
    position,
    url,
    title,
    count: { thisWeek, lastWeek },
    status,
  } = item;

  return {
    type: 'context',
    elements: [
      {
        type: 'image',
        image_url: `${reportConfig.url}/images/${status}`,
        alt_text: 'icon',
      },
      {
        type: 'mrkdwn',
        text: `${position}. <${url}|${title}> | *\`x${thisWeek}\`* / \`x${lastWeek}\``,
      },
    ],
  };
});
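The last step is to send those blocks to the webhook. Here’s a minimal sketch of that call, assuming Node 18+ (where fetch is built in) and an illustrative header block rather than the exact one from the repo:

// Post the finished report to the Slack incoming webhook.
await fetch(process.env.SLACK_WEBHOOK_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    blocks: [
      { type: 'header', text: { type: 'plain_text', text: 'Weekly Analytics Report' } },
      { type: 'divider' },
      ...slackList,
    ],
  }),
});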

Before you can run the GitHub Action, you will need to complete a number of Google, Slack, and GitHub steps.

Ready to get going?

Creating a Google Cloud Project

Head over to your Google Cloud console, and from the dropdown menu at the top of the screen, click Select a project, and when the modal opens up, click NEW PROJECT.

Project name

On the next screen, give your project a name and click CREATE. In my example, I’ve named the project smashing-weekly-analytics.

Enable APIs & Services

In this step, you’ll enable the Google Analytics Data API for your new project. From the left-hand sidebar, navigate to APIs & Services > Enable APIs & services. At the top of the screen, click + ENABLE APIS & SERVICES.

Enable Google Analytics Data API

Search for “Google analytics data API,” select it from the list, then click ENABLE.

Create Credentials for Google Analytics Data API

With the API enabled in your project, you can now create the required credentials. Click the CREATE CREDENTIALS button at the top right of the screen to set up a new Service account.

A Service account allows an “application” to interact with Google APIs, provided the credentials include the required services. In this example, the credentials grant access to the Google Analytics Data API.

Service Account Credentials Type

On the next screen, select Google Analytics Data API from the dropdown menu and Application data, then click NEXT.

Service Account Details

On the next screen, give your Service account a name, ID, and description (optional). Then click CREATE AND CONTINUE.

In my example, I’ve given my service account a name and ID of smashing-weekly-analytics and added a short description that explains what the service account does.

Service Account Role

On the next screen, select Owner for the Role, then click CONTINUE.

Service Account Done

You can leave the fields blank in this last step and click DONE when you’re ready.

Service Account Keys

From the left-hand navigation, select Service Accounts, then click the “more dots” to open the menu and select Manage keys.

Service Accounts Add Key

On the next screen, locate the KEYS tab at the top of the screen, then click ADD KEY and select Create new key.

Service Accounts Download Keys

On the next screen, select JSON as the key type, then click CREATE to download your Google Application credentials .json file.

Google Application Credentials

If you open the .json file in your code editor, you should be looking at something similar to the one below.

In case you’re wondering, no, you can’t use an object as a variable defined in an .env file. To use these credentials, it’s necessary to convert the whole file into a base64 string.

Note: I wrote a more detailed post about how to use Google Application credentials as environment variables here: “How to Use Google Application .json Credentials in Environment Variables.”

From your terminal, run the following, replacing name-of-creds-file.json with the name of your .json file.

cat name-of-creds-file.json | base64

If you’ve already cloned the repo and followed the Getting started steps in the README, take the base64 string returned by the command above and add it to the GOOGLE_APPLICATION_CREDENTIALS_BASE64 variable in your .env file, making sure you wrap the string in double quotation marks.

GOOGLE_APPLICATION_CREDENTIALS_BASE64="abc123"
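At runtime, the script can decode that string straight back into the credentials object. A minimal sketch of the idea, using a helper name of my own rather than the repo’s exact code:

// Hypothetical helper: decode the base64 string back into the credentials JSON object.
const getGoogleCredentials = () => {
  const base64 = process.env.GOOGLE_APPLICATION_CREDENTIALS_BASE64;
  const json = Buffer.from(base64, 'base64').toString('utf-8');
  return JSON.parse(json); // contains client_email, private_key, etc.
};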

That completes the Google project side of things. The next step is to add your service account email to your Google Analytics property and find your Google Analytics Property ID.

Google Analytics Properties

Whilst your service account now has access to the Google Analytics Data API, it doesn’t yet have access to your Google Analytics account.

Get Google Analytics Property ID

To make queries to the Google Analytics API, you’ll need to know your Property ID. You can find it by heading over to your Google Analytics account. Make sure you’re on the correct property (in the screenshot below, I’ve selected paulie.dev — GA4).

Click the admin cog in the bottom left-hand side of the screen, then click Property details.

On the next screen, you’ll see the PROPERTY ID in the top right corner. If you’ve already cloned the repo and followed the Getting started steps in the README, add the property ID value to the GA4_PROPERTY_ID variable in your .env file.

Add Client Email to Google Analytics

From the Google application credential .json file you downloaded earlier, locate the client_email and copy the email address.

In my example, it looks like this: smashing-weekly-analytics@smashing-weekly-analytics.iam.gserviceaccount.com.

Now navigate to Property access management from the left-hand side navigation, click the + in the top right-hand corner, then click Add users.

On the next screen, add the client_email to the Email addresses input, uncheck Notify new users by email, and select Viewer under Direct roles and data restrictions, then click Add.

That completes the Google Analytics properties section. Your “application” will use the Google application credentials containing the client_email and will now have access to your Google Analytics account via the Google Analytics Data API.

Slack Channels and Webhook

In the following steps, you’ll create a new Slack channel that will be used to post messages sent from your “application” using a Slack Webhook.

Creating The Slack Channel

Create a new channel in your Slack workspace. I’ve named mine #weekly-analytics-report. You’ll need to set this up before proceeding to the next step.

Creating a Slack App

Head over to the Slack API dashboard, and click Create an App.

On the next screen, select From an app manifest.

On the next screen, select your Slack workspace, then click Next.

On this screen, you can give your app a name. In my example, I’ve named my Weekly Analytics Report. Click Next when you’re ready.

On step 3, you can just click Done.

With the App created, you can now set up a Webhook.

Creating a Slack Webhook

Navigate to Incoming Webhooks from the left-hand navigation, then switch the toggle to On to activate incoming webhooks. Then, at the bottom of the screen, click Add New Webhook to Workspace.

On the next screen, select your Slack workspace and the channel that you’d like to post messages to, then click Allow.

You should now see your new Slack Webhook with a copy button. Copy the Webhook URL, and if you’ve already cloned the repo and followed the Getting started steps in the README, add the Webhook URL to the SLACK_WEBHOOK_URL variable in your .env file.

Slack App Configuration

From the left-hand navigation, select Basic Information. On this screen, you can customize your app and add an icon and description. Be sure to click Save Changes when you’re done.

If you now head over to your Slack, you should see that your app has been added to your workspace.

That completes the Slack section of this article. It’s now time to add your environment variables to GitHub Secrets and run the workflow.

Add GitHub Secrets

Head over to the Settings tab of your GitHub repository, then from the left-hand navigation, select Secrets and variables, then click Actions.

Add the three variables from your .env file under Repository secrets.

A note on the base64 string: You won’t need to include the double quotes!

Run Workflow

To test if your Action is working correctly, head over to the Actions tab of your GitHub repository, select the Job name (Weekly Analytics Report), then click Run workflow.

If everything worked correctly, you should now be looking at a nicely formatted list of the top ten page views on your site in Slack.

Finished

And that’s it! A fully automated Google Analytics report that posts directly to your Slack. I’ve worked in a few places where Google Analytics data was on lockdown, and I think this approach to sharing Analytics data with Slack (something everyone has access to) could be super valuable for various people in your organization.