I am trying to print out photos from an old Windows 98 computer (Dell Inspiron 2400) to a brand-new HP 8025e printer. Our store has been operating a photo booth on this computer, so I don't think I can port it to a newer version of Windows.
We had been using an old HP Business Inkjet 2200, which did the job and had native support under W98, but it finally broke and we could not obtain another.
Three questions:
"Properties" under W98 only offers 300dpi, I need better resolution
I did set the printer console to 'photo paper', but when I print the photo on photo paper, it just comes out very overdone and smeary, as if it's using 10 times too much ink. W98 properties doesn't offer a 'photo paper' option
The photos printed somewhat smaller on the old HP (maybe 2 1/2" by 2") whereas they print larger (maybe 4 x 3) on the new 8025, and I need them smaller so all 4 fit in a line on a 3 1/2 x 11 strip of photo paper.
I don't know if this is solvable (HP support just told me there is no support for W98), but I thought I'd give it a try.
Legacy modernization aims to meet an organization's current business needs by enhancing business agility with new functionality and appealing features, strengthening customer service, and increasing efficiency.
However, modernizing a legacy system is not child's play: nearly 74% of enterprises fail to complete legacy modernization projects, often due to a disconnect between technical and leadership teams.
A little thing happened on the way to publishing the CSS :has() selector to the ol’ Almanac. I had originally described :has() as a “forgiving” selector, the idea being that anything in its argument is evaluated, even if one or more of the items is invalid.
/* Example: Do not use! */
article:has(h2, ul, ::-scoobydoo) { }
See ::-scoobydoo in there? That’s totally invalid. A forgiving selector list ignores that bogus selector and proceeds to evaluate the rest of the items as if it were written like this:
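article:has(h2, ul) { }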
But the spec has since changed: :has() now uses an unforgiving selector list. So, our previous example? The entire selector list is invalid because the bogus selector is invalid. The other two forgiving selectors, :is() and :where(), are left unchanged.
There’s a bit of a workaround for this. Remember, :is() and :where() are forgiving, even if :has() is not. That means we can nest either of those selectors in :has() to get more forgiving behavior:
article:has(:where(h2, ul, ::-scoobydoo)) { }
Which one you use might matter because the specificity of :is() is determined by the most specific item in its list. So, if you need something less specific, you’d do better reaching for :where(), since it does not add to the specificity score.
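For a quick sketch of the difference (the #hero id is an invented example): the :is() version carries id-level specificity because of its most specific item, while the :where() version only counts the article part:

/* specificity (1,0,1): article plus the #hero inside :is() */
article:has(:is(h2, #hero)) { }

/* specificity (0,0,1): :where() contributes nothing */
article:has(:where(h2, #hero)) { }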
We updated a few of our posts to reflect the latest info. I’m seeing plenty of others in the wild that need to be updated, so just a little PSA for anyone who needs to do the same.
Data volumes are more than tripling every year across the industry, which is making data quality a big concern in many organizations. The information is in high demand, but organizing it in a way that makes it consumable is a challenge many are trying to solve.
Every organization has its own set of database systems and platforms for storing various types of data in the data warehouse. The stored or ingested data is maintained, but most organizations are behind the curve in creating organization-wide solutions to manage that data.
In recent years, the topic of AI democratization has gained a lot of attention. But what does it really mean, and why is it important? And most importantly, how can we make sure that the democratization of AI is safe and responsible? In this article, we'll explore the concept of AI democratization, how it has evolved, and why it's crucial to closely monitor and manage its use to ensure that it is safe and responsible.
What AI Democratization Used to Be
In the past, AI democratization was primarily associated with "Auto ML" companies and tools. These promised to allow anyone, regardless of their technical knowledge, to build their own AI models. While this may have seemed like a democratization of AI, the reality was that these tools often produced mediocre results at best. Most companies realized that to truly derive value from AI, they needed teams of knowledgeable professionals who understood how to build and optimize models.
SmartCrawl version 3.4 adds multiple keyword analysis, additional SEO recommendations, the ability to disable SEO & Readability Analysis in the post list, and more. For free.
SmartCrawl has been SEO optimized from the start, but each new version further improves site performance while boosting your PageRank on Google.
With automated SEO scanning, automatic XML sitemaps, real-time keyword and content analysis, and detailed audits/reports – not to mention one-click recommendations – SmartCrawl lets you create targeted content that ranks at the top of your favorite search engine.
In this post, we’re going to take a closer look at the latest features added to version 3.4, and why they make SmartCrawl even better.
SmartCrawl has had keyword analysis for a while now. It also previously allowed multiple key phrases to be added, but analysis was only done on the first one.
Now, you can analyze your post content for up to three different focus keywords (or phrases). The first keyword entered will be considered primary, while the second and third keywords will be analyzed as secondary.
Doing this is easy. First of all, let’s make sure analysis is turned on. Navigate to SmartCrawl > Settings > General Settings > In-Post Analysis > Visibility, and make sure Page Analysis is toggled on (it will turn blue), then click the Save Changes button at the bottom of the page.
Now, open any Page or Post, and scroll to the SmartCrawl section at the bottom. In the Add Keywords field, enter up to three keywords or phrases, separating each by a comma, then click on the Add Keyword button. (You can enter them individually or all at once.)
SmartCrawl will instantly analyze all of your keywords, showing results directly below them.
Clicking on any of the keywords will take you to its own tab, with details listed beneath.
For each focus keyword, SmartCrawl will give you a list of recommendations to improve the SEO of your post. Suggestions will be made in yellow and gray, while passed audits will be green.
Click on the dropdown arrow to the right of any recommendation to see details specific to it.
If for any reason you decide a certain recommendation isn’t needed, simply click the Ignore button beneath it, and it will stop appearing every time you run the analysis.
As you go through making content adjustments based on SmartCrawl’s recommendations, follow them up with a click of the Refresh button (at the top of the SEO section) so you can reanalyze and see what improvements your changes made.
Taxonomy List Status Column
You’ll also find a handy SEO Status column on Category & Taxonomy pages, providing the SEO status for all of your taxonomies.
It’s just a quick way to indicate whether an SEO description has been set, and remind users to craft good SEO descriptions so they do well in search results.
Green check marks mean the SEO description is set and contains the recommended 120-160 characters. Red means a description is missing. Yellow means the description provided is either too long or too short.
You can also hover over any icon in the SEO Status column for a popup with more detailed information.
A Quad of Additional SEO Recommendations
SmartCrawl suggests In-Post SEO Recommendations for every focus keyword that your post content has been analyzed for.
Each of these can be clicked to expand, providing additional information about how to improve your post SEO.
The list of important recommendations in SmartCrawl was already significant, but we added four more in this version release.
1. Check if the URL contains underscores
Google recommends the use of hyphens over underscores in URLs, stating that hyphens make crawling and interpreting URLs easier for search engines.
2. Check for a hand-crafted meta description
Using best practices for meta descriptions increases the likelihood of your content ranking higher in SERPs. That includes hand-crafting your meta description using relevant information about the page content, instead of using the auto-generated one.
3. Primary focus keyword is already used on another post/page
Optimizing more than one post for the same focus keyword confuses search engines and can affect your SEO ranking. SmartCrawl will check to see if your Primary Focus Keyword is used in other Posts/Pages, and then list the 10 most recent ones.
4. Check if all external links are nofollow links
Relevant outbound site links help search engines determine the relevance and quality of your content, improving credibility, authority, and value to users. While having some nofollow links is okay, best practice is to have at least one external dofollow link on your site, so SmartCrawl will check for this.
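For reference, the difference between the two is just a rel attribute on the anchor (example.com is a placeholder here):

<!-- nofollow: asks search engines not to pass ranking credit through this link -->
<a href="https://example.com" rel="nofollow">A nofollow link</a>

<!-- dofollow is simply the default; no rel attribute needed -->
<a href="https://example.com">A regular (dofollow) link</a>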
Disable SEO & Readability Analysis Status
Posts and Pages in SmartCrawl are analyzed one at a time by default, in order to prevent excessive loads on the server.
In the newest version, you now have the ability to completely disable these checks if you prefer. To do so, navigate to SmartCrawl > Settings > General Settings > In-Post Analysis, and toggle the Disable Page Analysis Check on Pages/Posts Screen on (it will turn blue).
If you change this setting, be sure to click the Save Changes button at the bottom of the page.
The SEO Do-all, Be-all, End-all, SmartCrawl
SmartCrawl is built with ease of use in mind. Setup is a cinch, with one-click recommendations that can improve your PageRank in minutes, each full of details so you can better understand and act on them.
Now with the newest features, like analyzing multiple keywords at once, even more recommendations that benefit your post SEO, and improved readability analysis, using SmartCrawl on your WordPress site is a win-win-win.
Sign up for a WPMU DEV free membership to take a test run with us. In addition to SmartCrawl, you’ll get Smush and Hummingbird – our two most highly rated (and awarded) plugins for image and performance optimizations – as well as the rest of our popular free plugins.
If you want to up the ante even more, we recommend going with one of our Premium Memberships, which include SmartCrawl Pro (plus the rest of our Pro plugins), along with our exclusive, feature-packed Hub client portal, blazing-fast CDN, and our 24/7/365 five-star support. SmartCrawl Pro adds features like scanning, reports, automatic linking for specific keywords, 404s and multiple redirects.
You can also Host with us, and join the tens of thousands of satisfied WordPressers who see the difference our fully dedicated, fully optimized, and lightning-fast resources make.
However you go, SmartCrawl your way to the top of the search game.
Many weeks ago, when I tried to exit a parking lot, I was—once again—battling with technology as I tried to pay the parking fee. I opened an app and used it to scan the QR payment code on the wall, but it just wouldn’t recognize the code because it was too far away. Thankfully, a parking lot attendant came out to help me complete the payment, sparing me from the embarrassment of the cars behind me beeping their horns in frustration. This made me want to create a QR code scanning app that could save me from such future pain.
The first demo app I created was, truth be told, a failure. First, the distance between my phone and a QR code had to be within 30 cm; otherwise, the app would fail to recognize the code. However, in most cases, this distance is not practical in a parking lot.
Knowing the full range of CSS selectors available when writing modern CSS is crucial to using the language to the fullest. In the past 10 years or so, the official specification has added a number of new selectors, and many of them have strong support across all modern browsers.
In this article, I will walk you through setting up an OpenShift Container Platform 4.11 cluster on Linux. I will then create a namespace and a deployment on the OCP cluster.
Prerequisite
Register and create a Red Hat account using this link: Redhat User Registration Page. Once the account has been created, log in to the Red Hat developer portal using this link: OCP Cluster Page.
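Once the cluster is reachable, the namespace and deployment can be created from the oc CLI. A minimal sketch, assuming a placeholder project name and a public nginx image:

# log in with the token copied from the cluster web console
oc login --token=<your-token> --server=https://api.<cluster-domain>:6443

# create a namespace (OpenShift calls it a project)
oc new-project demo-project

# create a deployment and scale it to two replicas
oc create deployment demo-app --image=nginx
oc scale deployment/demo-app --replicas=2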
Today I'm going to show you all the things to consider when building the perfect HTML input. Despite its seemingly simple nature, there's actually a lot that goes into it.
How To Make the Control
Well, we need to start somewhere. Might as well start with the control itself.
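As a baseline sketch (the field names here are arbitrary), a well-formed control pairs the input with a label and gives the browser enough hints to assist the user:

<!-- the for/id pair makes the label clickable and exposes it to screen readers -->
<label for="email">Email</label>
<input id="email" name="email" type="email" autocomplete="email" required>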
We’ve zoomed right into 2023, so let’s keep up the pace and talk about making the web speedy.
– Chris Coyier, professional newsletter writer.
Uhm, but anyway, I do have some choice web performance links I’ve been saving up for you this week.
Turns out HTML is good.
Perhaps my favorite performance-related post of 2022 was Scott Jehl’s Will Serving Real HTML Content Make A Website Faster? Let’s Experiment! It makes logical sense that a website that serves all the HTML it needs to render upfront will load faster than one that loads an HTML shell, then loads JavaScript, which makes subsequent requests for data that is turned into HTML and then rendered. But it’s nice to see it compared apples to apples. Scott took websites that serve no “useful HTML” up front and used a new feature of WebPageTest called Opportunities & Experiments to re-run a performance test, but with manipulations in place (namely, “useful HTML” instead of the shell HTML).
The result is just a massive improvement in how fast the useful content renders. Server-side rendering (SSR) is just… better.
As always, Scott has a classy caveat:
Now, it’s very important to note that while the examples in this post helpfully display this pattern, the architectural decisions of these sites are made thoughtfully by highly-skilled teams. Web development involves tradeoffs, and in some cases a team may deem the initial performance impact of JS-dependence a worthy compromise for benefits they get in other areas, such as personalized content, server costs and simplicity, and even performance in long-lived sessions. But tradeoffs aside, it’s reasonable to suspect that if these sites were able to deliver meaningful HTML up-front, a browser would be able to render initial content sooner and their users would see improvements as a result.
I suspect it’ll be rarer and rarer to see sites that are 100% client rendered. The best developer tooling we have includes SSR these days, so let’s use it.
Fortunately, there are all kinds of tools that point us in this direction anyway. Heavyweights like Next.js, which helps developers build sites in React, are SSR by default. That’s huge. And you can still fetch data with their getServerSideProps concept, retaining the dynamic nature of client-side rendering. Newer tech like Astro is all-in on producing HTML from JavaScript frameworks while helping you do all the dynamic things you’d want, either by running the dynamic stuff server-side or delaying client-side JavaScript until needed.
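To make that concrete, here is a minimal sketch of the getServerSideProps pattern; the API endpoint and data fields are invented for illustration. The fetch runs on the server, so the visitor gets useful HTML in the first response:

// pages/posts.js -- runs on the server for every request
export async function getServerSideProps() {
  const res = await fetch('https://api.example.com/posts'); // hypothetical API
  const posts = await res.json();
  return { props: { posts } }; // handed to the component below at render time
}

export default function Posts({ posts }) {
  return <ul>{posts.map((p) => <li key={p.id}>{p.title}</li>)}</ul>;
}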
So if your brain goes “well, my app needs to make API requests in order to render,” well, now you have your answer. There are all kinds of tools to help you make those API requests server-side. Myself, I’m a fan of making edge servers do those requests for you. Any request the client can make, any other server can make too, only almost certainly faster. And if that allows you to dunk the HTML in the first response, you should.
It’s all about putting yourself in that HTML-First Mental Model, as Noam Rosenthal says. Letting tools do that is a great start, but, brace yourself, not using JavaScript at all is the best possible option. I really like the example Noam puts in the article here. JavaScript libraries have taught us to do stuff like checking to see if we have data and conditionally rendering an empty state if not. But that requires manipulation of the DOM as data changes. That kind of “state” manipulation can be done in CSS as well, by hiding/showing things already in the DOM with the display property. Especially now with :has() (last week’s CodePen Challenge, by the way), this kind of state checking is getting trivial to do.
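A tiny sketch of the idea, with invented class names and markup (a .results list followed by an .empty-state message): the message lives in the DOM all along, and CSS alone decides whether it shows:

/* hide the "no results" message whenever the list actually has items */
.results:has(li) + .empty-state { display: none; }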
Harry Roberts digs into a somewhat old JavaScript loading pattern in Speeding Up Async Snippets. Have you ever seen this?
<script>
var script = document.createElement('script');
script.src = 'https://third-party.io/bundle.min.js';
document.head.appendChild(script);
</script>
We’re over a decade past the point where major browsers needed that pattern, and worse, it’s bad for performance, the very thing it’s trying to help with:
For all the resulting script is asynchronous, the <script> block that creates it is fully synchronous, which means that the discovery of the script is governed by any and all synchronous work that happens before it, whether that’s other synchronous JS, HTML, or even CSS. Effectively, we’ve hidden the file from the browser until the very last moment, which means we’re completely failing to take advantage of one of the browser’s most elegant internals… the Preload Scanner.
As Harry says:
…we can literally just swap this out for the following in the same location or later in your HTML:
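<script src="https://third-party.io/bundle.min.js" async></script>

Now the file sits right there in the markup for the preload scanner to discover early, and the async attribute keeps it from blocking parsing.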
Gotta preserve that aspect ratio on images before loading.
It’s worth shouting from the rooftops: put width and height attributes on your <img> tags in HTML. This allows the browser to reserve space for them while they load and prevents content from jerking around. This is a meaningful performance benefit.
It’s really just the aspect ratio of those two numbers that matters. So even if you won’t actually display the image at the size indicated by the attributes (99% chance you won’t, because you’ll restrict the maximum width of the image), the browser will still reserve the correct amount of space.
You can achieve the same effect with aspect-ratio in CSS, but in practice, we’re talking about content <img>s here, which are usually of arbitrary size, so the correct place for this information is in the HTML. There is a little nuance to it as you might imagine, which Jake covers well.
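Putting that together, a small sketch (the file name and dimensions are placeholders): the attributes encode the 16:9 ratio, and a common bit of CSS keeps the image fluid:

<!-- 1600x900 encodes the ratio; the browser reserves layout space from it -->
<img src="photo.jpg" width="1600" height="900" alt="A scenic landscape">

/* let the image shrink while its height tracks the attribute ratio */
img { max-width: 100%; height: auto; }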
JPEG XL, we hardly knew thee.
Speaking of images, it looks like Chrome threw in the towel on supporting the JPEG XL format, for now. It was behind a flag to begin with, so there was no real ecosystem harm. They essentially said: flags are flags, they aren’t meant to live forever; we would have shipped it, but it’s just not good enough to warrant the maintenance, and besides, no other browser was that into it either.
That actually sounds pretty reasonable of Chrome at first glance. But Jon Sneyers has a full-throated response in The Case for JPEG XL to everything aside from the “flags shouldn’t last forever” point. I’d consider these things pretty darn excellent:
JPEG XL can be used to re-encode any existing JPEG entirely losslessly and have it be 20% smaller on average. There are a boatload of JPEGs on the internet, so this seems big.
JPEG XL can do the progressive-loading thing, where once 15% of the data has arrived, it can show a low-res version of itself. The popularity of the “blur up” technique proves this is desirable.
JPEG XL is fast to encode. This seems significant because the latest hot new image format, AVIF, is quite the opposite.
So even if JPEG XL weren’t much of a leap forward in compressed size, it would still seem worth supporting. But Jon is saying that “in the quality range relevant to the web,” JPEG XL is 10-15% better than even AVIF.
Of course, Cloudinary is incentivized to push for new formats, because part of its business model is helping people take advantage of new image formats.
Do you want to customize your Chromebook wallpaper? Always choose a high-quality wallpaper that enhances the appearance and style of your Chromebook. If you are not sure where to start, check out some incredible wallpapers for Chromebook, which we have shared below. By taking the time to research and find a […]
Have you ever found yourself searching for “what should I draw?” If you have been doing line work, realistic portrait sketches, or urban sketching and would like to try something different, you are in the right place. Get your creative juices flowing with this list of easy drawing ideas for beginners and experts. The only […]
I will run a book giveaway promotion on CodeRanch on January 17th. Be sure to be there, and let your friends know. It would be great to answer your questions about debugging. I'm very excited by this and by the feedback I'm getting for the course and the new videos.
I also launched a free new Java course for complete beginners. No prior knowledge is needed. This is probably not the audience for that course, but if you know someone who might be interested, I'd appreciate a share. I hope people find it useful and learn a bit about Java. I'm aiming for a cadence of one video per week in this beginner course.
Doodle art is a beautiful way to explore your creativity. Doodling can be a great way to improve drawing skills, quiet the mind, and help you focus on whatever you have to do. There is no need to spend a lot of time on it; paper and pencil are all you need. Here […]
Mixed content is a mix of secure and non-secure resources on a webpage. When a page loads images, CSS, and other resources over an insecure connection, the result is mixed content. Your resources...
This has been a spectacularly intensive week. The new YouTube channel carrying the course is exploding with subscriptions, and it's just entering its third week. The course website is now live. You can see the entire course there, although I'm adding videos all the time and have done roughly 1/3 of the work. Right now, it has 3 hours and 17 minutes of content. I have another hour of content ready to add, and I'm working on several more. I'm pretty sure the course will be well over 6 hours when completed.
Live from Interact, we're bringing you an interview with our favorite CTO, Charity Majors.
Never one to be shy about speaking her mind, Charity is an outspoken advocate for devs everywhere — and this passion made her a fan favorite at Interact.