Note that <html> and <body> auto-close and don't need to be terminated.
Also, wrapping the <head> tags in an actual <head></head> is optional.
You also don't need the quotes as long the attribute doesn't have spaces or the like; <html lang=en> is OK.
(kind of pointless as the average website fetches a bazillion bytes of javascript for every page load nowadays, but sometimes slimming things down as much as possible can be fun and satisfying)
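For the curious, a minimal sketch of a complete document that leans on all of those rules (title and text are just placeholders); it still parses in standards mode and validates:

<!doctype html>
<html lang=en>
<meta charset=utf-8>
<title>Hello</title>
<p>No head, body, closing tags, or attribute quotes required.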
This kind of thing will always just feel shoddy to me. It is not much work to properly close a tag. The number of bytes saved is negligible compared to basically any other aspect of a website. Avoiding unneeded div spam would already save more. Or, for example, making sure CSS is not bloated. And of course avoiding downloading 3MB of JS.
What this achieves is making the syntax more irregular and harder to parse. I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient. It would greatly simplify browser code and HTML spec.
Implicit elements and end tags have been a part of HTML since the very beginning. They introduce zero ambiguity to the language, they’re very widely used, and any parser incapable of handling them violates the spec and would be incapable of handling piles of real‐world strict, standards‐compliant HTML.
> I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient.
Well, for machines parsing it, yes, but for humans writing and reading it the tolerances are helpful. For example, if you have
<p> foo
<p> bar
and change it to
<div> foo
<div> bar
suddenly you've got unintentionally nested divs, with no error to tell you about it.
The "redundancy" of closing the tags acts basically like a checksum protecting against the "background radiation" of human editing.
And if you're writing raw HTML without an editor that can autocomplete the closing tags then you're doing it wrong anyway.
Yes that used to be common before and yes it's a useful backwards compatibility / newbie friendly feature for the language, but that doesn't mean you should use it if you know what you're doing.
It sounds like you're headed towards XHTML. The rise and fall of XHTML is well documented and you can binge the whole thing if you're so inclined.
But my summarization is that the reason it doesn't work is that strict document specs are too strict for humans. And at a time when there was legitimate browser competition, the one that made a "best effort" to render invalid content was the winner.
The merits and drawbacks of XHTML has already been discussed elsewhere in the thread and I am well aware of it.
> And at a time when there was legitimate browser competition, the one that made a "best effort" to render invalid content was the winner.
Yes, my point is that there is no reason to still write "invalid" code just because it's supported for backwards compatibility reasons. It sounds like you ignored 90% of my comment, or perhaps you replied to the wrong guy?
I'm a stickling pedant for HTML validity, but close tags on <p> and <li> are optional by spec. Close tags for <br>, <img>, and <hr> are prohibited. XML-like self-closing trailing slashes explicitly have no meaning in HTML.
Close tags for <script> are required. But if people start treating it like XML, they write <script src="…" />. And that fails, because the script element requires closure, and that slash has no meaning in HTML.
I think validity matters, but you have to measure validity according to the actual spec, not what you wish it was, or should have been. There's no substitute for actually knowing the real rules.
Are you misunderstanding on purpose? I am aware they are optional. I am arguing that there is no reason to omit them from your HTML. Whitespace is (mostly) optional in C, does that mean it's a good idea to omit it from your programs? Of course a br tag needs no closing tag because there is no content inside it. How exactly is that an argument for omitting the closing p tag? The XML standard has no relevance to the current discussion because I'm not arguing for "starting to treat it like XML".
I'm beginning to think I'm misunderstanding, but it's not on purpose.
Including closing tags as a general rule might make readers think that they can rely on their presence. Also, in some cases they are prohibited. So you can't achieve a simple evenly applied rule anyway.
I didn't have a problem with XHTML back in the day; it took a while to unlearn it; I would instinctively close those tags: <br/>, etc.
It was actually the XHTML 2.0 specification [1], which discarded backwards compatibility with HTML 4, that was the straw that broke the camel's back. No more forms as we knew them, for example; we were supposed to use XForms.
That's when WHATWG was formed and broke with the W3C and created HTML5.
I mean, I am obviously talking about a fictive scenario, a somewhat better timeline/universe. In such a scenario, the shoddy practice of not properly closing tags and leaning on lenient browser parsing and sophisticated fallbacks would never have taken hold, and the many websites that are only valid thanks to that leniency would mostly not have been created, because as soon as someone tried to create them, the browsers would have told them no. Then those people would revise their code and end up with clean, easier-to-parse documents, and we wouldn't have all these edge and special cases in our standards.
Also obviously that's unfortunately not the case today in our real world. Doesn't mean I cannot wish things were different.
> It would greatly simplify browser code and HTML spec.
I doubt it would make a dent - e.g. in the "skipping <head>" case, you'd be replacing the error recovery mechanism of "jump to the next insertion mode" with "display an error", but a) you'd still need the code path to handle it, b) now you're in the business of producing good error messages, which is notoriously difficult.
Something that would actually make the parser a lot simpler is removing document.write, which has been obsolete ever since the introduction of the DOM and whose main remaining real-world use case seems to be ad delivery. (If it's not clear why this would help, consider that document.write can write scripts that call document.write, etc.)
oh man, I wish XHTML had won the war. But so many people (and CMSes) were creating dodgy markup that simply rendered yellow screens of doom, that no-one wanted it :(
i'm glad it never caught on. the case sensitivity (especially for css), having to remember the xmlns namespace URI in the root element, CDATA sections for inline scripts, and insane ideas from companies about extending it further with more xml namespaced elements... it was madness.
It had too much unnecessary metadata yes, but case insensitivity is always the wrong way to do stuff in programming (e.g. case insensitive file system paths). The only reason you'd want it is for real-world stuff like person names and addresses etc. There's no reason you'd mix the case of your CSS classes anyway, and if you want that, why not also automatically match camelCase with snake_case with kebab-case?
The fact XHTML didn't gain traction is a mistake we've been paying off for decades.
Browser engines could've been simpler; web development tools could've been more robust and powerful much earlier; we would be able to rely on XSLT and invent other ways of processing and consuming web content; we would have proper XHTML modules, instead of the half-baked Web Components we have today. Etc.
Instead, we got standards built on poorly specified conventions, and we still have to rely on 3rd-party frameworks to build anything beyond a toy web site.
Stricter web documents wouldn't have fixed all our problems, but they would have certainly made a big impact for the better.
And I'd add:
Yes, there were some initial usability quirks, but those could've been ironed out over time. Trading the potential of a strict markup standard for what we have today was a colossal mistake.
There's no way it could have gained traction. Consider two browsers. One follows the spec explicitly, and one goes into "best-effort" mode on encountering invalid markup. End users aren't going to care about the philosophical reasoning for why Browser A doesn't show them their school dance recital schedule.
Consider JSON and CSV. Both have formal specs. But in the wild, most parsers are more lenient than the spec.
Yeah this is it. We can debate what would be nicer theoretically until the cows come home but there's a kind of real world game theory that leads to browsers doing their best to parse all kinds of slop as well as they can, and then subsequently removing the incentive for developers and tooling to produce byte perfect output
Yeah, I remember, when I was at school and first learning HTML and this kind of stuff. When I stumbled upon XHTML, I right away adapted my approach to verify my page as valid XHTML. Guess I was always on this side of things. Maybe machine empathy? Or also human empathy, because someone needs to write those parsers and the logic to process this stuff.
I agree for sure, but that's a problem with the spec, not the website. If there are multiple ways of doing something you might as well do the minimal one. The parser will always have to handle all the edge cases no matter what anyway.
You might want to always consistently terminate all tags and such for aesthetic or human-centered (reduced cognitive load, easier scanning) reasons though; I'd accept that.
<html>, <head> and <body> start and end tags are all optional. In practice, you shouldn’t omit the <html> start tag because of the lang attribute, but the others never need any attributes. (If you’re putting attributes or classes on the body element, consider whether the html element is more appropriate.) It’s a long time since I wrote <head>, </head>, <body>, </body> or </html>.
`<thead>` and `<tfoot>`, too, if they're needed. I try to use all the free stuff that HTML gives you without needing to reach for JS. It's a surprising amount. Coupled with CSS and you can get pretty far without needing anything. Even just having `<template>` with minimal JS enables a ton of 'interactivity'.
It's time for an "en-INTL" (or similar) for international english, that is mostly "en-US", but implies a US-International keyboard and removes americanisms, like Logical Punctuation in quotes [1]. Then AI can start writing for a wider and much larger public (and can also default to regular ISO units instead of imperial baby food).
Additionally, it's kind of crazy that we are not able to write any language with any keyboard, as nowadays we just don't know which language the person sitting behind the keyboard needs.
From what I can tell this allows some screen readers to select specific accents. Also the browser can select the appropriate spell checker (US English vs British English).
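A small illustrative sketch (the element-level lang override is the part screen readers and spell checkers can pick up on):

<html lang="en-GB">
...
<p>She greeted us with a cheerful <span lang="fr">bonjour</span>.</p>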
I appreciate this post! I was hoping you would add an inline CSS style sheet to take care of the broken defaults. I only remember one off the top of my head, the rule for monospace font size. You need something like:
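(If memory serves, it's the well-known double-monospace trick, something along these lines:)

<style>
  code, kbd, samp, pre {
    font-family: monospace, monospace; /* repeating "monospace" defeats the browser's shrunken default */
    font-size: 1em;                    /* match the surrounding text size */
  }
</style>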
But I vaguely remember there are other broken CSS defaults for links, img tags, and other stuff. An HTML 5 boilerplate guide should include that too, but I don't know of any that do.
Don’t need the “.0”. In fact, the atrocious incomplete spec of this stuff <https://www.w3.org/TR/css-viewport-1/> specifies using strtod to parse the number, which is locale dependent, so in theory on a locale that uses a different decimal separator (e.g. French), the “.0” will be ignored.
I have yet to test whether <meta name="viewport" content="width=device-width,initial-scale=1.5"> misbehaves (parsing as 1 instead of 1½) with LC_NUMERIC=fr_FR.UTF-8 on any user agents.
Not to mention the functions are also translated to the other language. I think both these are the fault of Excel to be honest. I had this problem long before Google came around.
And it's really irritating when you have the computer read something out to you that contains numbers. 53.1 km reads like you expect but 53,1 km becomes "fifty-three (long pause) one kilometer".
> Not to mention the functions are also translated to the other language.
This makes a lot of sense when you recognize that Excel formulas, unlike proper programming languages, aren't necessarily written by people with a sufficient grasp of the English language, especially when it comes to more abstract mathematical concepts, which aren't taught in secondary English language classes at school, but in their native-language mathematics classes.
Not sure if this still is the case, but Excel used to fail to open CSV files correctly if the locale used another list separator than ',' – for example ';'.
Sometimes you double-click and it seems to open everything just fine, while silently corrupting, changing, and dropping data without warning or notification, and giving you no way to prevent it.
The day I found out that IntelliJ has a built-in tabular CSV editor and viewer was the best day.
Given that the world is about evenly split on the decimal separator [0] (and correspondingly on the thousands grouping separator), it’s hard to avoid. You could standardize on “;” as the argument separator, but “1,000” would still remain ambiguous.
aha, in Microsoft Excel they translate even the shortcuts. The Brazilian version Ctrl-s is "Underline" instead of "Save". Every sheet of mine ends with a lot of underlined cells :-)
The behaviour predates Google Sheets and likely comes from Excel (whose behavior Sheets emulate/reverse engineer in many places). And I wouldn't be surprised if Excel got it from Lotus.
Quirks aside, there are other ways to tame old markup.
If a site won't update itself, you can use a user stylesheet or extension to fix things like font sizes and colors without waiting for the maintainer.
But for scripts that rely on CSS behaviors there is a simple check: test document.compatMode and bail when it's not what you expect (a sketch follows below). Sometimes adding a wrapper element and extracting the contents with a Range keeps the page intact.
Also, adding semantic elements and ARIA roles goes a long way for accessibility. It costs little and helps screen readers navigate.
Would love to see more community hacks that improve usability without rewriting the whole thing.
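For instance, a minimal sketch of that compatMode check for a userscript (names are just illustrative):

// "CSS1Compat" means standards mode; "BackCompat" means quirks mode.
const inStandardsMode = document.compatMode === "CSS1Compat";
if (!inStandardsMode) {
  // bail: skip any logic that depends on standards-mode CSS behavior
}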
> <!doctype html> is what you want for consistent rendering. Or <!DOCTYPE HTML> if you prefer writing markup like it’s 1998. Or even <!doCTypE HTml> if you eschew all societal norms. It’s case-insensitive so they’ll all work.
I tend to lower-case all my HTML because it has less entropy and therefore can be compressed more effectively.
But with modern compression algorithms, some come with a pre-defined dictionary for websites. These usually contain the common stuff like <!DOCTYPE html> in its most-used form, so doing it like everybody else might make the compression even more effective.
I still don’t understand what people think they’re accomplishing with the lang attribute. It’s trivial to determine the language, and in the cases where it isn’t, it’s not trivial for the reader, either.
It states the cargo culted reasons, but not the actual truth.
1) Pronunciation is either solved by a) automatic language detection, or b) doesn't matter. If I am reading a book, and I see text in a language I recognize, I will pronounce it correctly, just like the screen reader will. If I see text in a language I don't recognize, I won't pronounce it correctly, and neither will the screen reader. There's no benefit to my screen reader pronouncing Hungarian correctly to me, a person who doesn't speak Hungarian. On the off chance that the screen reader gets it wrong, even though I do speak Hungarian, I can certainly tell that I'm hearing English-pronounced Hungarian. But there's no reason that the screen reader will get it wrong, because "Mit csináljunk, hogy boldogok legyünk?" isn't ambiguous. It's just simply Hungarian, and if I have a Hungarian screen reader installed, it's trivial to figure that out.
2) Again, if you can translate it, you already know what language it is in. If you don't know what language it is in, then you can't read it from a book, either.
3) See above. Locale is mildly useful, but the example linked in the article was strictly language, and spell checking will either a) fail, in the case of en-US/en-UK, or b) be obvious, in the case of 1) above.
Your whole comment assumes language identification is both trivial and fail-safe. It is neither and it can get worse if you consider e.g. cases where the page has different elements in different languages, different languages that are similar.
Even if language identification was very simple, you're still putting the burden on the user's tools to identify something the writer already knew.
I spent about half an hour trying to figure out why some JSON in my browser was rendering è incorrectly, despite the output code and downloaded files being seemingly perfect. I came to the conclusion that the browsers (Safari and Chrome) don't use UTF-8 as the default encoding for everything and moved on.
Another funny thing here is that they say “but not limited to” (the listed encodings), but then say “must not support other encodings” (than the listed ones).
> the encodings defined in Encoding, including, but not limited to
where "Encoding" refers to https://encoding.spec.whatwg.org (probably that should be a link.) So it just means "the other spec defines at least these, but maybe others too." (e.g. EUC-JP is included in Encoding but not listed in HTML.)
When sharing this post on his social media accounts, Jim prefixed the link with: 'Sometimes its cathartic to just blog about really basic, (probably?) obvious stuff'
Every day you can expect 10000 people learning a thing you thought everyone knew: https://xkcd.com/1053/
To quote the alt text: "Saying 'what kind of an idiot doesn't know about the Yellowstone supervolcano' is so much more boring than telling someone about the Yellowstone supervolcano for the first time."
I had a teacher who became angry when a question was asked about a subject he felt students should already be knowledgeable about. "YOU ARE IN xTH GRADE AND STILL DON'T KNOW THIS?!" (intentional shouting uppercase). The fact that you learned it yesterday doesn't mean all humans in the world also learned it yesterday. Ask questions, always. Explain, always.
Fun fact: both HN and (no doubt not coincidentally) paulgraham.com ship no DOCTYPE and are rendered in Quirks Mode. You can see this in devtools by evaluating `document.compatMode`.
I ran into this because I have a little userscript I inject everywhere that helps me copy text in hovered elements (not just links). It does:
[...document.querySelectorAll(":hover")].at(-1)
to grab the innermost hovered element. It works fine on standards-mode pages, but it's flaky on quirks-mode pages.
Question: is there any straightforward & clean way as a user to force a quirks-mode page to render in standards mode? I know you can do something like:
document.write("<!DOCTYPE html>" + document.documentElement.innerHTML);
but that blows away the entire document & introduces a ton of problems. Is there a cleaner trick?
I wish `dang` would take some time to go through the website and make some usability updates. HN still uses a font-size value that usually renders to 12px by default as well, making it look insanely small on most modern devices, etc.
At quick glance, it looks like they're still using the same CSS that was made public ~13 years ago:
https://github.com/wting/hackernews/blob/5a3296417d23d1ecc90...
Setting aside the relative merits of 12px vs 16px fonts, websites ought to respect the user's browser settings by using "rem", but HN ignores this.
To test, try setting your browser's font size larger or smaller and note which websites update and which do not. And besides helping to support different user preferences, it's very useful for accessibility.
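A hedged sketch of what respecting that setting looks like (the 0.875rem value is just an example, not HN's actual stylesheet):

html { font-size: 100%; }      /* inherit the user's browser setting, typically 16px */
body { font-size: 0.875rem; }  /* scale relative to that setting instead of hardcoding px */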
No kidding. I've set the zoom level so long ago that I never noticed, but if I reset it on HN the text letters use about 2mm of width in my standard HD, 21" display.
> but if I reset it on HN the text letters use about 2mm of width in my standard HD, 21" display.
1920x1080 24" screen here, .274mm pitch which is just about 100dpi. Standard text size in HN is also about 2mm across, measured by the simple method of holding a ruler up to the screen and guessing.
If you can't read this, you maybe need to get your eyes checked. It's likely you need reading glasses. The need for reading glasses kind of crept up on me because I either work on kind of Landrover-engine-scale components, or grain-of-sugar-scale components, the latter viewed down a binocular microscope on my SMD rework bench and the former big enough to see quite easily ;-)
> HN still uses a font-size value that usually renders to 12px by default as well, making it look insanely small on most modern devices, etc.
On what devices (or browsers) does it render "insanely small" for you? CSS pixels are not physical pixels; they're scaled to 1/96th of an inch on desktop computers, and for smartphones etc. the scaling takes into account the shorter typical distance between your eyes and the screen (to make the angular size roughly the same), so one CSS pixel can span multiple physical pixels on a high-PPI display. A font size specified in px should look the same on various devices. HN's font size feels the same for me on my 32" 4k display (137 PPI), my 24" display with 94 PPI, and on my smartphone (416 PPI).
On my MacBook it's not "insanely small", but I zoom to 120% for a much better experience. I can read it just fine at the default.
I trust dang a lot; but in general I am scared of websites making "usability updates."
Modern design trends are going backwards. Tons of spacing around everything, super low information density, designed for touch first (i.e. giant hit-targets), and tons of other things that were considered bad practice just ten years ago.
So HN has its quirks, but I'd take what it is over what most 20-something designers would turn it into. See old.reddit Vs. new.reddit or even their app.
Overall I would agree but I also agree with the above commenter. It’s ok for mobile but on a desktop view it’s very small when viewed at anything larger than 1080p. Zoom works but doesn’t stick. A simple change to the font size in css will make it legible for mobile, desktop, terminal, or space… font-size:2vw or something that scales.
> At quick glance, it looks like they're still using the same CSS that was made public ~13 years ago:
It has been changed since then for sure though. A couple of years ago the mobile experience was way worse than what it is today, so something has clearly changed. I think also some infamous "non-wrapping inline code" bug in the CSS was fixed, but can't remember if that was months, years or decades ago.
On another note, they're very receptive to emails, and if you have specific things you want fixed, and maybe even ideas on how to do in a good and proper way, you can email them (hn@ycombinator.com) and they'll respond relatively fast, either with a "thanks, good idea" or "probably not, here's why". That has been my experience at least.
Please don’t. HN has just the right information density with its small default font size. In most browsers it is adjustable. And you can pinch-zoom if you’re having trouble hitting the right link.
None of the “content needs white space and large fonts to breathe” stuff or having to click to see a reply like on other sites. That just complicates interactions.
And I am posting this on an iPhone SE while my sight has started to degrade from age.
Yeah, I'm really asking for tons of whitespace and everything to breathe sooooo much by asking for the default font size to be a browser default (16px) and updated to match most modern display resolutions in 2025, not 2006 when it was created.
HN is the only site I have to increase the zoom level, and others below are doing the same thing as me. But it must be us with the issues. Obviously PG knew best in 2006 for decades to come.
On the flipside, HN is the only site I don't have to zoom out of to keep it comfortable. Most sit at 90% with a rare few at 80%.
16px is just massive.
Sounds like your display scaling is a little out of whack?
Don't do this.
I agree, don't set the default font size to ~12px equiv in 2025.
You're obviously being sarcastic, but I don't think that it's a given that "those are old font-size defaults" means "those are bad font-size defaults." I like the default HN size. There's no reason that my preference should override yours, but neither is there any reason that yours should override mine, and I think "that's how the other sites are" intentionally doesn't describe the HN culture, so it need not describe the HN HTML.
Content does need white space.
HN has a good amount of white space. Much more would be too much, much less would be not enough.
12 px (13.333 px when in the adapted layout) is a little small - and that's a perfectly valid argument without trying to argue we should abandon absolute sized fonts in favor of feels.
There is no such thing as a reasonable default size if we stop calibrating to physical dimensions. If you choose to use your phone at a scaling where what is supposed to be 1" is 0.75" then that's on you, not on the website to up the font size for everyone.
I'm sure they accept PRs, although it can be tricky to evaluate the effect a CSS change will have on a broad range of devices.
Really? I find the font very nice on my Pixel XL. It doesn't take too much space unlike all other modern websites.
I find it exactly the right size on both PC and phone.
There's a trend to make fonts bigger but I never understood why. Do people really have trouble reading it?
I prefer seeing more information at the same time, when I used Discord (on PC), I even switched to IRC mode and made the font smaller so that more text would fit.
I'm assuming you have a rather low-resolution display? On a 27" 4k display, scaled to 150%, the font is quite tiny, to the point where the text in the textarea I'm currently typing this in (which uses the browser's default font size) looks about three times the size of the HN comments themselves.
Agreed. I'm on an Apple Thunderbolt Display (2560x1440) and I'm also scaled up to 150%.
I'm not asking for some major, crazy redesign. 16px is the browser default and most websites aren't using tiny, small font sizes like 12px any longer.
The only reason HN is using it is because `pg` made it that way in 2006, at a time when it was normal and made sense.
Yup, and these days we have relative units in CSS (em, rem) such that we no longer need to hardcode pixels, so everyone wins. That way people get usability according to the browser's defaults, which makes the whole thing user-configurable.
1920x1080 and 24 inches
Maybe the issue is not scaling according to DPI?
OTOH, people with 30+ inch screens probably sit a bit further away to be able to see everything without moving their head so it makes sense that even sites which take DPI into account use larger fonts because it's not really about how large something is physically on the screen but about the angular size relative to the eye.
Yeah, one of the other cousin comments mentions 36 inches away. I don't think they realize just how far outliers they are. Of course you have to make everything huge when your screen is so much further away than normal.
I have HN zoomed to 150% on my screens that are between 32 and 36 inches from my eyeballs when sitting upright at my desk.
I don't really have to do the same elsewhere, so I think the 12px font might be just a bit too small for modern 4k devices.
I'm low vision and I have to zoom to 175% on HN to read comfortably, this is basically the only site I do to this extreme.
I have mild vision issues and have to blow up the default font size quite a bit to read comfortably. Everyone has different eyes, and vision can change a lot with age.
Even better: it scales nicely with the browser’s zoom setting.
On that subject, I would be fine if the browser always rendered in standards mode, or offered a user configuration option to do so.
No need to have the default be compatible with a dead browser.
Further thoughts: I just read the MDN quirks page, and perhaps I will start shipping Content-Type: application/xhtml+xml, as I don't really like putting the doctype in. It is the one screwball tag and requires special-casing in my otherwise elegant HTML output engine.
A uBlock filter can do it: `||news.ycombinator.com/*$replace=/<html/<!DOCTYPE html><html/`
You could also use Tampermonkey to do that, and to perform the same function as the OP's userscript.
There is a better option, but generally the answer is "no"; the best solution would be for WHATWG to define document.compatMode as a writable property instead of a readonly one.
The better option is to create and hold a reference to the old nodes (as easy as `var old = document.documentElement`) and then after blowing everything away with document.write (with an empty* html element; don't serialize the whole tree), re-insert them under the new document.documentElement.
* Note that your approach doesn't preserve the attributes on the html element; you can fix this by either pro-actively removing the child nodes before the document.write call and rely on document.documentElement.outerHTML to serialize the attributes just as in the original, or you can iterate through the old element's attributes and re-set them one-by-one.
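Untested, but the approach described above might look roughly like this (it assumes the old nodes survive document.open(), which they should, since the Document object is reused):

const oldHtml = document.documentElement;
const attrs = [...oldHtml.attributes];   // remember the <html> attributes
const kids = [...oldHtml.childNodes];    // and keep references to the old head/body
document.open();
document.write("<!DOCTYPE html><html></html>");  // empty html element, not the whole tree
document.close();
const newHtml = document.documentElement;
attrs.forEach(a => newHtml.setAttribute(a.name, a.value));  // re-set the attributes one by one
newHtml.replaceChildren(...kids);        // re-insert the old nodes under the new root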
I know this was a joke:
but I feel there is a last tag missing: <main>. That will ensure screenreaders skip all your page "chrome" and make life much easier for a lot of folks. As a bonus, mark any navigation elements inside main using <nav> (or role="navigation").
I'm not a blind person but I was curious about it once when I tried to make a hyper-optimized website. It seemed like the best way to please screen readers was to have the navigation HTML come last, but style it so it visually comes first (top nav bar on phones, left nav menu on wider screens).
Props to you for taking the time to test with a screen reader, as opposed to simply reading about best practices. Not enough people do this. Each screen reader does things a bit differently, so testing real behavior is important. It's also worth noting that a lot of alternative input/output devices use the same screen reader protocols, so it's not only blind people you are helping, but anyone with a non-traditional setup.
Navigation should come early in document and tab order. Screen readers have shortcuts for quickly jumping around the page and skipping things. It's a normal part of the user experience. Some screen readers and settings de-prioritize navigation elements in favor of reading headings quickly, so if you don't hear the navigation right away, it's not necessarily a bug, and there's a shortcut to get to it. The most important thing to test is whether the screen reader says what you expect it to for dynamic and complex components, such as buttons and forms, e.g. does it communicate progress, errors, and success? It's usually pretty easy to implement, but this is where many apps mess up.
Wouldn’t that run afoul of other rules like keeping visual order and tab order the same? Screen reader users are used to skip links & other standard navigation techniques.
Just to say, that makes your site more usable in text browsers too, and easier to interact with the keyboard.
I remember HTML has a way to create global shortcuts inside a page, so you press a key combination and the cursor moves directly to a pre-defined place. If I remember right, it's recommended to add some of those pointing to the menu, the main content, and whatever other relevant areas you have.
You want a hidden "jump to content" link as the first element available to tab to.
>I know this was a joke
I'm…missing the joke – could someone explain, please? Thank you.
Not a front end engineer but I imagine this boilerplate allows the JavaScript display engine of choice to be loaded and then rendered into that DIV rather than having any content on the page itself.
It's because "modern" web developers are not writing web pages in standard html, css or js. Instead, they use javascript to render the entire thing inside a root element.
This is now "standard" but breaks any browser that doesn't (or can't) support javascript. It's also a nightmare for SEO, accessibility and many other things (like your memory, cpu and battery usage).
But hey, it's "modern"!
TFA itself has an incorrect DOCTYPE. It’s missing the whitespace between "DOCTYPE" and "html". Also, all spaces between HTML attributes were removed, although the HTML spec says: "If an attribute using the double-quoted attribute syntax is to be followed by another attribute, then there must be ASCII whitespace separating the two." (https://html.spec.whatwg.org/multipage/syntax.html#attribute...) I guess the browser gets it anyway. This was probably automatically done by an HTML minifier. Actually, the minifier could have generated fewer bytes by using the unquoted attribute value syntax (`lang=en-us id=top` rather than `lang="en-us"id="top"`).
Edit: In the `minify-html` Rust crate you can specify "enable_possibly_noncompliant", which leads to such things. They are exploiting the fact that HTML parsers have to accept this per the (parsing) spec even though it's not valid HTML according to the (authoring) spec.
Maybe a dumb question but I have always wondered, why does the (authoring?) spec not consider e.g. "doctypehtml" as valid HTML if compliant parsers have to support it anyway? Why allow this situation where non-compliant HTML is guaranteed to work anyway on a compliant parser?
It's considered a parse error [0]: it basically says that a parser may reject the document entirely if it occurs, but if it accepts the document, then it must act as if a space is present. In practice, browsers want to ignore all parse errors and accept as many documents as possible.
[0] https://html.spec.whatwg.org/multipage/parsing.html#parse-er...
Because there are multiple doctypes you can use. The same reason "varx" is not valid and must be written "var x".
I'm not a web developer, so if someone can please enlighten me: Why does this site, and so many "modern" sites like it have it so that the actual content of the site takes up only 20% of my screen?
My browser window is 2560x1487. 80% of the screen is blank. I have to zoom in 170% to read the content. With older blogs, I don't have this issue; it just works. Is it on purpose or is it bad CSS? Given the title of the post, I think this is somewhat relevant.
You'll notice newspapers use columns and do not extend the text all the way left to right either. It's a typographical consideration, for both function and style.
From a functional standpoint: Having to scan your eyes left to right a far distance to read makes it more uncomfortable. Of course, you could debate this and I'm sure there are user preferences, but this is the idea behind limiting the content width.
From a stylistic standpoint: It just looks bad if text goes all the way from the left to right because the paragraph looks "too thin" like "not enough weight" and "too much whitespace." I can't really explain this any further: I think it looks bad and a lot of people think it looks bad. Like picking color combinations, the deciding factor isn't any rule: it's just "does it look ugly?" and then increase or decrease the width.
In the case of the site in question, the content width is really small. However, if you notice, each paragraph has very few words so it may have been tightened up for style reasons. I would have made the same choice.
That said, if you have to zoom in 170% to read the content and everything else is not also tiny on your screen, it may be bad CSS.
Often times that is to create a comfortable reading width. (https://ux.stackexchange.com/questions/108801/what-is-the-be...)
Probably to not have incredibly wide paragraphs. I will say though, I set my browser to always display HN at like 150% zoom or something like that. They definitely could make the default font size larger. On mobile it looks fine though.
I have HN on 170% zoom too. This is a bad design pattern; I shouldn't have to zoom in on every site. Either increasing the font or making sure the content is always at least 50% of the page would be great for me.
I'm not sure, but when I was working with UX years ago, they designed everything for a fixed width and centered it in the screen.
Kinda like how HackerNews is: it's centered and doesn't scale to the full width of my monitor.
I understand not using the full width, but unless you zoom in, it feels like I'm viewing tiny text on a smart phone in portrait mode.
You would think browsers themselves would handle the rest, if the website simply specified "center the content div with 60% width" or something like that.
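In CSS terms that's usually just a couple of lines; a common (illustrative) version:

main {
  max-width: 65ch;   /* roughly 60-75 characters per line, a common readability target */
  margin: 0 auto;    /* center the column */
}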
Anyone else prefer to use web components without bundling?
I probably should not admit this, but I have been using Lit Elements with raw JavaScript code, because I stopped using autocomplete a while ago.
I guess not using TypeScript at this point is basically the equivalent for many people these days of saying that I use punch cards.
37signals [0] famously uses their own Stimulus [1] framework on most of their products. Their CTO, DHH, is a proponent of the whole no-build approach because of the additional complexity a build step adds, and because minified bundles make it difficult for people to pop open your code and learn from it.
[0]: https://basecamp.com/ [1]: https://stimulus.hotwired.dev/
It's impossible to look at a Stimulus-based site (or any similar SSR/hypermedia app) and learn anything useful beyond superficial web design, because all of the meaningful work is being done on the other side of the network calls. Seeing a "data-action" or an "hx-swap" in the author's original text doesn't really help anyone learn anything without the server code in hand. That basically makes the point moot: anyone positioned to learn from it (an internal team member, or someone reading an open-source project) would have access to the original source rather than the minified output anyway.
It's also more complex to do JS builds in Ruby when Ruby isn't up to the task of doing builds performantly and the only good option is calling out to other binaries. That can also be viewed from the outside as "we painted ourselves into a corner, and now we will discuss the virtues of standing in corners". Compared to Bun, this feels like a dated perspective.
DHH has had a lot of opinions. He's not wrong on many things, but he's also not universally right for all scenarios, and the world moved past him back around 2010.
Dunno. You can build without minifying if you want it to be (mostly) readable. I wouldn’t want to give up static typing again in my career.
Even with TS, if I'm doing web components rather than a full framework, I prefer not bundling. That way I can have each page load the exact components it needs. And with HTTP/2 I'm happy to have each file separate. Just hash them and set an immutable cache header, so that even when I make changes, users only have to pull the new version of the files that actually changed.
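Roughly what I mean, with made-up file and hash names:

  <!-- each page pulls only the components it actually uses; the hash in the
       file name changes whenever the file's contents change -->
  <script type="module" src="/components/user-card.3f9a1c.js"></script>
  <script type="module" src="/components/data-table.91bd07.js"></script>
  <!-- and each file is served with a header along the lines of:
       Cache-Control: public, max-age=31536000, immutable -->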
> Anyone else prefer to use web components without bundling?
Yes! Not only that, but without Shadow DOM as well.
Can't say I generally agree with dropping TS for JS, but I suppose it's easier to argue when you are working on smaller projects. But here is someone who agrees with you, with less qualification than that: https://world.hey.com/dhh/turbo-8-is-dropping-typescript-701...
I was introduced to this decision from the Lex Fridman/DHH podcast. He talked a lot about typescript making meta programming very hard. I can see how that would be the case but I don't really understand what sort of meta programming you can do with JS. The general dynamic-ness of it I get.
Luckily Lit supports typescript so you wouldn't need to drop it.
God yes, as little tool chain as I can get away with.
I often reach for the HTML5 boilerplate for things like this:
https://github.com/h5bp/html5-boilerplate/blob/main/dist/ind...
There is some irony in then-Facebook's proprietary metadata lines being in there (the "og:..." lines). Now with their name being "Meta", it looks even more proprietary than before.
Maybe the name was never about the Metaverse at all...
Are they proprietary? How? Isn't Open Graph a standard, widely implemented by many parties, including lots of open source software?
They're not, at all. It was invented by Facebook, but it's literally just a few lines of metadata that applications can choose to read if they want.
Being invented by $company does not preclude it from being a standard.
https://en.wikipedia.org/wiki/Technical_standard
> A technical standard may be developed privately or unilaterally, for example by a corporation, regulatory body, military, etc.
PDF is now an international standard (ISO 32000) but it was invented by Adobe. HTML was invented at the CERN and is now controlled by W3C (a private consortium). OpenGL was created by SGI and is maintained by the Khronos Group.
All had different "ownership" paths and yet I'd say all of them are standards.
Did you mean to type "does not" in that first sentence? Otherwise, the rest of your comment acts as evidence against it.
Yep, it was a typo. Thanks! Fixed.
how do you find this when you need it?
Note that <html> and <body> auto-close and don't need to be terminated.
Also, wrapping the <head> tags in an actual <head></head> is optional.
You also don't need the quotes as long as the attribute value doesn't have spaces or the like; <html lang=en> is OK.
(kind of pointless as the average website fetches a bazillion bytes of javascript for every page load nowadays, but sometimes slimming things down as much as possible can be fun and satisfying)
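Putting those together, something like this is a complete, conforming document (the title text is just an example):

  <!doctype html>
  <html lang=en>
  <meta charset=utf-8>
  <title>Tiny page</title>
  <p>No head or body wrappers, no closing html/body/p tags, no quotes needed.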
This kind of thing will always just feel shoddy to me. It is not much work to properly close a tag. The number of bytes saved is negligible compared to basically any other aspect of a website. Avoiding unneeded div spam would already save more. Or, for example, making sure CSS is not bloated. And of course avoiding downloading 3MB of JS.
What this achieves is making the syntax more irregular and harder to parse. I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient. It would greatly simplify browser code and HTML spec.
Implicit elements and end tags have been a part of HTML since the very beginning. They introduce zero ambiguity to the language, they’re very widely used, and any parser incapable of handling them violates the spec and would be incapable of handling piles of real‐world strict, standards‐compliant HTML.
> I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient.
They (W3C) tried that with XHTML. It was soundly rejected by webpage authors and by browser vendors. Nobody wants the Yellow Screen of Death. https://en.wikipedia.org/wiki/File:Yellow_screen_of_death.pn...
> They introduce zero ambiguity to the language
Well, for machines parsing it, yes, but for humans writing and reading it the explicit closing tags are helpful. For example, if you have an element whose end tag can be implied and change it to one whose end tag can't, suddenly you've got a syntax error (or some quirks-mode rendering with unexpectedly nested divs). The "redundancy" of closing the tags acts basically like a checksum protecting against the "background radiation" of human editing. And if you're writing raw HTML without an editor that can autocomplete the closing tags, then you're doing it wrong anyway. Yes, that used to be common, and yes, it's a useful backwards-compatibility / newbie-friendly feature of the language, but that doesn't mean you should use it if you know what you're doing.
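For instance (one illustration of the kind of edit I mean): start with an unclosed <p>, then rename it to a <div>:

  <!-- fine: </p> is implied before the section -->
  <p>Some intro text
  <section>Next part</section>

  <!-- after renaming p to div: the div is never closed, so the section
       (and everything after it) silently ends up nested inside it -->
  <div>Some intro text
  <section>Next part</section>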
It sounds like you're headed towards XHTML. The rise and fall of XHTML is well documented and you can binge the whole thing if you're so inclined.
But my summary is that the reason it doesn't work is that strict document specs are too strict for humans. And at a time when there was legitimate browser competition, the one that made a "best effort" to render invalid content was the winner.
The merits and drawbacks of XHTML has already been discussed elsewhere in the thread and I am well aware of it.
> And at a time when there was legitimate browser competition, the one that made a "best effort" to render invalid content was the winner.
Yes, my point is that there is no reason to still write "invalid" code just because it's supported for backwards compatibility reasons. It sounds like you ignored 90% of my comment, or perhaps you replied to the wrong guy?
I'm a pedantic stickler for HTML validity, but close tags on <p> and <li> are optional by spec. Close tags for <br>, <img>, and <hr> are prohibited. XML-like self-closing trailing slashes explicitly have no meaning in HTML.
Close tags for <script> are required. But if people start treating it like XML, they write <script src="…" />. And that fails, because the script element requires closure, and that slash has no meaning in HTML.
I think validity matters, but you have to measure validity according to the actual spec, not what you wish it was, or should have been. There's no substitute for actually knowing the real rules.
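For instance (hypothetical file name), the classic failure mode looks like this:

  <script src="app.js" />
  <p>This paragraph never renders: the parser ignores the stray slash, treats the
     tag as an ordinary open script tag, and swallows the rest of the page as
     script text until it finds a real closing tag.</p>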
Are you misunderstanding on purpose? I am aware they are optional. I am arguing that there is no reason to omit them from your HTML. Whitespace is (mostly) optional in C, does that mean it's a good idea to omit it from your programs? Of course a br tag needs no closing tag because there is no content inside it. How exactly is that an argument for omitting the closing p tag? The XML standard has no relevance to the current discussion because I'm not arguing for "starting to treat it like XML".
I'm beginning to think I'm misunderstanding, but it's not on purpose.
Including closing tags as a general rule might make readers think that they can rely on their presence. Also, in some cases they are prohibited. So you can't achieve a simple evenly applied rule anyway.
I didn't have a problem with XHTML back in the day; it took a while to unlearn it. I would instinctively close those tags: <br/>, etc.
It was actually the XHTML 2.0 specification [1], which discarded backwards compatibility with HTML 4, that was the straw that broke the camel's back. No more forms as we knew them, for example; we were supposed to use XForms.
That's when WHATWG was formed and broke with the W3C and created HTML5.
Thank goodness.
[1]: https://en.wikipedia.org/wiki/XHTML#XHTML_2.0
> I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient.
Who would want to use a browser which would prevent many currently valid pages from being shown?
I mean, I am obviously talking about a hypothetical scenario, a somewhat better timeline/universe. In such a scenario, the shoddy practice of not properly closing tags and leaning on lenient browser parsing and sophisticated fallbacks would never have taken hold, and those many currently valid websites would mostly not have been created that way, because as soon as someone tried, the browser would have told them no. Then those people would revise their code and end up with clean, easier-to-parse documents, and we wouldn't have all these edge and special cases in our standards.
Also obviously that's unfortunately not the case today in our real world. Doesn't mean I cannot wish things were different.
> It would greatly simplify browser code and HTML spec.
I doubt it would make a dent - e.g. in the "skipping <head>" case, you'd be replacing the error recovery mechanism of "jump to the next insertion mode" with "display an error", but a) you'd still need the code path to handle it, b) now you're in the business of producing good error messages which is notoriously difficult.
Something that would actually make the parser a lot simpler is removing document.write, which has been obsolete ever since the introduction of the DOM and whose main remaining real world use-case seems to be ad delivery. (If it's not clear why this would help, consider that document.write can write scripts that call document.write, etc.)
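To make that concrete, a tiny sketch of why the parser has to re-enter itself:

  <script>
    // this write emits another script element, which itself calls document.write,
    // so the HTML parser must pause, run the script, parse what it wrote, run that, ...
    document.write('<script>document.write("written from a written script")<\/script>');
  </script>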
You're not alone, this is called XHTML and it was tried but not enough people wanted to use it
oh man, I wish XHTML had won the war. But so many people (and CMSes) were creating dodgy markup that simply rendered yellow screens of doom, that no-one wanted it :(
i'm glad it never caught on. the case sensitivity (especially for css), having to remember the xmlns namespace URI in the root element, CDATA sections for inline scripts, and insane ideas from companies about extending it further with more xml namespaced elements... it was madness.
It had too much unnecessary metadata yes, but case insensitivity is always the wrong way to do stuff in programming (e.g. case insensitive file system paths). The only reason you'd want it is for real-world stuff like person names and addresses etc. There's no reason you'd mix the case of your CSS classes anyway, and if you want that, why not also automatically match camelCase with snake_case with kebab-case?
I'll copy what I wrote a few days ago:
The fact XHTML didn't gain traction is a mistake we've been paying off for decades.
Browser engines could've been simpler; web development tools could've been more robust and powerful much earlier; we would be able to rely on XSLT and invent other ways of processing and consuming web content; we would have proper XHTML modules, instead of the half-baked Web Components we have today. Etc.
Instead, we got standards built on poorly specified conventions, and we still have to rely on 3rd-party frameworks to build anything beyond a toy web site.
Stricter web documents wouldn't have fixed all our problems, but they would have certainly made a big impact for the better.
And add:
Yes, there were some initial usability quirks, but those could've been ironed out over time. Trading the potential of a strict markup standard for what we have today was a colossal mistake.
There's no way it could have gained traction. Consider two browsers. One follows the spec explicitly, and one goes into "best-effort" mode on encountering invalid markup. End users aren't going to care about the philosophical reasoning for why Browser A doesn't show them their school dance recital schedule.
Consider JSON and CSV. Both have formal specs. But in the wild, most parsers are more lenient than the spec.
Yeah, this is it. We can debate what would be nicer in theory until the cows come home, but there's a kind of real-world game theory that leads to browsers doing their best to parse all kinds of slop as well as they can, which subsequently removes the incentive for developers and tooling to produce byte-perfect output.
Yeah, I remember, when I was at school and first learning HTML and this kind of stuff. When I stumbled upon XHTML, I right away adapted my approach to verify my page as valid XHTML. Guess I was always on this side of things. Maybe machine empathy? Or also human empathy, because someone needs to write those parsers and the logic to process this stuff.
I agree for sure, but that's a problem with the spec, not the website. If there are multiple ways of doing something, you might as well do the minimal one. The parser will always have to be able to handle all the edge cases no matter what anyway.
You might want to always consistently terminate all tags and such for aesthetic or human-centered (reduced cognitive load, easier scanning) reasons, though; I'd accept that.
<html>, <head> and <body> start and end tags are all optional. In practice, you shouldn’t omit the <html> start tag because of the lang attribute, but the others never need any attributes. (If you’re putting attributes or classes on the body element, consider whether the html element is more appropriate.) It’s a long time since I wrote <head>, </head>, <body>, </body> or </html>.
Not only do html and body auto-close, their tags, including start tags, can be omitted altogether:
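Something like this, for instance, is still a complete document; the parser infers all three elements:

  <!doctype html>
  <title>Still valid</title>
  <p>The html, head and body elements are all inferred by the parser.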
(cf. explainer slides at [1] for the exact tag inferences SGML/HTML does to arrive at the fully tagged doc)
[1]: https://sgmljs.sgml.net/docs/html5-dtd-slides-wrapper.html (linked from https://sgmljs.sgml.net/blog/blog1701.html)
I'm not sure I'd call keeping the <body> tag open satisfying but it is a fun fact.
Didn't know you can omit <head> .. </head>, but I prefer to keep them for clarity.
Do you also spell out the implicit <tbody> in all your tables for clarity?
I do.
`<thead>` and `<tfoot>`, too, if they're needed. I try to use all the free stuff that HTML gives you without needing to reach for JS. It's a surprising amount. Coupled with CSS and you can get pretty far without needing anything. Even just having `<template>` with minimal JS enables a ton of 'interactivity'.
Yes. Explicit is almost always better than implicit, in my experience.
> `<html lang="en">`
The author might consider instead:
`<html lang="en-US">`
It's time for an "en-INTL" (or similar) for international english, that is mostly "en-US", but implies a US-International keyboard and removes americanisms, like Logical Punctuation in quotes [1]. Then AI can start writing for a wider and much larger public (and can also default to regular ISO units instead of imperial baby food).
Additionally, it's kind of crazy we are not able to write any language with any keyboard, as nowadays we just don't know the idiom the person who sits behind the keyboard needs.
[1] https://slate.com/human-interest/2011/05/logical-punctuation...
Interesting.
From what I can tell this allows some screen readers to select specific accents. Also the browser can select the appropriate spell checker (US English vs British English).
For clarity and conformity, while optional these days, I insist on placing meta information within <head>.
I appreciate this post! I was hoping you would add an inline CSS style sheet to take care of the broken defaults. I only remember one off the top of my head, the rule for monospace font size. You need something like:
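(Going from memory, roughly what normalize.css does, wrapped as an inline style sheet:)

  <style>
    code, kbd, samp, pre {
      /* repeating the generic keyword opts out of browsers' shrunken default
         monospace size; 1em then keeps it matching the surrounding text */
      font-family: monospace, monospace;
      font-size: 1em;
    }
  </style>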
But I vaguely remember there are other broken CSS defaults for links, img tags, and other stuff. An HTML 5 boilerplate guide should include that too, but I don't know of any that do.
> <html lange="en">
s/lange/lang/
> <meta name="viewport" content="width=device-width,initial-scale=1.0">
Don’t need the “.0”. In fact, the atrocious incomplete spec of this stuff <https://www.w3.org/TR/css-viewport-1/> specifies using strtod to parse the number, which is locale dependent, so in theory on a locale that uses a different decimal separator (e.g. French), the “.0” will be ignored.
I have yet to test whether <meta name="viewport" content="width=device-width,initial-scale=1.5"> misbehaves (parsing as 1 instead of 1½) with LC_NUMERIC=fr_FR.UTF-8 on any user agents.
Wow. This reminds me of Google Sheets formulas, where function parameters are separated with , or ; depending on locale.
Not to mention the functions are also translated to the other language. I think both these are the fault of Excel to be honest. I had this problem long before Google came around.
And it's really irritating when you have the computer read something out to you that contains numbers. 53.1 km reads like you expect but 53,1 km becomes "fifty-three (long pause) one kilometer".
> Not to mention the functions are also translated to the other language.
This makes a lot of sense when you recognize that Excel formulas, unlike proper programming languages, aren't necessarily written by people with a sufficient grasp of the English language, especially when it comes to more abstract mathematical concepts, which aren't taught in secondary-school English classes but in their native-language mathematics classes.
Not sure if this still is the case, but Excel used to fail to open CSV files correctly if the locale used another list separator than ',' – for example ';'.
I’m happy to report it still fails and causes me great pain.
Really? LibreOffice at least has a File > Open menu that allows you to specify the separator and other CSV settings, like the quote character.
You have to be inside Excel and use the data import tools. You cannot double-click to open; it puts everything in one cell…
Sometimes you double click and it opens everything just fine and silently corrupts and changes and drops data without warning or notification and gives you no way to prevent it.
The day I found that Intellij has a built in CSV tabular editor and viewer was the best day.
Excel has that too. But you can't just double-click a CSV file to open it.
Given that the world is about evenly split on the decimal separator [0] (and correspondingly on the thousands grouping separator), it's hard to avoid. You could standardize on ";" as the argument separator, but "1,000" would still remain ambiguous.
[0] https://en.wikipedia.org/wiki/Decimal_separator#Conventions_...
Try Apple Numbers, where even function names are translated and you can’t copy & paste without an error if your locale is, say, German.
Aha, in Microsoft Excel they translate even the shortcuts. In the Brazilian version, Ctrl-S is "Underline" instead of "Save". Every sheet of mine ends up with a lot of underlined cells :-)
The behaviour predates Google Sheets and likely comes from Excel (whose behavior Sheets emulate/reverse engineer in many places). And I wouldn't be surprised if Excel got it from Lotus.
Same as Excel and LibreOffice surely?
Yes
Oh, good to know that it depends on locale, I always wondered about that behavior!
Quirks aside, there are other ways to tame old markup.
If a site won't update itself, you can use a user stylesheet or an extension to fix things like font sizes and colors without waiting for the maintainer.
But for scripts that rely on CSS behaviors there is a simple check: test document.compatMode and bail when it's not what you expect. Sometimes adding a wrapper element and extracting the contents with a Range keeps the page intact.
Also, adding semantic elements and ARIA roles goes a long way for accessibility; it costs little and helps screen readers navigate.
Would love to see more community hacks that improve usability without rewriting the whole thing.
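A minimal sketch of that guard, assuming a userscript context:

  // document.compatMode is 'CSS1Compat' in standards mode, 'BackCompat' in quirks mode
  if (document.compatMode !== 'CSS1Compat') {
    console.warn('quirks-mode page; skipping the CSS-dependent bits');
  } else {
    // ...safe to rely on standards-mode layout behavior here
  }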
I wish I could use this one day again to make my HTML work as expected.
<bgsound src="test.mid" loop=3>
I hate how because of iPhone and subsequent mobile phones we have bad defaults for webpages so we're stuck with that viewport meta forever.
If only we had UTF-8 as a default encoding in HTML5 specs too.
I came here to say the same regarding UTF-8. What a huge miss and long overdue.
I’ve had my default encoding set to UTF-8 for probably 20 years at this point, so I often miss some encoding bugs, but then hit others.
> <!doctype html> is what you want for consistent rendering. Or <!DOCTYPE HTML> if you prefer writing markup like it’s 1998. Or even <!doCTypE HTml> if you eschew all societal norms. It’s case-insensitive so they’ll all work.
And <!DOCTYPE html> if you want polyglot (X)HTML.
I tend to lower-case all my HTML because it has less entropy and therefore can be compressed more effectively.
But some modern compression algorithms come with a pre-defined dictionary for websites, which usually contains common strings like <!DOCTYPE html> in their most-used form. So writing it like everybody else might make the compression even more effective.
We need HTML Sophisticated - <!Dr. Type, HtML, PhD>
I still don’t understand what people think they’re accomplishing with the lang attribute. It’s trivial to determine the language, and in the cases where it isn’t, it’s not trivial for the reader, either.
Doesn't it state this in the article?
> Browsers, search engines, assistive technologies, etc. can leverage it to:
> - Get pronunciation and voice right for screen readers
> - Improve indexing and translation accuracy
> - Apply locale-specific tools (e.g. spell-checking)
It states the cargo culted reasons, but not the actual truth.
1) Pronunciation is either solved by a) automatic language detection, or b) doesn't matter. If I am reading a book and I see text in a language I recognize, I will pronounce it correctly, just like the screen reader will. If I see text in a language I don't recognize, I won't pronounce it correctly, and neither will the screen reader. There's no benefit to my screen reader pronouncing Hungarian correctly to me, a person who doesn't speak Hungarian. On the off chance that the screen reader gets it wrong even though I do speak Hungarian, I can certainly tell that I'm hearing English-pronounced Hungarian. But there's no reason the screen reader will get it wrong, because "Mit csináljunk, hogy boldogok legyünk?" isn't ambiguous. It's simply Hungarian, and if I have a Hungarian screen reader installed, it's trivial to figure that out.
2) Again, if you can translate it, you already know what language it is in. If you don't know what language it is in, then you can't read it from a book, either.
3) See above. Locale is mildly useful, but the example linked in the article was strictly language, and spell checking will either a) fail, in the case of en-US/en-UK, or b) be obvious, in the case of 1) above.
The lang attribute adds nothing to the process.
Your whole comment assumes language identification is both trivial and fail-safe. It is neither, and it gets worse when you consider, e.g., pages with different elements in different languages, or languages that are closely similar to one another.
Even if language identification was very simple, you're still putting the burden on the user's tools to identify something the writer already knew.
outerHTML is an attribute of Element and DocumentFragment is not an Element.
Where do the standards say it ought to work?
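A quick console check shows the split (nothing exotic here, just the standard DOM properties):

  const frag = document.createDocumentFragment();
  frag.append(document.createElement('p'));
  console.log(frag.outerHTML);                    // undefined: DocumentFragment isn't an Element
  console.log(frag.firstElementChild.outerHTML);  // "<p></p>": outerHTML is defined on Element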
Nice, the basics again, very good to see. But then:
I know what you’re thinking, I forgot the most important snippet of them all for writing HTML:
<div id="root"></div> <script src="bundle.js"></script>
Lol.
-> Ok, thanx, now I do feel like I'm missing an inside joke.
It's a typical pattern in, say, React, to have just this scaffolding in the HTML and let some front-end framework build the UI.
The "without meta utf-8" part of course depends on your browser's default encoding.
What mainstream browsers aren't defaulting to utf-8 in 2025?
I spent about half an hour trying to figure out why some JSON in my browser was rendering è incorrectly, despite the output code and downloaded files being seemingly perfect. I came to the conclusion that the browsers (Safari and Chrome) don't use UTF-8 as the default encoding for everything, and moved on.
This should be fixed, though.
I wouldn’t be surprised if they don’t for pages loaded from local file URIs.
html5 does not even allow any other values in <meta charset=>. I think you need to use a different doctype to get what the screenshot shows.
While true, they also require user agents to support other encodings specified that way: https://html.spec.whatwg.org/multipage/parsing.html#characte...
Another funny thing here is that they say “but not limited to” (the listed encodings), but then say “must not support other encodings” (than the listed ones).
It says
> the encodings defined in Encoding, including, but not limited to
where "Encoding" refers to https://encoding.spec.whatwg.org (probably that should be a link.) So it just means "the other spec defines at least these, but maybe others too." (e.g. EUC-JP is included in Encoding but not listed in HTML.)
Ah, I understood it to refer to encoding from the preceding section.
All of them, pretty much.
Similar vibes to https://j4e.name/articles/a-minimal-valid-html5-document/
It's 2025, the end of it. Is this really necessary to share?
Yes. Knowledge is not equally distributed.
When sharing this post on his social media accounts, Jim prefixed the link with: 'Sometimes its cathartic to just blog about really basic, (probably?) obvious stuff'
Feels even more important to share honestly. It's unexamined boilerplate at this point.
Every day you can expect 10000 people learning a thing you thought everyone knew: https://xkcd.com/1053/
To quote the alt text: "Saying 'what kind of an idiot doesn't know about the Yellowstone supervolcano' is so much more boring than telling someone about the Yellowstone supervolcano for the first time."
Thanks! I didn't know that one.
I had a teacher who became angry when a question was asked about a subject he felt students should already be knowledgeable about. "YOU ARE IN xTH GRADE AND STILL DON'T KNOW THIS?!" (intentional shouting uppercase). The fact that you learned it yesterday doesn't mean all humans in the world also learned it yesterday. Ask questions, always. Explain, always.
And here I was, thinking everybody already knew XKCD 1053 ...
XKCD 1053 is a way of life. I think about it all the time, and it has made me a better human being.