I know I’m long-winded. Sorry. Skimming it should work pretty well.
In some cases, worse is better, particularly for humans; our languages (I mean regular human ones, like English) are incredibly redundant, arbitrary, and idiomatic, yet this is precisely why they work so well. Computers, on the other hand, do not adapt well to these kinds of ideas.
This is why I feel the semantic web push is a starry-eyed endeavor doomed not to extinction, but to failure. It can’t reach its goals _all the way_. The extreme of trying to reach these goals would be individual tags for verbs, nouns, etc. A not-so-far-fetched vision is a sentence tag, rather than using periods for this (note also that this only scales to some languages).
The problem is that by this point we would have become so entrenched… so nit-picky… in the computer’s vision of our ideas that we would lose sight of them, and we still wouldn’t have a true, full representation of our ideas. Confusion and incorrect transmission of information from human to human is a natural part of life and innovation, like parasitic species in an ecosystem. We may not like them, and might try to exterminate them, only to find they are necessary, even beneficial.
Some level of semantics is beneficial, but going beyond that (and it’s not a point, it’s a fog) will harm the ability of our already developed and tuned writing systems to get the point across. Too high a level of tag soup is also detrimental, especially when originally semantic tags are converted into presentational markup.
We also need error recovery; a typo in a true XHTML document (with the correct MIME type and doctype) will throw a truly conforming user agent into a parsing error page. Major publications and books publish typos, or forget a period. Yet they are not the only ones we have to worry about; they have teams of proofreaders and plenty of money to get by with. There are also the smaller players that the web has been good to; you shouldn’t need to understand the technicalities to make a simple web page. Currently, if you don’t understand the technicalities, you write tag soup, because you can’t possibly write a web document the _correct_ way. We need to lose the distinct strict and loose modes. We need a looser strict mode, and a stricter loose mode.
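To make that strict-versus-loose contrast concrete, here is a minimal sketch in Python (the markup and parser choices are my own illustration, not anything from the post) of how a draconian XML parser and a forgiving tag-soup parser treat the same single typo:

```python
# Illustration only: an XML/XHTML parser must reject a document with a single
# well-formedness error, while a tag-soup HTML parser recovers and keeps going.
# Conforming XHTML user agents apply the same strict rule when a page is
# served as application/xhtml+xml.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

# One "typo": the <p> element is never closed.
broken = "<html><body><p>Hello world</body></html>"

try:
    ET.fromstring(broken)          # XML parsing: hard failure, nothing renders
except ET.ParseError as err:
    print("XML parser gives up:", err)

class Collector(HTMLParser):
    def handle_data(self, data):
        print("HTML parser recovers, text:", data)

Collector().feed(broken)           # HTML parsing: error recovery, text survives
```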
One possible way to address this is using authoring tools to automatically produce the required technicalities. However, I feel this is coming in from the wrong angle. This is the angle in which, as I mentioned in the beginning paragraphs, computers do not do well with worse is better; they do not do well with redundancy, arbitrariness, and idioms. Instead, we should come in from the angle of humans: the formats themselves (somewhat dead compared to programming languages) need to reflect our arbitrariness, while the programming of user agents reflects it back to us as faithfully as possible.
Note that this spiel would have ended with the word ‘possible’ were it not for this note pointing that out. This is the whole idea: to do what’s possible.
I’ve translated your blog article with questions about the future of HTML ( http://www.webstandards.org/2006/11/07/have-your-say-about-the-future-of-html/ ) into Russian and posted it on our blog:
It would be great if you could link to it, so the Russian-speaking community will know about this translation.
http://blog.mister-wong.ru/%d0%be%d1%82-%d0%b2%d0%b0%d1%81-%d0%b7%d0%b0%d0%b2%d0%b8%d1%81%d0%b8%d1%82-%d0%b1%d1%83%d0%b4%d1%83%d1%8e%d1%89%d0%b5%d0%b5-html/2007/03/13/
Thank you very much,
Sergey
I am an advocate of universal web standards. Man, it would be nice to be able to code and get the exact same results in every single browser.
Food for thought, nothing more:
What if “most” of the internet were written very similarly? Every page would pretty much have the same strict tags; otherwise you get an error. Conformity. What could possibly go wrong with this?
For one, it’s a hacker’s dream. Malicious code could be written in robots, and programs could scour the internet with less coding. Pages could be parsed with ease. Browser hijackers have no problem reading standards-compliant documents, which means your transaction is not secure. Everything from “fusebox” applications to the way things are “supposed to be structured” is highly predictable, which is grounds for robotic coding disasters with no hang-ups or trips. Internet data is becoming more transparent. Simple DOM scripts can already parse compliant web pages with ease.
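As a side note on that last claim, here is a small sketch (the page markup below is made up purely for illustration) of how little code it takes to walk a well-formed page and enumerate its form fields and links with nothing but the standard library:

```python
# Illustration only: once a page is well-formed, a few lines of code are
# enough to walk it like a data structure and pull out whatever it contains.
import xml.etree.ElementTree as ET

# Hypothetical, well-formed XHTML fragment.
page = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <form action="/checkout">
      <input name="card-number" />
      <input name="expiry" />
    </form>
    <a href="/account">My account</a>
  </body>
</html>"""

ns = {"x": "http://www.w3.org/1999/xhtml"}
tree = ET.fromstring(page)

# Every form field and link is trivially enumerable.
for field in tree.findall(".//x:input", ns):
    print("form field:", field.get("name"))
for link in tree.findall(".//x:a", ns):
    print("link:", link.get("href"), "->", link.text)
```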
The question is:
Are “web standards” and fusebox coding making the internet less secure?
Again, I like the web standards idea. I’d just like to hear a pro web developer’s response.
But as I understand it, HTML5 and XHTML5 are being developed in parallel. So why not let the user decide what he wants to use?
It was hard work for many not-so-technical folks to learn HTML; don’t force them to go to school again.
Regards,
Markus
The text speaks to me straight from the soul. But webmasters will only follow if they see a benefit, and at the moment the masses see no benefit in following the W3C. Most sites are still built, partially with tables, with very bad CSS and not very pretty HTML/XHTML. It would therefore have to be impressed on webmasters to build in a more standards-conformant way. Personally, I am missing an HTML element that marks a translation, so that one can recognize that it is a translation produced by software and was not written by the author. That way, people who are not fluent in the other language also have a way to contribute something.