DragonFly kernel List (threaded) for 2004-03
Re: HEADS UP: Website Overhaul
David Cuthbert wrote:
> Gary Thorpe wrote:
>> Change the font settings for your browsers then. What is certain is
>> that you cannot predict (or even determine reliably and adjust) the
>> client's screen-size/resolution. There is no arguing this.
> So that I have to scroll up/down all the time for each web page and see
> 20 lines per screen? Ugh.
> I'll hold on to those misused tables, thanks, until they fix HTML/web
> browsers so that there's a "format this in a readable width" style
How does a table help this? If the table is as wide as the screen (100%
width), won't you still have this problem?
>>> And preprocessing imposes no server load. I preprocess the page once,
>>> store the html file on the server. Keep the original content and the
>>> script in CVS. Voila!
>> Of course there is server load: storage for scripts and original
>> content, processing to filter this original content each time it
>> changes etc. The server load is not imposed _each_ time the page is
>> accessed, but I never said it did, did I?
> Now you're just being silly.
> The load on the server is under a few CPU second for each change. The
> storage for the script is not much different than the storage for the
> CSS page.
Do you agree that preprocessing is _additional_ server load or not?
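For concreteness, the one-time preprocessing model being argued about can be sketched as below. This is a hypothetical build script, not anything from the thread: the function names and the wrap-in-pre choice are invented, and the point is only that the cost is paid once per content change, not per request.

```python
import html
from pathlib import Path

def preprocess(src: str) -> str:
    """Convert a plain-text article into a minimal HTML page.

    Runs once per content change (e.g. from a commit hook), so the
    per-request server load is just static file serving.
    """
    body = html.escape(src)  # &, <, > must be escaped even inside <pre>
    return ("<html><head><title>Article</title></head>\n"
            "<body><pre>\n" + body + "\n</pre></body></html>\n")

def build(src_path: str, out_path: str) -> None:
    """Read the CVS-tracked source, write the served HTML file."""
    text = Path(src_path).read_text()
    Path(out_path).write_text(preprocess(text))
```

The recurring "load" is then only the disk space for the source and the generated page, plus a few CPU seconds at each change.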
> If I backported this to a C-64, then, yeah, it might become an issue.
> I'd also remove all the comments because that makes it run faster. :-)
>> Whoopie! Guess which community's articles are rated as being more
>> valuable in terms of citations?
> Uh... I've never heard of anyone complaining about citations in any
> field. You just look at the last page of the paper and the references
> are there. What else do I need?
> I guess that makes it harder to see "what papers have cited this paper,"
> but I've never had a real need for that. I'll know if a paper is widely
> cited by looking at the authors. <shrug>
No: I mean, which community's papers turn out to be more useful as
citations (one indication of their quality)?
>> Regardless of the answer to the question, it's not relevant. How can
>> you tell if LaTeX caught on in IEEE or not though?
> Maybe because I've coauthored or edited a number of IEEE articles in
> many different settings? (IBM, Seagate, CMU...)
Then you can only say that about the papers you have coauthored/edited.
Unless you are coauthoring/editing the majority of the hundreds, perhaps
thousands, of papers per year, I would say you cannot know that.
>> And more to the point: which community has the better quality of
>> articles? What do you think is the main factor in this?
> Wow, I've apparently struck a nerve.
No, it's a legitimate question: if LaTeX is in fact inferior, then which
group produces better papers, assuming one group favours LaTeX?
> I find IEEE articles tremendously more useful because, being an EE,
> I think of a lot of CS as fluff. It's all about the content.
The question is not necessarily about the content (as in a web page) but
about the overall quality of production. Content would be an important
factor in that.
> But apparently you've never used INSPEC. It has all of the keywords,
> abstracts, authors, titles, affiliations, and a host of other stuff
> for IEEE, ACM, APS, SPIE, and random other acronymed organizations'
Can it give me a ranking of the papers' quality and how the
professionals who read them grade them?
>> Yes, when I can plunk down several thousand dollars for Framemaker, I
>> am sure I will use it too! So much for free software.
> Actually, it's only ~$700 for a single-user license; more expensive if
> you want to use network-based licensing, though. If you're serious
> about writing articles that look good in print and don't want to fight
> LaTeX, it's a wonderful investment. The latest versions even generate
> SGML and XML (which was previously an expensive add-on).
That's wonderful. It's also WYSIWYG. I suppose you could say Windows is
for people who don't want to "fight" to configure a Linux or a BSD, but
is that a real indicator of which is better?
> Similarly, if you're serious about publishing a magazine, InDesign or
> QuarkXPress are hard to beat.
So you are a magazine publisher now too? Busy life.
> Not that I don't like free software; heck, that's why I'm here. But the
> best software is the software that gets the job done. If someone wants
> to write a free version of Frame, I'd be there in a heartbeat, eagerly
> testing alphas and submitting bugfixes.
>>> WYSIWYG only "sucks" if you misuse it, i.e., apply explicit formatting
>>> instead of styles.
>> Isn't that what WYSIWYG is???
> No. WYSIWYG refers only to the way interactive editing is handled.
> LyX is also a WYSIWYG editor, but it definitely doesn't force you into
> applying explicit formatting.
I didn't say WYSIWYG forces you to do explicit formatting.
Realistically, though, WYSIWYG editors allow, and in practice encourage,
explicit formatting: that is the only way I have ever observed myself
and others using them.
> Word has handled styles since at least the Office 4.x days. The later
> versions even attempt to nudge you into using styles if you try to do
> a lot of explicit formatting. If all you have is prose, Word actually
> isn't too bad.
Can you customize the styles sufficiently without using explicit
formatting? Can you "logically" structure the document at all, even with
styles?
> The main issue with Word is that it doesn't handle figures or text boxes
> well. There's no easy way, for example, to specify that "manuscript
> revision information and author affiliation information should be set in
> a text box whose baseline is on the bottom of the first column on the
> first page of the article." I can create a text box for this, but if I
> append too much text in there, it bleeds into the bottom margin.
> Figures/frames/text boxes are anchored to a specific paragraph, which
> is fine. While I'm editing the document, though, the size of the
> paragraphs will change and sometimes they'll jump from one page boundary
> to another. When this happens, the associated figure (usually, though
> sometimes nondeterministically) also jumps to that page. Sounds like a
> good idea in theory, but what ends up happening is that two figures will
> often collide. Depending on the text flow settings for the box, the
> figures will either overlay each other, or one might get pushed off into
> a margin or, worse, off the page (and lost forever, or until the text
> reflows again and it decides to appear on the title page).
>>> I'll let emacs' M-x auto-fill-mode fix the word wrap, and with the pre
>>> tags, I'll be done in under 2 minutes.
>> Emacs itself is bloated: why does a text editor need 10+ MB of memory
>> when running? I suppose it's the golden rule of computing: why use
>> less when you can use more? Really, this is what the trend in web
>> design is about I think.
> Now you're trash talking my religion, buddy... ;-)
> I've tried to wean myself from XEmacs, but I've found myself addicted
> to its c-mode and ease of extensibility (if you know Lisp, which I do).
> If a brace doesn't line up as I would expect with c-mode's auto indent,
> I know that I've made a fence-mismatch error (which is notoriously
> difficult for a compiler to detect).
> Plus I have M-f6 set up to automatically insert the boilerplate --
> complete with filenames and dates filled in -- into my source files.
> C-x v l shows me the log of the edits from version control, so I can
> find the correct coworker to blame instantly for why this file isn't
> compiling. C-x v = tells me how I'm about to commit something that
> will make someone else blame me. All without firing up another tool.
> Yeah, I've tried to go to others -- joe, jed, jove, eclipse, gedit,
> kdevelop, vi, vim, cat > file.c -- but I keep finding myself returning
> to XEmacs.
The point is: Emacs uses way too many resources to do what is
essentially a simple task, no matter how popular it is. This is a fact.
>>>> Oh, and trying to control how it renders is pointless as you have
>>>> realized, so why bother trying to in the first place?
>>> I thought gopher lost out to HTML+HTTP?
>> Yes, it did. Why is that relevant? You cannot control the client's
>> rendering of HTML. All that a proper client can say is that it will
>> recognize the content and do _something_ with it before presenting it
>> to the user.
> My point is that gopher never caught on for a reason; Mosaic, and all
> its pretty formatting, caught everyone's attention.
I guess you should pack up and stop developing free OSes then, because
Microsoft/Mac OS are what people really like using.
> Of course I can't control the client's rendering of HTML, but why should
> I pander to the lowest common denominator? "telnet www.yahoo.com
> <RET>GET /<RET><RET>" is arguably a valid client, but I'm not going to
> try to render things for it.
Since telnet cannot render any HTML at all, your example is invalid.
Telnet can get the source code, but it isn't a web browser. The lowest
common denominator would be a _real_ text-based browser like Lynx,
Links, or w3m.
> Launching nethack is a standards-compliant way of handling "#pragma" in
> C. It's also not terribly useful.
>>> What on that page could possibly be improved for text-to-speech? Or a
>>> braille reader?
>> It has no logical structure! The program cannot infer anything from
>> the page on its meaning or purpose. Would you write an essay or even
>> an advertisement without any structure?
> Of course it has logical structure -- for a human.
That's the point. As more and more information clogs the internet and
has to compete with porn and spam for attention, automated agents will
be more and more useful in what will essentially become data mining.
Your page cannot be processed by a normal program, and AI isn't here yet
(and probably never will be, since building it would require actually
understanding what intelligence fundamentally is).
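To make the automated-agents point concrete: with ordinary structural markup, a trivial program can pull out headings and link targets, while a page that is one big <pre> blob offers it nothing to grab. A minimal sketch using only the standard library; the sample markup is invented:

```python
from html.parser import HTMLParser

class OutlineExtractor(HTMLParser):
    """Collect headings and link targets -- the kind of structure a
    crawler or text-to-speech agent can act on."""
    def __init__(self):
        super().__init__()
        self.headings, self.links = [], []
        self._in_heading = False

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._in_heading = True
        elif tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self._in_heading = False

    def handle_data(self, data):
        if self._in_heading:
            self.headings.append(data.strip())

structured = '<h1>Downloads</h1><p>Get the <a href="/src.tar.gz">source</a>.</p>'
flat = "<pre>Downloads\n  Get the source at /src.tar.gz\n</pre>"

p = OutlineExtractor(); p.feed(structured)  # finds heading + link
q = OutlineExtractor(); q.feed(flat)        # finds nothing usable
```

The same text is present in both pages; only the structured one gives a program anything to work with.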
> I haven't searched the W3C spec, but I doubt there's a tag for "this
> link goes to a page which documents source code and is probably not of
> interest for non-developers." Or one that says "this little bit of
> legalese at the bottom of the page is rendered very small for a reason,
> and a text-to-speech program should just say 'legal info' rather than
> speaking it verbatim."
Interestingly, Google's crawlers manage to find links which are valuable
to certain audiences. And guess what: <pre> probably doesn't help much
with this. If copyright notices are part of W3C's markup, well-written
text-to-speech (TTS) programs will be able to present the information
appropriately (assuming they are written appropriately, since I have
never been forced to use one).
> XML is nice, but the schemas only tell a program how to parse it.
> Here's a document with an unfamiliar structure and its DTD. Great.
> So I can parse it. If I'm an SQL and XSLT hacker, I might even be
> able to format it into a database and store it somewhere for later
> queries. But if I'm mortal... uh, all I see is XML markup.
I am not a proponent of XML and I agree with these points. Still doesn't
make <pre> the end-all of web design.
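The XML point can be shown in a few lines: parsing an unfamiliar document succeeds, but the element names carry no meaning a program can act on. The sample document below is invented:

```python
import xml.etree.ElementTree as ET

doc = '<zorp><flib id="1">hello</flib><flib id="2">world</flib></zorp>'

root = ET.fromstring(doc)  # parses fine: the structure is explicit
texts = [e.text for e in root.iter("flib")]
# The parser can enumerate every <flib> element, but nothing tells it
# whether a "flib" is a paragraph, a price, or a person's name.
```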
> Style sheets? Well, they let me see it in Mozilla. Maybe IE, on a
> lucky day. SOL if you're on Lynx or are sight-impaired. Just like
> with HTML.
I am not a proponent of style sheets either: they just move the munging
to another file (in the best case), and they still cannot change the
reality that not all browsers, or even the same browser on a different
machine, will render the same page in exactly the same way. That is the
only reason why WYSIWYG, while tolerable for IEEE articles (or perhaps
even the best solution for articles in general), sucks for web design.
Can you honestly disagree?