Crossfire Mailing List Archive

Re: ASCII vs binary - holy war...



Raphael Quinet writes:
>   It won't be easier to parse ASCII data.  You will have to keep a
>   table with all commands and do several strcmp's to find the one
>   that matches. Then you will have to convert all numbers from ASCII
>   to binary.

CPU time will not be a limiting factor for any client, even the
lowliest PC or Mac.  A machine which can run Ultima 8, a game far
more complicated than crossfire, can certainly parse a few
fixed-format ASCII lines.  I'll bet you $1 that any client which does
a graphical display will spend less than 5% of its CPU time parsing
or unparsing protocol lines.  Do you take that bet?
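
To put that in perspective, here is roughly what the client side of
such a line looks like in C.  The command name and field layout below
are made up for illustration; the point is how little work there is:

    #include <stdio.h>

    /* Sketch: parse a hypothetical ASCII map-update line such as
     * "map 12 7 goblin".  Command name and field layout are
     * invented; a couple of sscanf calls is all the "parsing"
     * the client ever has to do. */
    int main(void)
    {
        const char *line = "map 12 7 goblin";
        int x, y;
        char name[32];

        if (sscanf(line, "map %d %d %31s", &x, &y, name) == 3)
            printf("update (%d,%d) -> %s\n", x, y, name);
        return 0;
    }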

>   If you send binary data, you can have fixed-length blocks in your
>   packets. Each block begins with the command number on 1 or 2 bytes
>   (very easy to parse, you only need to use a switch statement) and
>   the following data is already in binary form.

No, you don't have fixed-length blocks.  Lots of commands will
require variable length even if you choose binary.
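
Consider something as basic as a map update: even in binary it has to
carry a count followed by that many coordinate/object triples, so its
length varies with the number of objects.  A sketch (the field names
are invented for illustration):

    /* Sketch of a hypothetical binary map-update packet.  Its size
     * is 4 + 6*nobjects bytes -- not fixed length, no matter how
     * it is encoded. */
    struct map_update {
        unsigned short command;    /* command number             */
        unsigned short nobjects;   /* how many triples follow    */
        struct {
            unsigned short x, y;   /* coordinates                */
            unsigned short obj;    /* object/picture reference   */
        } objs[1];                 /* really nobjects entries    */
    };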

>   No conversion is necessary, unless the byte order is different (and
>   it's easy to handle that case).

Or if the padding is different, or if the data sizes are different.
Can it be done by a staff of experienced network programmers who are
willing to devote significant amounts of time to debugging (not an
enjoyable experience in any environment) for free, just to find an
elusive client bug?  Sure.  Do we have such a staff, i.e. people who
have actually designed and implemented network protocols before _and_
are willing to debug DOS clients?  I don't think so.
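
To make the trap concrete, here is the sort of thing that bites
whoever writes the DOS or Mac client.  The struct is invented, but
the problem is not:

    #include <stdio.h>

    /* Hypothetical object record.  A 16-bit DOS compiler, a 68k Mac
     * compiler and a 32-bit Unix compiler may each pick different
     * sizes, padding and byte order for it, so dumping it onto the
     * wire with write(fd, &it, sizeof it) has no single, portable
     * format. */
    struct item {
        char type;
        int  weight;   /* 2 bytes on DOS, 4 on most Unix machines */
    };

    int main(void)
    {
        printf("sizeof(struct item) = %lu\n",
               (unsigned long)sizeof(struct item));
        return 0;
    }

The only portable fix is to serialize every field explicitly, byte by
byte, on both ends -- and that is exactly the code which has to be
written and debugged on every platform.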

>   Binary will be much more compact.  Compare the number of bytes
>   needed in each case for a map update, for instance.  In an ASCII
>   packet, we will have the following things: command name (5 to 15
>   bytes), newline (1 byte), first coord (2 to 4 bytes), space (1
>   byte), second coord (2 to 4 bytes), space (1 byte), name of
>   object/picture (5 to 15 bytes), newline (1 byte), ... (repeated if
>   there are several objects), end marker (2 bytes). In a binary
>   packet, we will have: command number (2 bytes), number of objects
>   in the following block (2 bytes), first coord (2 bytes), second
>   coord (2 bytes), dynamic reference number for the object/picture (2
>   bytes), ... (may be repeated), and that's all.  Come on, don't tell
>   me that some compression protocol in CSLIP will compress ASCII 5
>   times better than binary!

CSLIP doesn't do that data compression.  It does Van Jacobson TCP/IP
header compression, which is a different thing entirely.  What you
mean are the V.42bis modem data compression modes.  And yes, the kind
of compression they do (in essence, find repeated patterns and assign
tokens of varying bit lengths according to the frequency of the
patterns) would recognize just the kind of redundancy which the
proposed ASCII protocol has and compress it away.
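
It takes five minutes to measure this yourself.  A sketch using
zlib's deflate as a stand-in (V.42bis itself is an LZW-style scheme
living in the modem, and the command name below is invented):

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    /* Rough feel for the numbers; link with -lz.  Deflate is not
     * the same algorithm as V.42bis, but it exploits the same kind
     * of redundancy in repetitive ASCII lines. */
    int main(void)
    {
        char in[8192] = "";
        unsigned char out[8192];
        uLongf outlen = sizeof(out);
        int i;

        for (i = 0; i < 100; i++)   /* 100 similar protocol lines */
            sprintf(in + strlen(in), "map_update %d %d goblin\n",
                    i % 20, i % 15);

        compress(out, &outlen, (const Bytef *)in, strlen(in));
        printf("%lu bytes -> %lu bytes\n",
               (unsigned long)strlen(in), (unsigned long)outlen);
        return 0;
    }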


>   ASCII won't be easier to debug.  Even if you are debugging the
>   protocol layer, you won't enter the data by hand in real time on a
>   telnet connection!

Why not?  I do that with all kinds of protocols almost daily.  It is
an invaluable tool.
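
For example (host, port and commands below are all made up; the point
is the technique):

    $ telnet game.server.edu 4000
    version 1 0                 (typed by hand)
    version 1 0 ok              (server's answer, readable as-is)
    move north                  (typed by hand)
    map 3 4 goblin              (server's map update, readable as-is)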

>   So you will need to write a little program that sends/receives the
>   packets and it will be easier if the packets are in binary form,
>   with fixed-length blocks.

No, you won't.  And the blocks (as you implicitly admit in the above  
paragraph) will not be fixed length in a binary protocol either.

>   If we want to have CrossFire on MACs and PCs, the protocol must be
>   designed in such a way that it is:

>   1) easy and fast to parse - binary is better.

Parsing speed is almost insignificant compared to the other tasks the
client has to perform.

>   2) compact to save bandwidth - binary is better.

Compression makes up for almost all of the difference and may in some
circumstances even produce a net win for ASCII.

Your argument reminds me of the one which says that the primary goals
of program design are to make it:

	1) Use as few resources as possible
	2) Run as fast as possible

So all programs should always be written in machine language.  When I
was younger, I used to believe that.


> So, tell me, what's the point in using ASCII?

I've given the long list a number of times.  Please re-read the
articles.  All that I can add is that I actually went and wrote a
binary protocol for a game like crossfire, and a client and a server
for it.  My aversion to binary protocols stems from that real-world
experience.

	Carl Edman