
Gammon Forum




Network Code - Thoughts, Changes, and Some Test Results

It is now over 60 days since the last post. This thread is closed.


Posted by David Haley   USA  (3,881 posts)  Bio
Date Fri 18 Jul 2003 09:28 PM (UTC)

Amended on Fri 18 Jul 2003 09:30 PM (UTC) by David Haley

Message
Hi folks. This might not necessarily be directly SMAUG-related; it's rather some thoughts on how MUDs in general handle network output.

To start, a few months ago I was investigating the way SMAUG - at least, the heavily modified SMAUG 1.0 codebase I work on - handles network traffic. By traffic, I mean everything regarding sending and receiving information (i.e. text) between the MUD (server) and players (clients).

I found to my surprise that it seemed fairly inefficient, and certainly contrary to the understanding I had of how the select function works and how it is meant to be used.

So, I did a little investigation into the main game loop (the game_loop function). In pseudo-code, this is what I found:


1) for every controlling descriptor (why are
there two created: port and port+1...?):
- create the descriptor sets (loop through
descriptors and add them - regardless of
whether or not a connection has output pending)
- select with a timeout of 0, so that it
returns immediately, modifying the sets
- accept any new connections on this controlling
descriptor
- idem for next controlling descriptor.

2) now that we have our sets made and rezeroed
and made and rezeroed and made (maybe you see
why this is becoming inefficient), we process
input on them. Nothing too bad there.

3) game logic

4) handle output

(NOTE: maybe 3 and 4 are in the other order-
I don't really remember)

5) sleep the amount of time required until next
clock tick. In other words, sit around doing
nothing until then.


Now, the output handling is where things really didn't seem right. For starters, the game would loop over and over and over and over until ALL pending output (unless it's over 4K in size) was sent to the client. So if there is network lag or some other problem, and we can't send a lot at a time, do we just sit around waiting forever? It seems that that's what it was doing. Furthermore, when you write once, you're supposed to select again, in order to make sure that the socket is still ready for output. And if you output again straight away, you run the risk of breaking something.

The other thing that bothered me is the time wasted while sleeping until it's time for the next clock tick. That just sounded completely wrong.



So that was the "preliminary study". Then I went to action.

First off, I converted everything to C++. Not a really hard task - I just had to rename a few variables here and there (like the mobs' "class" which obviously conflicts with the keyword). Then came the more interesting part.

I wrote from scratch some generic C++ network modules that handle text-based input and output, as well as a manager class that is in charge of (you guessed it :P) managing these sockets. The idea was to remove, as much as possible, the network code from the rest of the code. I'd always found that the network stuff was too intertwined with the rest of the game, and that the two really shouldn't have a lot to do with each other.

So after I had that code, I threw desc_data out the window, and created the class cPlayerConnection, which inherits from the generic cSocketConnection, which is the text handler. The player connection version is what has everything a MUD needs to do that isn't necessarily generic - things such as connection states, input receiver, that sort of stuff. (That's another thing I did. Input is sent to "receivers", instead of handled in an outside game loop.)

(continued)

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #1 on Fri 18 Jul 2003 09:29 PM (UTC)

Amended on Fri 18 Jul 2003 09:31 PM (UTC) by David Haley

Message
After all my network code was complete, I modified game_loop to the following:

            while (GameRunning)
            {
                long currentTime;

                currentTime = GetMillisecondsTime();

                struct timeval delayTime;
                long timeDifference = nextActionTime - currentTime;

                if (timeDifference > 0)
                {
                        delayTime.tv_sec = 0;
                        delayTime.tv_usec = 0;
                        while (timeDifference >= 1000)
                        {
                                delayTime.tv_sec += 1;
                                timeDifference -= 1000;
                        }
                        delayTime.tv_usec = timeDifference * 1000;
                }
                else
                {
                        delayTime.tv_sec = 0;
                        delayTime.tv_usec = 0;
                }

                // Only process sockets if the poll succeeded.
                if ( ConnectionManager->PollSockets( delayTime ) == false )
                {
                        TheWorld->LogBugString("There was an error polling the sockets!");
                }
                else
                {
                        if ( ConnectionManager->ProcessActiveSockets() == false )
                                TheWorld->LogBugString("There was an error processing the selected sockets!");
                }

                // need to handle waiting input lines here

                currentTime = GetMillisecondsTime();

                while ( currentTime >= nextActionTime )
                {
                        /*
                         * Run the game logic.
                         */
                        //printf("Pulse\n\r");
                        TheWorld->TimeUnit();

                        // Update next tick time.
                        nextActionTime += FRAME_TIME;

                        // Update current time.
                        currentTime = GetMillisecondsTime();
                        current_time = currentTime;
                }
            }
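As an aside, the seconds-stripping while loop in the code above can be collapsed into one division and one modulus. A minimal sketch (the helper name is mine, not from the actual codebase):

```cpp
#include <sys/time.h>

// Convert a non-negative millisecond delay into a struct timeval
// suitable for select(). Equivalent to the while loop above, but
// using integer division and modulus instead of repeated subtraction.
struct timeval MillisecondsToTimeval(long milliseconds)
{
    struct timeval delay;
    if (milliseconds < 0)
        milliseconds = 0;                         // already due: poll without blocking
    delay.tv_sec  = milliseconds / 1000;          // whole seconds
    delay.tv_usec = (milliseconds % 1000) * 1000; // remainder, in microseconds
    return delay;
}
```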


Alright, it's not perfect yet. I'm getting there. :) But first off, let me explain what I did.

Instead of polling connections once, then processing, then game logic, then sleep until next time, I do the following:


1) check how much time is left between now
and the next scheduled game_logic. This is delayTime.

2) poll sockets:
- construct FD set (once), select with delayTime.
- if any socket has input/output waiting, then
the poll sockets will immediately return control
to game_loop. Otherwise, it patiently waits until
the time expires.

3) process sockets:
- retrieve input if there is any
- send output if there is any
- if there was input, process it

4) now that we're done with network stuff, check
if it's time to run the game logic. If so, do so,
and schedule the next time. If not, start all over
again.
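The key to step 2 is select's timeout semantics: it returns as soon as a descriptor becomes ready, and otherwise sleeps out the remaining delay. A minimal self-contained sketch, using a pipe to stand in for a player socket (the names here are illustrative, not from the actual code):

```cpp
#include <sys/select.h>
#include <unistd.h>

// Wait up to 'milliseconds' for 'fd' to become readable.
// Returns true if the descriptor is ready, false if the timeout expired.
// This mirrors step 2 above: select either returns control immediately
// when there is traffic, or patiently waits out the delay.
bool WaitForInput(int fd, long milliseconds)
{
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(fd, &readSet);

    struct timeval delay;
    delay.tv_sec  = milliseconds / 1000;
    delay.tv_usec = (milliseconds % 1000) * 1000;

    return select(fd + 1, &readSet, NULL, NULL, &delay) > 0;
}
```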


So now, the major difference is that the sockets are checked continuously for input and output, and the game logic is run when it's time... and if it's not time we don't just sit there waiting, we check the sockets again.

The other important difference is that when a socket tries to write its output, it won't sit there until it's all gone. It'll write what it can - the send function returns how much actually got sent - and then it'll say oh well, wait till next time.



The code is fully implemented and works. Every 2 or 3 hours there is a crash, and the source is a problem in the network code. I'm working on fixing it. But, in any case, the results are extremely positive.

The positive side is that large amounts of text whizz by. Typing hlist used to make you sit there for a long while watching the text appear every 250 ms (because if the text was >4K, it would only be sent in small increments - one per clock tick, remember the sleep), not to mention that it would slow everyone else down while the game pounded on your socket trying to send the block. Now, bandwidth is distributed much more fairly, and as a result, when one person is receiving lots of text, everyone else can continue normally.

The downside to this "send-what-you-can" attitude is that there are very slight slowdowns when receiving blocks of text over the "allowable" amount (controlled by the OS - or perhaps the TCP/IP protocol - I really don't know). So for example, if you enter a room, you'll get roughly 80% of the description, and a split second later (or more if you're unlucky and have to wait for the game logic before the next socket polling) you get the rest of the description and your prompt. If the description is shorter, you might get all of it, and then all but two or three characters of your prompt. I've never actually bothered counting the amount of bytes sent, but it seems fairly regular.


In any case, all of this is just the presentation of some of the research I've done on the SMAUG code base, and the solutions I've applied to it. I haven't timed anything, but the speed is obviously much better. If someone wants to know, I can always test it out.

Sorry for the sloppiness of this little report. To be honest, I didn't feel like writing up a whole formal document... I figured that this would be enough to describe what I've done. :)

Nick: this is the work I mentioned a few posts back in the "Automatic shutdown" concerning what I was doing with SMAUG. Once I get this stable, I'm going to start work on fixing up the file system - and once that's done, I'll write up another little report on it. :)

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Nick Gammon   Australia  (23,158 posts)  Bio   Forum Administrator
Date Reply #2 on Fri 18 Jul 2003 11:03 PM (UTC)
Message
Sounds like you and I are working on similar things. :)

Quote:

For starters, the game would loop over and over and over and over until ALL pending output (unless it's over 4K in size) was sent to the client.


This seems wrong, and indeed may explain the problem encountered in this post:


http://www.gammon.com.au/forum/bbshowpost.php?bbsubject_id=2880


Quote:

The other thing that bothered me is the time wasted while sleeping until it's time for the next clock tick. That just sounded completely wrong.


You are right - that is what "select" is supposed to do.

In fact, standard SMAUG is wackier than that, it does multiple selects, eg.


    accept_new( control  );
    accept_new( control2 );
    accept_new( conclient);
    accept_new( conjava  );


Each of the above does a select - seems strange.

Then it does another select to send the output.

Quote:

4) now that we're done with network stuff, check
if it's time to run the game logic. If so, do so,
and schedule the next time. If not, start all over
again.


The only problem I see with what you are doing is that the game logic runs after a minimum time, but with no maximum. Say you want to do things every 2 seconds, but it happens to take 3 seconds to process all input and output. Now the game logic has blown out to a second late.

What I was thinking of doing was this:



  1. Work out which sockets needed to be in the select and do a FD_SET.

  2. Do the select with a timeout of a fixed amount (say, 1/4 of a second).

  3. Loop through all sockets we chose in step (1) above.


    • Process input/output/connection for that socket
    • See if it is time to do a game action (eg. fight round)
    • Do the next socket





This makes the game logic happen more accurately when it should.

The next thing I thought of was that, rather than having a "game logic" loop that checked everything every couple of seconds, it would be quicker and more natural to have an event queue. That way, if there was nothing due to happen, the event queue would be empty, and the main loop could just keep going.

I thought a STL priority_queue might be the ticket. The priority level would be the event time, thus multiple events due at the same time would be queued in the correct order (eg. fight round in 1 second, mob moves in 5 seconds, spell wears off in 30 seconds).

For instance, if a spell is cast that wears off in 30 seconds, you just add an event to the event queue, in pseudocode:

event_queue.push (new event (30, cancel_spell, player));

Then the main loop just has to look at event_queue.top to find the first item (which will therefore be the next one due). If it isn't due yet, it just leaves it in the queue; otherwise it does a pop.

BTW - how do you do the GetMillisecondsTime function? I have one for Windows, and remember there was something like that for Unix, but can't remember its name.


- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by Nick Gammon   Australia  (23,158 posts)  Bio   Forum Administrator
Date Reply #3 on Sat 19 Jul 2003 02:55 AM (UTC)

Amended on Sat 19 Jul 2003 02:57 AM (UTC) by Nick Gammon

Message
I have been experimenting with the idea of using priority_queue, and it seems to be working pretty well.

I made a class CEvent, which is pretty simple at present, but is basically something that you can put into a priority_queue. It currently outputs its message to cout, however you could obviously make it more complex ...


class CEvent
  {

  public:

  CEvent (const int iSecs, const string sMsg) 
    : m_iWhen (iSecs + time (NULL)), m_sMsg (sMsg) {};

  // copy constructor 
  CEvent (const CEvent & rhs) 
    : m_iWhen (rhs.m_iWhen), m_sMsg (rhs.m_sMsg) { };

  // operator=  (assign one event to another)
  const CEvent & operator= (const CEvent & rhs)
    {
    if (this != &rhs)
      {
      m_iWhen = rhs.m_iWhen;
      m_sMsg = rhs.m_sMsg;
      }
    return *this;
    };

  // operator< (for sorting)
  bool operator< (const CEvent & rhs) const
    {
    // we compare > because the sooner events have the
    // higher priority
    return  m_iWhen > rhs.m_iWhen;
    };

  void DoIt (void) { cout << m_sMsg << endl; };

  int GetTime (void) const { return m_iWhen; };

  private:

  int m_iWhen;    // when event fires
  string m_sMsg;    // what to say

  };  // end of class CEvent


The copy constructor and operator= are for STL to manipulate it in the queue, and the operator< is for working out which has higher priority. The priority queue actually returns the highest priority "thing" so I reversed the sense of the test, as I wanted the earliest event (the one with the lowest time) to be done first.

Now we create an instance of the priority queue ...


priority_queue<CEvent, deque<CEvent> > m_events;


Then in the server loop I check the event queue head for every socket, as I suggested earlier, so that lengthy processing will not cause "event creep".


  /* loop through all connections */
for ( iter = m_SocketList.begin (); iter != m_SocketList.end (); iter++)
  {

  time_t t = time (NULL);

  // pull out events that need doing
  while (!m_events.empty ())
    {
    CEvent e = m_events.top ();

    if (e.GetTime () > t)
      break;  // not yet

    m_events.pop ();  // remove from queue

    e.DoIt ();  // do the event
  
    }  // end of events loop

  
  CSocket * pSocket = *iter;


  // process a socket here

  } // end of loop



However I think perhaps this is overkill if all the loop above does is send/receive comms. Provided the actual processing (eg. of player input) is done elsewhere, this is probably more than is needed.

What I would probably do next is process player input by breaking it up at newlines and then queue them into another queue (outstanding input queue), and then process that. The test for events could then be moved inside that queue - although then there is a danger that if no player types commands then queued events aren't done. :)
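That newline-splitting step might look something like this (a sketch with made-up names, not SMAUG's actual buffers):

```cpp
#include <deque>
#include <string>

// Split any complete lines off the front of 'buffer' and append them
// to 'commands'. A trailing partial line (no newline yet) stays in the
// buffer, to be completed by the next read.
void QueueCompleteLines(std::string & buffer, std::deque<std::string> & commands)
{
    std::string::size_type pos;
    while ((pos = buffer.find('\n')) != std::string::npos)
    {
        std::string line = buffer.substr(0, pos);
        // strip a carriage return left over from a CR/LF pair
        if (!line.empty() && line[line.size() - 1] == '\r')
            line.erase(line.size() - 1);
        commands.push_back(line);
        buffer.erase(0, pos + 1);
    }
}
```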

An interesting side-effect of the way SMAUG is implemented is that the player with the lower descriptor number will always be processed first - perhaps it would be fairer to put all player input into a vector, shuffle it, and then pull it out again (ie. give them a random chance of being at the head of the queue).
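That shuffle could be as little as the following sketch (descriptors reduced to plain ints for illustration; std::shuffle is the modern form of the STL call):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Randomise the order in which player descriptors are processed this
// cycle, so the lowest descriptor number doesn't always go first.
void ShuffleProcessingOrder(std::vector<int> & descriptors)
{
    static std::mt19937 rng(std::random_device{}());
    std::shuffle(descriptors.begin(), descriptors.end(), rng);
}
```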

Here is my test for putting events into the queue ...


server.m_events.push (CEvent (10, "after 10 secs"));
server.m_events.push (CEvent (5, "after 5 secs"));
server.m_events.push (CEvent (5, "another after 5 secs"));
server.m_events.push (CEvent (7, "after 7 secs"));

Output ...

after 5 secs
another after 5 secs
after 7 secs
after 10 secs


This worked as advertised - the messages appeared in the correct order, and after the correct time.

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #4 on Sat 19 Jul 2003 09:14 AM (UTC)
Message
Let me let all this digest, I'll reply more fully later... but for now, let me reply to the easy things. :)

Quote:

This seems wrong, and indeed may explain the problem encountered in this post:

http://www.gammon.com.au/forum/bbshowpost.php?bbsubject_id=2880


*nod* "Resource temporarily unavailable" sure sounds like what would happen if you try to write to a socket that isn't ready.

Actually, I hadn't noticed the loop-till-all-is-sent as a problem... I assumed that since SMAUG did it, and would have had lots of problems if it didn't work, it must have worked. It was my father (who is also a computer scientist, and happens to specialize in networking...) who told me it was bogus and that it should be redone to be "right". And if indeed that other thread's problem comes from this, then apparently the code didn't work after all. :)


Quote:

BTW - how do you do the GetMillisecondsTime function? I have one for Windows, and remember there was something like that for Unix, but can't remember its name.


It's actually just a little wrapper function I wrote, so that it would be easily portable between Windows and Unix systems:


long GetMillisecondsTime()
{
#ifdef unix
        struct timeval resultTimeval;
        gettimeofday( &resultTimeval, NULL );
        // First off... tv_sec is seconds since the Epoch
        // this is generally Jan 1st 1970.
        // now we don't want to multiply this by 1000, since
        // it might overflow... so first, subtract the seconds
        // from jan-1-1970 to jan-1-2000

        resultTimeval.tv_sec -= 946080000; // 30 years

        // convert to milliseconds
        return resultTimeval.tv_sec * 1000 + resultTimeval.tv_usec / 1000;
#else
        #ifdef WIN32
        // Under Windows, the time is the time since system startup.
        // Knowing Billysoft, this'll never be more than 5 minutes *wink*
        // but seriously, we don't need to worry about overflow here
                return timeGetTime();
        #endif
#endif
}
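For reference, the same wrapper can now be written portably with std::chrono (C++11, so it wasn't an option when this thread was written; shown only as a modern alternative, and the function name is mine):

```cpp
#include <chrono>

// Milliseconds since an arbitrary fixed epoch (steady_clock's own),
// suitable for the same relative-time arithmetic game_loop does.
// The count is 64-bit, so the 30-year epoch adjustment used above to
// dodge 32-bit overflow is no longer necessary.
long long GetMillisecondsTimeChrono()
{
    using namespace std::chrono;
    return duration_cast<milliseconds>(
        steady_clock::now().time_since_epoch()).count();
}
```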


To solve the game_logic inaccuracy, basically what you're suggesting is to add in more checks to see if it's time? I've done some profiling, and it seems that processing the sockets never takes more than a few milliseconds. Granted, to do it "right", the timing would need to be more accurate... so I guess that you're correct, it probably should be done right. :)

The priority queue certainly sounds interesting. I'm going to have to examine the implications it would have on the MUD and how things would work, but it certainly sounds interesting. I'll reply later on with more on this, once everything settles down and I have the time to properly think it over. All of this is frontier-terrain for me, up till now I've contented myself with simply editing features of the code-base, not actually changing its core... so I need time to digest information. :)


David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Nick Gammon   Australia  (23,158 posts)  Bio   Forum Administrator
Date Reply #5 on Sat 19 Jul 2003 10:29 AM (UTC)
Message
Good. I've been experimenting a bit more and have changed the queue to a queue of pointers, principally to make it easier to have derived classes from CEvent that can all do something different.

With the event time problem, I agree that processing the network events probably won't take too long, but I was thinking of the case where you actually handle player input, and a player happens to type something that is CPU- or disk-intensive.

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by Nick Gammon   Australia  (23,158 posts)  Bio   Forum Administrator
Date Reply #6 on Sat 19 Jul 2003 09:15 PM (UTC)
Message
Quote:

Furthermore, when you write once, you're supposed to select again, in order to make sure that the socket is still ready for output. And if you output again straight away, you run the risk of breaking something.

The other thing that bothered me is the time wasted while sleeping until it's time for the next clock tick. That just sounded completely wrong.


I'm taking a guess here that whoever did that made changes to code that was initially working properly without fully understanding what they were doing. I would surmise that this happened ...


  1. It initially had one "select" statement, and worked as intended.

  2. They wanted to add more connection ports (hence the reference to conclient and conjava) but rather than just adding them to the first select statement decided to copy and paste some code, and thus got multiple select statements (that still did a wait).

  3. Then they noticed that the two (or more) select statements, each with a wait timeout, caused very laggy behaviour, because you would go through the wait time for each one.

  4. They took out the timeout for each one, to remove this lag.

  5. Then they noticed that whilst the lag was gone, the program now used about 99% CPU because it was constantly doing the select statements without a timeout.

  6. They added a "sleep" to slow the program down a bit.



The net effect was that it sort-of worked, but it was implemented the wrong way. The single select statement (with a non-zero timeout) is really the only proper way of doing it.

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by Nick Gammon   Australia  (23,158 posts)  Bio   Forum Administrator
Date Reply #7 on Sat 19 Jul 2003 11:46 PM (UTC)
Message
Quote:

Furthermore, when you write once, you're supposed to select again, in order to make sure that the socket is still ready for output.


Actually, I don't think this is strictly true. The select tells you that you can write *something*; however, that something might be only one byte.

On the other hand, you may write 100 bytes, but be able to write another 100.

What I would do is:


1. When the select indicates I can write ...

2. Write some from the output buffer

3. See how many bytes were written

4. If all (and I have more to be written), go to step 2

5. If not all, push the remaining (unwritten) ones back to the start of the output buffer, and exit.



This lets you empty the output buffer quickly - of course you don't crash if you get a EWOULDBLOCK error, because you are expecting it.

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #8 on Sun 20 Jul 2003 12:37 PM (UTC)
Message
Wow, you never stop, do you. :) Let me try to organize this so I can reply to everything you added:
Quote:

With the event time problem, I agree that processing the network events probably won't take too long, but I was thinking if you actually handled player input, and they happened to type something that was CPU or disk intensive.

You're right. My current setup processes input immediately after reading it in. I didn't fall into the trap of "process only if you had input this cycle", but still, it would be better to rearrange it slightly.

So I think the best thing would be to rearrange the main loop slightly, so that we:

  • poll sockets (select)
  • read/write (process sockets in my code)
  • a new sub-loop:

    • do one player's input (look for newlines, commands, etc.)
    • check if it's time for an update, OR, process the event queue (more later)
    • move to next player's input



That should address the accuracy problem. More on the event queue later...

Quote:

I'm taking a guess here that whoever did that made changes to code that was initially working properly without fully understanding what they were doing. I would surmise that this happened ... [...]

*nod* That was my general assessment too. Somebody did something without really understanding how it worked... the classic "hack and slash" technique to programming. Actually, I think that things like that happened all over SMAUG, which is why there are many oddities in many random places, sometimes irrelevant, sometimes very relevant (like this select business.)

Quote:

Actually, I don't think this is strictly true. The select tells you that you can write *something* however that something might be only one byte.

On the other hand, you may write 100 bytes, but be able to write another 100.

What I would do is: [...]


Ah, yes, I hadn't thought of using the EWOULDBLOCK error to indicate "stop sending data until the next select". I'll modify my network code to reflect that (not at home right now), and I'll let you know how it goes.

Quote:

RE: event queue


I like the event queue idea a lot. It seems that while it doesn't directly solve any outstanding and obvious problems, it DOES add immense features to the game... notably the ease of adding events (your example of spell expiration is an excellent usage of such a system.)

One problem is that the class you showed is precise to the second, instead of to the millisecond. That's not necessarily a big deal and is trivial to fix. However, perhaps millisecond precision is overkill, so maybe the best system would be to count it in frame ticks. (This is generally 250ms, I believe.)

I had another idea that would make these event queues more convenient to use for repetitive actions. An event would be flagged as "repeating every x ticks", or optionally "repeating every x ticks, y times". When it is popped, if it's repeating, it's put right back on the queue by the event handler, with the right amount of time reset, unless we've repeated y times already.

This also has all sorts of nifty applications. Imagine a chain-reaction spell that would damage once every 5 seconds. The spell could optionally be passed a parameter, how many times it's been repeated already, so that it handles its damage accordingly.

Anyways, the main application for this was for the update() functions, like mob_update(), char_update(), violence_update(), etc., which are scheduled extremely regularly. Such a solution would allow them to be very easily re-scheduled.

If the queue is a queue of pointers, then this sort of thing is even easier, with derived classes for different kinds of events (repeating, repeating x times, or even repeating x times, where the duration in between each repeat increases the more it repeats!)
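A minimal sketch of that repeating variant, using a queue of pointers and abstract tick counts (all names here are mine; Nick's CEvent above differs in its details):

```cpp
#include <queue>
#include <vector>

// An event scheduled at an absolute tick. Repeating events re-queue
// themselves with a fixed interval until they have fired maxFires
// times (maxFires of 0 means repeat forever).
struct Event
{
    long when;        // tick at which the event fires
    long interval;    // 0 = one-shot; otherwise re-queue every 'interval'
    int  maxFires;    // stop after this many firings (0 = unlimited)
    int  fired;       // how many times it has gone off so far

    Event(long when_, long interval_ = 0, int maxFires_ = 0)
        : when(when_), interval(interval_), maxFires(maxFires_), fired(0) {}
};

// Earlier events get higher priority, hence the reversed comparison.
struct LaterFirst
{
    bool operator()(const Event * a, const Event * b) const
        { return a->when > b->when; }
};

typedef std::priority_queue<Event *, std::vector<Event *>, LaterFirst> EventQueue;

// Fire everything due at or before 'now'. Repeating events go back on
// the queue with their next time; finished events are deleted.
// Returns how many events fired this call.
int RunDueEvents(EventQueue & queue, long now)
{
    int count = 0;
    while (!queue.empty() && queue.top()->when <= now)
    {
        Event * e = queue.top();
        queue.pop();
        ++e->fired;
        ++count;
        if (e->interval > 0 && (e->maxFires == 0 || e->fired < e->maxFires))
        {
            e->when += e->interval;  // schedule the next repetition
            queue.push(e);
        }
        else
            delete e;
    }
    return count;
}
```

Because the re-queued event's time strictly increases, an auto-repeated event can never loop forever within one call, which addresses the back-on-the-head-of-the-queue danger mentioned below.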

Actually, I'm all excited about this :) It sounds like a truly excellent idea, and I'm going to implement it as soon as possible.

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Nick Gammon   Australia  (23,158 posts)  Bio   Forum Administrator
Date Reply #9 on Sun 20 Jul 2003 08:58 PM (UTC)
Message
Once again, we are thinking on very similar lines. I have modified the event idea already to do most of what you suggested, I'll post a copy once I have had breakfast. :)


  1. Granularity of milliseconds - whilst this might be overkill, it might make it practical to sandwich commands from players amongst each other (eg. if player A sends "north" 10 times in a single packet, and player B sends "east" in different packets - different clients maybe), then they could still be acted upon in a fairer way.

  2. Auto-repeat of events - with the event class remembering the interval

  3. Event class can change the interval - eg, lengthen, randomise - so the repeated events aren't too predictable

  4. Made the queue into a queue of pointers so different sub-classes can be easily queued

  5. Resolved a couple of issues with auto-repeat, so that auto-repeated events don't end up back on the head of the queue and get done twice with the potential for an infinite loop.

  6. Events count how many times they have fired, so derived classes can find that out.



That is probably enough for the base event class, and derived classes can, of course, remember all sorts of things (eg. what spell they are, what player they are for, and so on).

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #10 on Mon 21 Jul 2003 07:43 PM (UTC)
Message
I updated the network code to add in your suggestion concerning select and sockets blocking on write. Here is my code:


bool cSocketConnection::InOutSet()
{
	/*
	 * Short-circuit if nothing to write.
	 */
	if ( OutputLength == 0 )
		return true;

	int bytesWritten = 0; // write/send return a count (or -1), which can exceed a short

	while ( OutputLength > 0 )
	{
		// Write as much of buffer as possible, topping out at 512 bytes.
		#ifdef unix
			bytesWritten = write( FileDescriptor, OutputBuffer.c_str(), MIN(OutputLength, 512) );
		#else
			#ifdef WIN32
			bytesWritten = send( FileDescriptor, OutputBuffer.c_str(), MIN(OutputLength, 512), 0 );
			#endif
		#endif

		if (bytesWritten < 0)
		{
			if (errno == EWOULDBLOCK || errno == EAGAIN)
				break; // this is normal, and can happen,
				// so just stop for now

			return false; // Something went wrong!
		}

		if (bytesWritten == 0)
			continue;

		if (bytesWritten < OutputLength)
			OutputBuffer = OutputBuffer.substr( bytesWritten );
		else if (bytesWritten == OutputLength)
			OutputBuffer = "";

		OutputLength -= bytesWritten;
	}

	// All good!
	return true;
}


It seems that it doesn't change anything, on my local server at least. I'm going to try running this on the remote server (where real network lag has an effect) and see what happens.

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #11 on Mon 21 Jul 2003 07:47 PM (UTC)
Message
I want to declare myself the dumbest person ever.


// Write as much of buffer as possible, topping out at 512 bytes.
#ifdef unix
    bytesWritten = write( FileDescriptor, OutputBuffer.c_str(), MIN(OutputLength, 512) );


After removing the MIN check and leaving it at OutputLength, it worked perfectly.

Mental note to self: remember to read your own code carefully.

I must have put that in there because that's what SMAUG did. If the buffer was above 4k in size, it would send bits of 512... and I guess that was because if it was too big it'd lag out due to the loop problem. Now that the network code is smarter... it works just fine...

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Nick Gammon   Australia  (23,158 posts)  Bio   Forum Administrator
Date Reply #12 on Mon 21 Jul 2003 08:55 PM (UTC)
Message
Good idea - let the network subsystem take what it can. Probably another example of a kludge trying to fix something without knowing why it was happening.

- Nick Gammon

www.gammon.com.au, www.mushclient.com








Information and images on this site are licensed under the Creative Commons Attribution 3.0 Australia License unless stated otherwise.