
Gammon Forum


MUSHclient variables

It is now over 60 days since the last post. This thread is closed.


Posted by Tsunami (USA, 204 posts)
Tue 01 Aug 2006 06:03 PM (UTC)
I was wondering whether there is any limit on the length of MUSHclient variables, either hardcoded or practical. I'm using them to store very large arrays (why did I feel like abbreviating that VLA?). To serialize the tables, I convert them to XML, compress that with the zlib compression access Lua offers, and finally run the result through the base64encode function so it can be stored as a variable. So far I've had no problem with variables around 5,000 characters in length, but as that number gets higher, I was wondering if there would be any problem. Thanks -Tsunami

Hrm, the VLA is that low-frequency telescope array, isn't it...? That's why it sounded familiar, heh
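
A minimal sketch of the pipeline described above, assuming MUSHclient's Base64Encode/Base64Decode and SetVariable/GetVariable script functions, with utils.compress/utils.decompress standing in for the zlib access mentioned (the serialize helper here is hypothetical):

  -- serialize the table to XML, compress, then encode for safe storage
  local xml = serialize (my_table)   -- hypothetical table-to-XML routine
  local packed = Base64Encode (utils.compress (xml), true)  -- true = multi-line output
  SetVariable ("my_table", packed)

  -- and the reverse trip when loading:
  local restored = utils.decompress (Base64Decode (GetVariable ("my_table")))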

Posted by David Haley (USA, 3,881 posts)
Reply #1 on Tue 01 Aug 2006 06:18 PM (UTC)
My assumption would be that the variables are dynamically sized, and so could grow as much as you're likely to need them.

There's a pretty easy way to find out, though; you can generate an array with several million random elements in it, and see what happens when you serialize and store it. :-)
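
Along those lines, a quick stress test might look like this (again treating serialize as a stand-in for whatever table-to-XML routine is in use, and utils.compress as the zlib binding):

  -- build a large random array and see how big the stored variable gets
  local t = {}
  for i = 1, 1000000 do
    t [i] = math.random ()
  end -- for

  local packed = Base64Encode (utils.compress (serialize (t)), true)
  SetVariable ("stress_test", packed)
  Note ("Stored " .. string.len (packed) .. " characters")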

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Nick Gammon, Forum Administrator (Australia, 23,120 posts)
Reply #2 on Wed 02 Aug 2006 12:08 AM (UTC)
I trust you are using the multi-line option on Base64Encode, so that you don't have very long lines.

I don't think there is any particular limit on variable contents; however, be aware that the XML parser has a hard limit of 1,024,000 bytes, after which it will refuse to read the file (the state file or world file, or wherever you are storing them).

I think you would get away with writing a larger file; it just wouldn't read back in.

Because of that, you should be able to work out your limit. Say the compression gives you a 50% saving (you lose some of that to Base64Encode); then you might be able to squeeze in around 2 MB of data before compression.
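
As a rough back-of-envelope check, treating the limit as exactly 1,024,000 bytes, Base64 overhead as 4 characters per 3 bytes, and compression as an assumed 50% saving:

  local limit = 1024000               -- XML parser hard limit, in bytes
  local compressed = limit * 3 / 4    -- Base64 stores 3 bytes in every 4 characters
  local original = compressed * 2     -- assume compression halves the data
  Note ("About " .. original .. " bytes")   -- 1536000, i.e. roughly 1.5 MB

Nick's 2 MB figure assumes a somewhat better ratio, which zlib can manage on repetitive XML.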

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by Tsunami (USA, 204 posts)
Reply #3 on Wed 02 Aug 2006 08:29 PM (UTC)
Hmm, no, I wasn't using the multi-line option. Why is that necessary? It's not like it's human-readable in any case, so it seems to me that would only add some newlines and a little (albeit minor) bloat.

Also, I'd assume the 1,024,000-byte hard limit applies to the Lua XML read function, which simply provides access to whatever MUSHclient uses internally, correct? Thanks!

Posted by Nick Gammon, Forum Administrator (Australia, 23,120 posts)
Reply #4 on Wed 02 Aug 2006 09:48 PM (UTC)
I was thinking that if you ever needed to edit the file in a word processor, the line breaks might make it more readable. However, it probably doesn't matter.

The 1,024,000-byte limit is there because MUSHclient reads the entire XML document (whether from a file or provided in memory via Lua) into memory, so it can be quickly parsed "in situ".

This applies to world files, state files, and the Lua parsing interface as well.

The limit is designed to stop someone accidentally trying to read a 50 MB file into memory, possibly causing massive slowdowns or other problems.

I suppose that with modern PCs having more memory available, the limit could be increased to (say) 5 MB; however, if you need that much, perhaps the problem should be solved a different way?
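
For plugin authors, a defensive check before parsing is straightforward. Here utils.xmlread is assumed to be the Lua XML reading function this thread refers to, and the limit constant is ours:

  local XML_LIMIT = 1024000  -- 5,120,000 from version 3.78 onwards

  -- 'xml' holds the document string about to be parsed
  if string.len (xml) > XML_LIMIT then
    ColourNote ("white", "red",
                "Saved data too large to re-load: " .. string.len (xml) .. " bytes")
  else
    local parsed = utils.xmlread (xml)
    -- ... walk the parsed node table here ...
  end -- if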

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by David Haley (USA, 3,881 posts)
Reply #5 on Wed 02 Aug 2006 10:00 PM (UTC)
I agree that perhaps the problem could be solved differently, but maybe you could have it prompt the user if they try to open a file greater than 1 MB? That way, if someone really means to, they can, but if they did it by accident it will help them.

For functions, a prompt might not be practical, so there could perhaps be an override flag as a function parameter.

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Nick Gammon, Forum Administrator (Australia, 23,120 posts)
Reply #6 on Wed 02 Aug 2006 11:53 PM (UTC)
I think it would be annoying to have that pop up every time Tsunami's plugin loaded its state file.

I have increased the value in version 3.78 to 5,120,000 bytes. Hopefully that will be enough.

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by Shadowfyr (USA, 1,788 posts)
Reply #7 on Thu 03 Aug 2006 04:34 PM (UTC)
You know, the default "load everything" method of file handling never made sense to me, as far back as DOS, when "edit" refused to load anything over 64 KB. An add-on replacement for Command.com, called 4DOS, included a "list" command to enhance "more", which worked a bit more sanely imho: it loaded a set amount of the file into a memory buffer, then only loaded new sections as needed. A lot of games, like the old Ultima series, used similar tricks to allow large worlds, by keeping the "current" part of the world, and the two closest chunks to your location, loaded.

Point being? While the coding might get more complex, it's not sane, if you are likely to have large files but low memory, to try to load the entire thing instead of buffering the section you will need next, before you need it. The same 1 MB limit that existed could be split into two 512 KB sections, one for the "active" chunk and the other for the "next" chunk, and still solve the problem without adding a lot more overhead. A 2, 3, or 4 MB file is going to take more time to load anyway. While increasing the load size is bound to help, it is a lazy solution imho, and one that will just cause problems again the next time someone comes up with an even bigger file. As scary as it is to contemplate an even bigger one... lol
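
A minimal sketch of that double-buffering idea, in plain Lua file I/O (the file name and chunk size are illustrative):

  local CHUNK = 512 * 1024  -- 512 KB per buffer

  local f = assert (io.open ("bigfile.xml", "rb"))
  local active = f:read (CHUNK)    -- chunk being processed now
  local pending = f:read (CHUNK)   -- chunk pre-fetched for later

  while active do
    -- ... process 'active' here ...
    active, pending = pending, f:read (CHUNK)  -- rotate buffers
  end -- while

  f:close ()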

Posted by David Haley (USA, 3,881 posts)
Reply #8 on Thu 03 Aug 2006 04:54 PM (UTC)
It's actually a lot faster to make a few big reads rather than many smaller ones. (This is due in part to how hard drives and disk read caching work. Of course, if the file is fragmented, you won't get nearly as much gain.) Also, programs like 'more' and games have very different requirements than a text editor. In a game you can't jump all over the world; as you say, it uses adjacency. But in a text editor you are likely to need to jump all over the file quickly, and it would be annoying to wait for big disk loads every time.

Personally I think the sanest approach is to buffer 'as much as possible' (or 'as much as reasonable'), given the memory of the machine, and just live with disk loads if you don't have the choice. Either that, or let the user specify where they want to cap file sizes (if at all).

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Tsunami (USA, 204 posts)
Reply #9 on Thu 03 Aug 2006 09:03 PM (UTC)
Well, I hadn't meant to imply that the current limit wasn't enough for my needs. I did some calculations, and I should be able to serialize a table of about 14,000 keys before I hit the limit, which is plenty. I'm looking at a maximum of about 5,000 keys right now.

Given that we're talking about MUSHclient XML files here, the reading in chunks and buffering Shadowfyr mentioned doesn't make much sense to me. Either you will require all of the data or none of it; I can't think of a circumstance in which the program would want to load only part of the file. As Ksilyan said, it's then faster to make a few big reads, given that there is no reason to make small ones. For other kinds of files, i.e. game files where only partial data is needed at a time, chunking makes more sense.

Also, my personal philosophy in these kinds of matters has been to favour giving the user more power, to a certain extent, because it is their responsibility. If they wish to load a 50 MB file for some reason, they should be allowed to do that. I might put in a warning, like Ksilyan mentioned, but I'd rather give them the capability to screw themselves up than take that away and introduce what might become unreasonable limits under certain circumstances.

This applies in this situation because to the average user it is transparent; only the people who write plugins, or have this specialized knowledge, know the limit exists. Obviously, if this were a problem that affected the average user, my view would be different, since I have all too much experience with the holes people can dig themselves into without any help, heh.

Posted by Nick Gammon, Forum Administrator (Australia, 23,120 posts)
Reply #10 on Thu 03 Aug 2006 10:05 PM (UTC)
Quote:

The default "load everything" method of file handling never made sense to me as far back as DOS, when "edit" refused to load anything over 64k...


Yes, true, but this is slightly different. The stuff in the plugin is going to end up in memory anyway, so reading it in pieces only shifts the problem a bit.

In this example we have a number of large chunks of memory potentially being used:


  • The XML file

  • The parsed XML in memory

  • The "base64-encoded" variable it ends up in

  • The decompressed data after he decompresses it

  • The Lua tables it gets put into


Reading in the XML file in small pieces only reduces one of those five.
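
One partial mitigation on the scripting side, assuming each stage is only needed transiently, is to drop references as each conversion completes so Lua's garbage collector can reclaim the earlier copies (parse_into_table is hypothetical):

  local packed = GetVariable ("my_table")   -- the base64 text
  local compressed = Base64Decode (packed)
  packed = nil                              -- release the encoded copy
  local xml = utils.decompress (compressed)
  compressed = nil
  local t = parse_into_table (xml)          -- hypothetical XML-to-table step
  xml = nil
  collectgarbage ()                         -- force a collection cycle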

- Nick Gammon

www.gammon.com.au, www.mushclient.com

Posted by Shadowfyr (USA, 1,788 posts)
Reply #11 on Fri 04 Aug 2006 05:03 PM (UTC)
True, but it does reduce one of them. So, unless you are keeping the file in memory permanently, it doesn't make much sense to waste the extra space. Well, at some point it stops making sense anyway... This is why most systems, including databases and even the hard drive itself, use caching techniques: read more than is needed, so that when you do need it, it doesn't have to be loaded again. But, as you say, in this case it might not do much. Mostly I was thinking in terms of what happens if someone has a lot of these plugins and is already pressing the limit of their system memory on an older machine. For most of us, it hardly matters.


Information and images on this site are licensed under the Creative Commons Attribution 3.0 Australia License unless stated otherwise.