
Gammon Forum


Finding TCP/IP Addresses

It is now over 60 days since the last post. This thread is closed.



Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #15 on Mon 17 Mar 2008 11:47 PM (UTC)
Message
If you were talking about tcptraceroute, it would have helped to say so instead of talking about the traceroute program. :P It wasn't clear whether you were talking about something else or were simply mistaken about how traceroute works.

Still, I wouldn't be so harsh on the network administrators or Microsoft for making the decisions they did. As Nick said, they are trying to plug holes. Yes, it's an endless battle, but if you never plug any holes, you're going to be even worse off than if you plugged what you can.

For instance I think it makes sense to block most incoming ICMP requests. For the vast majority of desktop computers, I don't see any reason to sit there responding to pings: it's just a vector for denial of service, and (again for the vast majority of desktops) you have no need to ping the machine anyhow. Very, very few people care about tracerouting into a desktop machine. Windows isn't really designed to be fixed over the network, especially from one LAN to another.
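A policy like that comes down to a few firewall rules. As a sketch, in Linux iptables syntax (treat this as illustrative rather than a recommended ruleset; the right rules depend on your distribution's defaults and the rest of your chain):

```shell
# Drop incoming pings, but still allow our own outgoing pings
# and the replies they generate. Requires root.
iptables -A INPUT  -p icmp --icmp-type echo-request -j DROP
iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT  -p icmp --icmp-type echo-reply   -j ACCEPT
```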

Quote:
Won't that apply to ICMP packets too?

Indeed. The new ICMP method works by encoding the route in the message itself: as the packet passes through routers, each one tacks itself onto the route, and when the packet is eventually discarded (or reaches the destination) the return route starts being recorded onto it the same way.

So indeed there is no guarantee that the different packets will all take the same route, but presumably the ICMP route information will indicate which route a given packet took.
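For contrast with that route-recording scheme, the classic traceroute discussed earlier infers the path one hop at a time: it sends probes with increasing TTLs and notes which router sends back ICMP Time Exceeded. A toy simulation of the mechanism (no real packets are sent; the hop names are invented):

```python
def forward(path, ttl):
    """Walk the hop list decrementing TTL; return the router where
    TTL hits zero (it would send ICMP Time Exceeded), or the
    destination if the packet survives the whole path."""
    for hop in path:
        ttl -= 1
        if ttl == 0:
            return hop
    return path[-1]

def traceroute(path):
    # Probe with TTL 1, 2, 3, ... -- each probe exposes one more hop.
    return [forward(path, ttl) for ttl in range(1, len(path) + 1)]

route = ["gateway", "isp-core", "backbone", "destination-host"]
print(traceroute(route))  # reveals the hops in order
```

Since each probe is an independent packet, a real network could route probe 3 differently from probe 4, which is exactly the no-single-route caveat above.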

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Shaun Biggs   USA  (644 posts)  Bio
Date Reply #16 on Tue 18 Mar 2008 12:09 AM (UTC)
Message
Quote:
Remember some years back we had some hiccups in the backbone running from here to California on the way to NY, then Europe. Some twit set their router to block ICMP

Eh, Hawaii was cut off from a good chunk of the blogosphere for a bit when some knucklehead set a NAT table wrong in California. That incident involved blocked TCP/IP traffic, so the problem is not restricted to ICMP.

Quote:
if you need to fix your neighbor's computer, you can't tell the difference between an ICMP block and a disconnected modem.

Actually, a disconnected modem has one less LED glowing. :p And yes, that is an annoyance of a firewall, but firewalls are designed to block things, so I say: job well done.

Quote:
For example, there is no guarantee that you're actually tracing a single route to a host; it could just be that different routes got the different TTL packets.

Yeah, the Intarweb is broken. IP actually has no built-in mechanism for any computer to tell where any other computer is. That is what routing tables are for. If a backbone somewhere decides that it will send you to foo instead of bar from now on, that will be the new route. It's one of those things where you just send something out and hope it gets to the correct place.
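That "send it out and hope" behavior is really a chain of independent per-router table lookups. A minimal sketch of the longest-prefix-match rule a router applies at each hop (the prefixes and next-hop names here are invented):

```python
import ipaddress

# Toy forwarding table: destination prefix -> next hop.
routing_table = {
    ipaddress.ip_network("0.0.0.0/0"):       "default-uplink",
    ipaddress.ip_network("203.0.113.0/24"):  "foo",
    ipaddress.ip_network("203.0.113.64/26"): "bar",
}

def next_hop(dst):
    """Pick the most specific (longest) matching prefix for dst."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in routing_table if addr in net),
               key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("203.0.113.70"))  # /26 beats /24 -> "bar"
print(next_hop("203.0.113.10"))  # only /24 matches -> "foo"
print(next_hop("8.8.8.8"))       # nothing specific -> "default-uplink"
```

If a backbone rewrites its table so traffic goes to foo instead of bar, every later packet simply follows the new entry; nothing in IP itself records or verifies the change.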

It is much easier to fight for one's ideals than to live up to them.

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #17 on Tue 18 Mar 2008 12:35 AM (UTC)
Message
Quote:
If a backbone somewhere decides that it will send you to foo instead of bar from now on, that will be the new route. It's one of those things where you just send something out and hope it gets to the correct place.

In fact, that is exactly what happened about a month ago when a Pakistani ISP tried to block YouTube. :P They routed YouTube's addresses to a site they ran -- but they broadcast that routing information to the whole world, not just their own clients... and so the whole world tried to contact a Pakistani website thinking it was YouTube. As a result, a huge chunk of the Pakistani network was shut down: partly because it couldn't handle the huge volume, and partly because an upstream ISP realized it had to stop propagating the bad routing information until it could figure out how to filter it.

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Shadowfyr   USA  (1,790 posts)  Bio
Date Reply #18 on Tue 18 Mar 2008 07:56 PM (UTC)

Amended on Tue 18 Mar 2008 08:01 PM (UTC) by Shadowfyr

Message
The stupid thing is that routing information was added to the spec later on, as businesses got involved and concluded it was too inefficient to let the network find its own path via wide-area broadcasts. The original design was intended to be bulletproof, so that if some key router failed, everything else found a way around it. Fixing it to be more efficient actually broke its stability.

Oh, and actually David, I thought saying TCP/IP traceroute was clear enough. I hadn't realized there was something called IP traceroute, so I didn't realize it might be confusing. That's especially true since, although the program in question is the best known, there are several such programs around.

Posted by Shaun Biggs   USA  (644 posts)  Bio
Date Reply #19 on Tue 18 Mar 2008 08:37 PM (UTC)
Message
Quote:
was too inefficient to let the network find its own path via wide area broadcasts

Well... yes. Would you want to see Time Warner's traffic on the east coast of the U.S. use IP broadcasting every time someone brings up Google? Their backbones probably have that site at the top of the routing list. The only inefficiency with the routing tables is that people did not make them dynamic enough. There should be some failsafe that occasionally requests ping times to make sure the next servers down the line are all responding. I highly doubt that 99% of the backbones have only one way to get to any site.
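The failsafe being described could be as simple as keeping an ordered list of candidate next hops and probing them before use. A toy sketch with the probe simulated rather than sending real pings (the hop names are invented; real routers use protocols like BFD or routing keepalives rather than ad-hoc pings):

```python
def pick_next_hop(candidates, is_alive):
    """Return the first candidate next hop that still answers probes."""
    for hop in candidates:
        if is_alive(hop):
            return hop
    return None  # genuinely unreachable

# Simulated probe results; a real check would time pings/keepalives.
alive = {"primary": False, "backup": True, "last-resort": True}

print(pick_next_hop(["primary", "backup", "last-resort"], alive.get))
# primary is down, so traffic fails over to "backup"
```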

It is much easier to fight for one's ideals than to live up to them.

Posted by Shaun Biggs   USA  (644 posts)  Bio
Date Reply #20 on Tue 18 Mar 2008 08:47 PM (UTC)
Message
Oh bugger, I forgot to reply to the original question on the forum. There are online whois and ping sites that will show IP addresses. These are useful if you get a weird or inconclusive result on your desktop.
http://centralops.net/co/
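For a quick local check before reaching for those sites, the standard socket library can do the lookup too. A small sketch (results depend on your resolver, so only `localhost` is really predictable):

```python
import socket

def resolve(host):
    """Return the distinct IPv4 addresses a hostname resolves to."""
    infos = socket.getaddrinfo(host, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # for IPv4, sockaddr is an (address, port) pair.
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # usually ['127.0.0.1']
```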

It is much easier to fight for one's ideals than to live up to them.

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #21 on Tue 18 Mar 2008 11:10 PM (UTC)
Message
Quote:
Stupid thing being that routing information was added to the spec later on, as businesses got involved, and concluded it was too inefficient to let the network find its own path via wide area broadcasts.

I don't think we are talking about the same thing. I was talking about publishing prefix-to-IP mappings for the CIDR system, not a map of hops to follow.

I just get a little edgy when you label these decisions "stupid" and so forth, because it's a very complex system and wasn't really designed for its current usage; networking in general is a remarkably complex beast. It's not exactly fair to label the people who work on it "stupid" just because it doesn't do exactly what you want it to do. :-/

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Shadowfyr   USA  (1,790 posts)  Bio
Date Reply #22 on Thu 20 Mar 2008 02:33 AM (UTC)

Amended on Thu 20 Mar 2008 02:42 AM (UTC) by Shadowfyr

Message
Yes, we are talking about different things. I was **not** referring to your post about adding actual path data to the ping/tracert.

As for "stupid": it's not stupid to want to make the system more efficient. It ***is*** stupid how they, all too often, set up the systems that do that so that no alternate route is allowed unless some massive delay happens and *other* systems inform their routers that they can get there some other way. That is the key problem with the system as it stands.

I have had one or two occasions where the direct path via Nevada failed, due to flooding, and I couldn't get to some place in California, *but* I could connect via the Phoenix network path to a place that was physically less than 50 miles from the first one, also in California. That *is* stupid. And, to make it even dumber, nothing in that chain was smart enough to find an alternate path for anything at all that went via Nevada. I find this quite ridiculous.

It's like your own phone being unable to call your neighbor's phone because someone a block away hit a phone pole, yet somehow you can both call in to complain to the phone company about the problem. If you can both reach a central location from both paths, not being able to get to anything on the other path is... incomprehensible, unless the reason is that someone has been busy hardwiring the pathing information into the system in such a fashion that no alternates *can* be found. What term would you prefer I use to describe this overzealous optimizing? lol

EDIT- Normal problems resolve, even in bad cases, within minutes, or maybe hours. The case I ran into here lasted for two days. Literally half the country was cut off from me, despite the fact that half the places I *could* get to were in the same states, or even sometimes the same cities, as the ones I *couldn't*. Oh, and in one case, Northern California was lost to me for 3-4 months due to work being done in LA. One could get to Oregon, Washington, etc. from both me and from there, or from those states to N. California, but not from me to there. I.e., a route "existed" which still linked the places I could get to with the ones I couldn't, but the routers on both paths were "optimized" to *ignore* the solution, since key locations had rules like, "If you are trying to get to IPs X through Y, always take path Q, never Z."
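The complaint boils down to this: the link graph still contains a working path, but a pinned rule never consults it. A toy illustration (node names invented; a plain search over the links that are up finds the detour the hard-wired rule ignores):

```python
# Links that are actually up: the Nevada path is flooded out,
# but me -> phoenix -> california still physically exists.
links = {
    "me":         ["nevada", "phoenix"],
    "nevada":     [],             # flooded; its onward links are down
    "phoenix":    ["california"],
    "california": [],
}

def reachable(src, dst):
    """Breadth-first search over whatever links are up."""
    seen, queue = {src}, [src]
    while queue:
        node = queue.pop(0)
        if node == dst:
            return True
        for nbr in links.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

print(reachable("me", "california"))      # True: the detour exists
# But a rule of "traffic for California always goes via nevada"
# only ever asks whether *nevada* can deliver it:
print(reachable("nevada", "california"))  # False: the pinned route fails
```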

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #23 on Thu 20 Mar 2008 04:51 AM (UTC)
Message
Maybe you should redesign the Internet and make it work better, then. :-) I'm sure it can't be all that difficult. . .

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Shadowfyr   USA  (1,790 posts)  Bio
Date Reply #24 on Fri 21 Mar 2008 02:44 AM (UTC)
Message
Well, then you run into the problem MS is having with IE8. lol Seriously though, someone is already doing that. One supposes they know the kinds of mistakes made in the existing implementation and have come up with ideas to address them.

Oh, and to be clear, I didn't mean the people that developed any of it were stupid. Too interested in pragmatic solutions to short-term problems, instead of thinking of the long-term impact, maybe. But a lot of people fall for that, and not everyone can, does, or has been taught to look for possible problems.

I have a few times gotten in trouble for pointing out what I considered obvious possible problems in an implementation of something. One recent case involved some code where the problem would have been real in an environment like MUSHclient, but the environment it actually ran in parsed the data passing between the functions. I.e., the data was "changed" by the C++ between calls, so the code did work, despite the fact that, not knowing that, there was no way in hell it should have.

I look for possible problems, and try to think of what the solution might need to be to prevent disaster. A lot of coders, even experts, look only towards a) the deadline, b) compromises, and c) code that is functional when nothing goes wrong. I find that way of looking at code difficult to comprehend when anyone but myself has to rely on it.

I suppose I am a standards person in that respect. For an interesting look at the mess you get when people set standards but provide no way to test against them, so pragmatists get involved, and then someone comes along and decides to enforce standards again, there is this article:

http://www.joelonsoftware.com/items/2008/03/17.html

They would have been better off, imho, rewriting the standard entirely, like they discussed with the new internet standards. I can understand both sides, to an extent, but if you can't provide something to test against, your own standard had bloody well better include a pragmatic approach in its design. ;)

The dates and times for posts above are shown in Universal Co-ordinated Time (UTC).




This is page 2; the subject is 2 pages long.





Information and images on this site are licensed under the Creative Commons Attribution 3.0 Australia License unless stated otherwise.