A Plan for Improving Escargot Infrastructure: Distributing Frontends to Different Servers


#1

Today, I had a thought that stuck with me for a while. It started with the Yahoo! frontend, which I made bind to the fallback ports, all except for port 80, which the server was already using for HTTP communication. From there it branched out to overall network traffic and how it’d make sense to balance it out somehow. Both trains of thought led to the same idea: finally distributing Escargot onto several servers.

How it’d work is something like this. Let’s say we have 16 servers, labeled A - P, each with certain frontends enabled and others disabled:

  • Servers A - E host a dispatch server for the MSN frontend.
  • Servers F - I host notifications servers for the MSN frontend, but Servers G and I also host the Yahoo! frontend.
  • Servers J - P host the necessary switchboards for the MSN frontend.

When you connect to m1.escargot.log1p.xyz on the MSN port, for instance, you’ll be taken to one of Servers A - E. During the authentication process, that dispatch server will transfer you to one of Servers F - I, the notification servers listed earlier. If the notification server you land on isn’t full, you’ll be able to continue the authentication process and log on to MSN. Then, when you request a switchboard server to initiate a chat session, the IP of one of Servers J - P will be sent in reply.
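To make the flow above concrete, here’s a minimal sketch of the dispatch-side logic in Python: picking a non-full notification server during authentication and handing out a switchboard address on request. The hostnames, capacity limit, and cookie are placeholders, and the `XFR`-style reply format is only an approximation of what MSNP clients expect (the exact syntax varies by protocol version); this is not Escargot’s actual code.

```python
import random

# Hypothetical server pools for the 16-server example above
# (illustrative hostnames, not real Escargot machines).
NOTIFICATION_SERVERS = [
    ("f.escargot.example", 1863),
    ("g.escargot.example", 1863),
    ("h.escargot.example", 1863),
    ("i.escargot.example", 1863),
]
SWITCHBOARD_SERVERS = [
    ("j.escargot.example", 1863),
    ("k.escargot.example", 1863),
    # ... Servers L - P would follow
]

# Assumed per-server session cap so the dispatcher can skip full hosts.
MAX_SESSIONS = 1000
session_counts = {host: 0 for host, _ in NOTIFICATION_SERVERS}

def pick_notification_server():
    """Return (host, port) of a notification server with free capacity."""
    candidates = [
        (host, port) for host, port in NOTIFICATION_SERVERS
        if session_counts[host] < MAX_SESSIONS
    ]
    if not candidates:
        raise RuntimeError("all notification servers are full")
    return random.choice(candidates)

def xfr_notification_reply(trid: int) -> str:
    """Build an XFR-style reply sending the client from the dispatch
    server to a notification server during authentication."""
    host, port = pick_notification_server()
    return f"XFR {trid} NS {host}:{port}"

def xfr_switchboard_reply(trid: int) -> str:
    """Build an XFR-style reply handing out a switchboard address
    (plus an auth cookie) when the client starts a chat session."""
    host, port = random.choice(SWITCHBOARD_SERVERS)
    return f"XFR {trid} SB {host}:{port} CKI example-cookie"
```

In a real deployment the `session_counts` would have to come from the notification servers themselves (heartbeats or a shared store), since the dispatch servers can’t see their load locally.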

If you connect on the Yahoo! port, you’ll be taken directly to either Server G or Server I, which will let you authenticate to the Yahoo! service, see your contacts’ presence, and talk to them in that one session.

As for HTTP servers, those will be separate from the frontend-specific servers and act independently, though they will still host services for specific frontends.

Now that the plan’s been laid out, there’s the matter of federating things like user presence, messages, and session data (e.g., authentication tokens). I don’t exactly know how to do this yet, as I haven’t had much experience with networking, but rest assured, either I or @valtron will find something that suits our federation needs. There’s also switching database engines: SQLite wasn’t intended for remote use, and all of these servers will need a way to access user data in one central location. Either way, it probably won’t keep up with Escargot’s slowly-but-surely growing popularity (@valtron told me that Escargot’s SQLite database file is already 150 MB!).
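As a rough illustration of the presence-federation problem, here’s a sketch where each notification server publishes sign-ins to a shared store that every other server can read. The in-memory dict stands in for whatever networked store gets chosen (Redis, a central SQL database, etc. - nothing has been decided); the key layout and the MSNP status codes used here are assumptions for the example.

```python
# Shared store stand-in: in production this would be a networked
# service reachable by all frontends, not a local dict.
shared_store = {}  # key: "presence:<user>", value: (status, server_name)

def publish_presence(server: str, user: str, status: str) -> None:
    """Called by whichever notification server the user logged on to."""
    shared_store[f"presence:{user}"] = (status, server)

def lookup_presence(user: str):
    """Any server can answer a contact's presence from the shared store.
    Unknown users default to FLN (MSNP's 'offline')."""
    return shared_store.get(f"presence:{user}", ("FLN", None))

# Server G reports a sign-in; Servers F, H, and I can now see it.
publish_presence("G", "someone@example.com", "NLN")  # NLN = online
```

The same pattern would extend to session tokens and message routing: the publishing server writes, every other server reads, and the store is the single source of truth instead of one process’s memory.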

When we get around to implementing this kind of architecture, it’ll be worth the hard work: it’d improve Escargot’s performance a lot, and lower the amount of booting we have to endure. :stuck_out_tongue:


#2

Nice. Will Windows XP be fixed so you don’t have to click “log in” over and over until you finally connect?


#3

This sounds like an awesome idea to implement into Escargot! I wonder if a Raspberry Pi could help reduce traffic in areas where it’s more intensive, or just help balance out the flow of input. (For example: I set up a Raspberry Pi running whatever code you use, put it in Missouri, and you help me connect it to said frontend so traffic from Missouri gets redirected there and doesn’t overflow the main frontend? IDK, it’s just an idea. I also think the Raspberry Pi has a 1 Gb Ethernet port I could use, and I could build a cluster for under $100 to help boost the amount it could handle/compute.)


#4

Did you send this as a request on msn-server?


#5

It’s cool, but isn’t it too ambitious?


#6

I e-mailed valtron directly. :stuck_out_tongue:


#7

We don’t plan to make this a thing at the moment, but it is something we’re looking forward to implementing. We already have a major thing in the works (WLM 2009), so once that’s finally done, we’ll look into distributing the architecture.


#8

@Jarhead_Gamer38 Unfortunately, no, this won’t solve that issue, as it’s a combination of the HTTPS setup Escargot utilizes and the fact that IE defaults to SSL, I think. That’ll have to be dealt with separately.


#9

That error doesn’t always appear for me on XP when I use WLM :stuck_out_tongue:


#10

It’s complex to do, and a bit more so on a server that’s already running, but in the end it’s worth it: multiple servers can distribute resources more efficiently and handle things more easily. You can also set up backup servers, so if one fails there’s no problem continuing to use the service. Hard, but not impossible. :wink:


#11

Do we really need to scale right now, though?


#12

Usually we have people in the thousands at this time, and sometimes the server either delivers messages poorly or doesn’t accept new connections. Also, with the introduction of new frontends, there’s the possibility of conflicting ports, which is why I thought it’d make sense to consider distributing servers sooner. That said, this isn’t going to happen right away: after the two major updates are deployed (WLM 2009 support and the Yahoo! Messenger frontend), @valtron and I will think it over.


#13

Keep it up!!! :wink:


#14

:wink:


#15

Will that make WLM 2009 easier to work with?


#16

No. This has nothing to do with the development of WLM 2009 support. Distributing Escargot across several servers won’t magically make WLM 2009 support appear; that requires its own hard work and attention to detail, which takes a lot of time, especially in WLM 2009’s case.


#17

fucking told him he fell for it on discord when i saw it


#18

If you do make this happen, I think one of the first servers should be in the Midwest, to equalize the speed and latency of messages; a West Coast server would benefit the West Coast but wouldn’t really help the East Coast. IDK, just an idea to consider, if it’d make a difference (for the first extra server in the U.S.). Anyway, I’m happy people are on board with this idea. Good luck! (I’d like to help if possible, but sadly I’m a novice, so yeah.)