SockJS and Meteor: what if the load balancer does not support sticky sessions?


I'm exploring load-balancing options for Meteor. This article looks cool and says the following should be supported in order to load balance Meteor:

  1. Mongo oplog tailing. Otherwise it may take up to ten seconds for one Meteor instance to see an update made through another, because without the oplog the polling Mongo driver is used, which polls-and-diffs the DB every ten seconds.
  2. WebSockets. This one is clear: otherwise clients fall back to HTTP and long polling, which works, but is not as nice as WebSockets.
  3. Sticky sessions, 'which are required by SockJS'. Here the question comes:

As I understand it, 'sticky sessions support' means the load balancer assigns one client to the same server for the duration of its session. Is this essential? What may happen if I don't configure sticky sessions at all?

Here's what I came up with myself:

  1. Because Meteor stores the data it has sent to each client in memory, if a client connects to X servers, X times more memory is consumed.
  2. Some minor (or major, if there is no oplog) lag may appear for the same user in, say, different tabs or windows, which may be surprising.
  3. If SockJS reconnects and wants data to persist across reconnections, it's going to have a bad time. I'm not sure how SockJS works though, so is this point valid?

What else bad can happen? These three points don't look that bad: the data stays valid and available, just maybe at the cost of extra memory consumption.

Basics

Sticky sessions are required to ensure that the browser's in-memory session can be managed correctly by the server.

First let me explain why you need sticky sessions:

Every publication that uses an ordinary publish cursor keeps track of whatever documents from a collection the client may have, so that when something changes the server knows what to send down to the client. This applies to every Meteor app that needs a DDP connection, which is the case for both WebSockets and SockJS.
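To illustrate, here is a minimal cursor-based publication sketch (the `Posts` collection and the `myPosts` name are my own assumptions, not from the original answer); the important part is that the per-client bookkeeping it relies on lives in the memory of the node the client is connected to:

```js
// Server-side sketch of an ordinary cursor-based publication.
// Meteor's merge box remembers, per DDP session, which `posts` documents
// this client already has, so that only added/changed/removed messages
// are sent later. That bookkeeping exists only on this node.
const Posts = new Mongo.Collection('posts');

Meteor.publish('myPosts', function () {
  return Posts.find({ userId: this.userId });
});
```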

Additionally, there may be other client session state stored in server-side variables for edge cases (e.g. storing a user's state in a variable).
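As a hypothetical example of that kind of state (the variable and method names here are mine, purely for illustration), per-connection data kept in a plain server-side variable is lost the moment requests start hitting a different node:

```js
// Server-side sketch: per-connection state kept in an ordinary variable.
// This object lives only in the memory of the node that accepted the
// connection; any other node knows nothing about it.
const connectionState = {};

Meteor.onConnection(function (connection) {
  connectionState[connection.id] = { lastSearch: null };
  connection.onClose(function () {
    delete connectionState[connection.id];
  });
});

Meteor.methods({
  rememberSearch: function (term) {
    check(term, String);
    // Assumes this method call arrives on the same node that registered
    // the connection above; without sticky sessions that is not guaranteed.
    connectionState[this.connection.id].lastSearch = term;
  }
});
```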

The problem happens when the client disconnects and reconnects, or the connection somehow gets transferred to another node (without a new DDP connection being re-established): that node has no idea about the client's data, so the behaviour turns a bit weird.

The issue with SockJS & long polling

With SockJS there is an additional issue. SockJS uses WebSocket emulation when it falls back to long polling.

With long polling, a new connection attempt / new HTTP request is made every time new data is available.

If sticky sessions are not enabled, each of these connections can be randomly assigned to a different node/dyno.

So you have a 50% chance (in the two-node case, with random assignment) that the server handling the request has no idea about the client's DDP session, every single time new data is available.
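To make those odds concrete, here is a tiny toy simulation (plain JavaScript, not Meteor code; the node count is made up): with N nodes and purely random routing, only about 1/N of the long-poll requests reach the node that actually holds the session.

```js
// Toy simulation: random routing of long-poll requests across nodes.
// The DDP session lives on node 0; every request goes to a random node.
const NODES = 2;          // two app servers, matching the 50% example above
const REQUESTS = 10000;

let hits = 0;
for (let i = 0; i < REQUESTS; i++) {
  const routedTo = Math.floor(Math.random() * NODES);
  if (routedTo === 0) hits++;   // the request reached the right node
}
console.log((100 * hits / REQUESTS).toFixed(1) + '% of requests hit the right node');
```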

This will force the client to re-negotiate the connection, or the server will simply ignore the client's DDP commands, and you end up with weird behaviour on the client.

Half of these requests hit the wrong node:

[diagram from the original answer: long-polling requests split between two nodes, with half reaching a node that does not know the client's session]
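On the client this shows up as the connection constantly dropping and retrying. A minimal client-side sketch (ordinary Meteor client code, my own illustration, not from the original answer) lets you watch it happen:

```js
// Client-side sketch: log every change of the DDP connection status.
// Without sticky sessions and with the long-polling fallback, you would
// see the status bouncing between 'connected', 'waiting' and 'connecting'.
Tracker.autorun(function () {
  const status = Meteor.status();
  console.log('DDP status:', status.status, 'retry count:', status.retryCount);
});
```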

