System requirements of a write-heavy application serving hundreds of requests per second
Posted by Rolando Cruz on Server Fault, published on 2012-07-27T10:31:06Z
NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers.
I am about to write a web-based attendance system for a very large userbase. I expect around 1,000 to 1,500 users logged in at the same time, each making at least 1 request every 10 seconds, for a span of 30 minutes a day, 3 times a week. So that's roughly 100 to 150 requests per second on average, or at the very worst up to 1,000 requests in a single second, since the timeframe in which users make these requests is short (crosses fingers to avoid 100 concurrent requests).
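To make the load figures above concrete, here is a minimal back-of-the-envelope sketch using the numbers stated in the question (1,000-1,500 concurrent users, one request per user every 10 seconds); the function name is just illustrative:

```python
# Back-of-the-envelope average request rate: if every one of N concurrent
# users fires one request every T seconds, the steady-state average rate
# is N / T requests per second.

def avg_requests_per_second(concurrent_users: int, seconds_between_requests: int) -> float:
    """Average request rate under a fixed per-user request interval."""
    return concurrent_users / seconds_between_requests

low = avg_requests_per_second(1000, 10)    # 100.0 req/s
high = avg_requests_per_second(1500, 10)   # 150.0 req/s
print(low, high)
```

Note that this is a steady-state average; if users cluster their requests (e.g. everyone checks in during the first minute), the instantaneous peak can be far higher, which is where the 1,000-requests-in-a-second worst case comes from.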
I expect two types of transactions: a local (not referring to a local network) and a foreign transaction. Local transactions basically download user data in their locality and cache it for 1 to 2 weeks. Attendance requests will probably carry two numeric strings only: userid and eventid. Foreign transactions are for attendance of users who do not belong to the current locality. These will pass the following data instead: (numeric) locality_id, (string) full_name. Both requests are done in Ajax, so no HTML data is included, only JSON. Both types of request expect at the very least a single numeric response from the server. I think there will be a 50-50 split in frequency between local and foreign transactions, but there's only a few bytes of difference in the sizes of these transactions anyway.
As of this moment the userid may only reach 6 digits, and eventid values are 4- to 5-digit integers. I expect my users table to have at least 400k rows, the event table to have as many as 10k rows, a locality table with at least 1,500 rows, and my main attendance table to grow by 400k rows a day (based on the number of users in the users table), 3 days a week (1.2M rows a week).
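The growth figures above can be projected forward with simple arithmetic; the 50-bytes-per-row figure is an assumption (two small integer columns plus typical InnoDB row and index overhead), not a measured value:

```python
# Attendance-table growth projection from the stated figures:
# 400k rows/day, 3 days/week.

rows_per_day = 400_000
days_per_week = 3

rows_per_week = rows_per_day * days_per_week   # 1,200,000 rows/week
rows_per_year = rows_per_week * 52             # 62,400,000 rows/year

bytes_per_row = 50                             # assumed, incl. index overhead
yearly_gib = rows_per_year * bytes_per_row / 1024**3

print(rows_per_week, rows_per_year, round(yearly_gib, 1))  # ~2.9 GiB/year
```

On those assumptions the table adds roughly 60M rows and a few GiB per year, which is well within what a single properly indexed MySQL instance can store; the harder question is the burst write rate, not the total volume.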
For me, this sounds big. But is it really that big? Or can it be handled by a single server (not sure about the server specs yet, since I'll probably avail of a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave replication. But I wonder if they're really necessary; the users table will only grow by around 500 to 1k rows a week.
If this can't be handled by a single server, and I am to choose a MySQL replication topology, what would be the best setup for this case? Sorry if I sound vague or the question is too broad; I just don't know what to ask or what you need to know at this point.
© Server Fault or respective owner