We are using an Oracle 11g machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told.
We just got this DB for a tool that had about 20 people using it when I first came on as a co-op; now it's upwards of 150 people. I am the only one working on it :(
We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing capability.
The Perl scripts run a sort of simulation and report the results back to the database via selects and inserts. The load from each individual script is not very high, but it can be happening across 20-50 systems at the same time.
We then have multiple data centers and users all hitting the same database with this same approach.
Our main problem with this is that the database is getting overloaded with connections and has to start dropping some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well: essentially they fail and the results are lost. I would rather avoid rewriting a lot of them, as they are poorly written and a headache to even look at.
The database itself is not overloaded; the connection overhead is just too high. Each script opens a connection, makes a quick query, and then drops the connection. Very short-lived connections, but a lot of them. The database team has basically said we need to lower the number of connections or they are going to ignore us.
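To give a sense of the pattern, each script's database interaction boils down to something like the sketch below (the DSN, table, and column names are made up for illustration, not our real ones):

```perl
#!/usr/bin/perl
# Rough illustration of what each script currently does; the DSN, table,
# and column names are placeholders, not the real ones.
use strict;
use warnings;
use DBI;

# Open a brand-new connection just for this one round trip.
my $dbh = DBI->connect(
    'dbi:Oracle:host=dbhost;sid=ORCL;port=1521',
    'sim_user', 'sim_pass',
    { RaiseError => 1, AutoCommit => 1 },
);

# One quick insert of the simulation result, then we're done.
my $sth = $dbh->prepare(
    'INSERT INTO sim_results (run_id, host, score) VALUES (?, ?, ?)'
);
chomp(my $host = `hostname`);
$sth->execute($ARGV[0], $host, $ARGV[1]);

# The connection is dropped immediately; the next script opens a new one.
$dbh->disconnect;
```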
Because this is distributed across our farm, we can't implement persistent connections. I do that with our web server, but it runs on a fixed system. The others are Perl scripts that get started and stopped by the distribution tool and thus aren't always running.
What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to become available; they do not need to act immediately. Some sort of queuing system?
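For example, since the scripts can afford to wait, would a simple retry-with-backoff wrapper around the connect be enough to stop results from being lost when the database refuses a connection? A minimal sketch of what I mean (the retry limit and sleep times are just guesses):

```perl
use strict;
use warnings;
use DBI;

# Keep retrying the connect instead of failing outright and losing the
# result. The retry limit and backoff values are guesses, not tuned numbers.
sub connect_with_retry {
    my ($dsn, $user, $pass, $max_tries) = @_;
    $max_tries ||= 20;
    my $wait = 2;    # seconds; doubles each attempt, capped below

    for my $try (1 .. $max_tries) {
        my $dbh = DBI->connect($dsn, $user, $pass,
            { RaiseError => 0, PrintError => 0, AutoCommit => 1 });
        return $dbh if $dbh;

        warn "connect attempt $try failed: $DBI::errstr; retrying in ${wait}s\n";
        sleep $wait;
        $wait = $wait * 2 > 60 ? 60 : $wait * 2;    # exponential backoff, capped at 60s
    }
    die "could not get a database connection after $max_tries attempts\n";
}
```

That would smooth out spikes, but it wouldn't actually lower the total number of connections, which is what the database team is asking for.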
It has been suggested that I set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need?
We could have one per data center and relay requests through it to our main database, keeping a pool of open, persistent connections. Does this make sense?
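My (untested) understanding is that SQL Relay holds a pool of persistent connections to the real database, the scripts talk to the local relay instead via a DBI driver (DBD::SQLRelay), and so the scripts themselves would mostly just change their DSN. Roughly like this, though the exact DSN parameters are an assumption on my part and would need to be checked against the SQL Relay documentation:

```perl
use strict;
use warnings;
use DBI;

# Instead of every script opening its own connection straight to Oracle
# across the WAN, point it at a relay/pool running in the same data center.
# The relay keeps a fixed set of persistent connections to the real database.
#
# NOTE: the DSN below is my guess at what DBD::SQLRelay expects; check its
# documentation before relying on this.
my $dbh = DBI->connect(
    'DBI:SQLRelay:host=relay.local;port=9000',    # local relay, not Oracle itself
    'sim_user', 'sim_pass',
    { RaiseError => 1, AutoCommit => 1 },
);

# The rest of the script stays the same: prepare/execute/disconnect as before.
my $sth = $dbh->prepare('SELECT 1 FROM dual');
$sth->execute;
$dbh->disconnect;
```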
Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated.
Sadly, I am just a co-op student working for a very big company, and somehow all of this has landed on my shoulders. There is literally nobody to ask for help: it's a hardware company, everyone is a hardware engineer, and the database team is in India and hasn't been much help. I am quite lost as to what the best approach would be.
I am extremely overworked, and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible, preferably without rewriting the whole system, purchasing hardware (not going to happen), or shooting myself in the foot.
HELP LOL!