Search Results

Search found 9715 results on 389 pages for 'servers'.


  • What is the preferred way to update database schemas in multiple production environments

    - by rmarimon
    I am about to install some 20 servers with the same web application in multiple locations, each connected to its own local database. I will be updating the web applications remotely (perhaps using Debian's package manager) and I'm sure I will eventually need to update the database schemas. Since each server could eventually be running a different release of the web application, I need a way to apply incremental changes to the servers.

    I'm thinking of something like this. Let's start with database.schema.1 as the original release of the database and assume this number increases with each new version of the schema. I could eventually end up with database.schema.17 as the current release; for a new installation this would be the schema to install. It seems to me that I would need consecutive translations like database.translation.1.2, which would convert database.schema.1 into database.schema.2, database.translation.2.3 to convert from 2 to 3, and so on up to 17. Whenever I change a schema I need to alter the database, but I may also need to run some script to update the data, which might be done in SQL but might require an external non-SQL script.

    What is the appropriate way to organize all these files? What is the automatic way to apply those upgrades to the schema? Where do I store the current version number of the schema?
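
    A minimal sketch of the numbered-upgrade idea above, assuming the current version lives in a one-row schema_version table and upgrades ship as numbered files; the table, paths and file names are illustrative, not part of the question:

        <?php
        // Assumes: CREATE TABLE schema_version (version INT NOT NULL);
        // and upgrade files named upgrades/upgrade-N.sql (one ALTER per file
        // keeps this simple) plus optional upgrades/upgrade-N.php for data
        // fix-ups that need real logic. All names are hypothetical.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $current = (int) $pdo->query('SELECT version FROM schema_version')->fetchColumn();
        $target = 17; // latest schema release shipped with this package

        for ($v = $current + 1; $v <= $target; $v++) {
            $sql = "upgrades/upgrade-$v.sql";
            $php = "upgrades/upgrade-$v.php";
            if (file_exists($sql)) {
                $pdo->exec(file_get_contents($sql)); // DDL change
            }
            if (file_exists($php)) {
                require $php;                        // data migration logic
            }
            $pdo->exec("UPDATE schema_version SET version = $v");
        }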

    Read the article

  • PHP import to MySQL hosted on GoDaddy

    - by julio
    Yeah, I know! It's not my choice. I am doing a large data import using a PHP script into a MySQL DB hosted on GoDaddy. It seems their MySQL connection gets killed every few hours regardless of what work it's doing. Their tech support is useless, and I've exhausted myself writing attempted workarounds.

    Right now, I'm trying to do a mysql_ping every few minutes, and if the ping returns false, I attempt to open a new DB connection. My script (which takes many hours to complete) keeps failing with the very unhelpful message "MySQL server has gone away". I understand MySQL closing a connection that's been open too long, but the connection is not idle: it's busy basically the whole time, and with the pings I've written in, it should never be idle for longer than 5 minutes. (These same scripts work with no errors on Amazon AWS servers, my local servers, etc.) Any help most appreciated! I'm about to give up.
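
    A sketch of the ping-and-reconnect pattern described above (old mysql_* API, to match the mysql_ping mentioned; host and credentials are placeholders). Worth noting: "MySQL server has gone away" is also commonly caused by a single query larger than the server's max_allowed_packet, which no amount of pinging or reconnecting will fix.

        <?php
        // Hedged sketch: keep a link alive across a long-running import.
        function ensure_connection(&$link) {
            if ($link && @mysql_ping($link)) {
                return $link;                 // connection still alive
            }
            if ($link) {
                @mysql_close($link);
            }
            // fourth argument forces a genuinely new link, not a cached one
            $link = mysql_connect('dbhost.example.com', 'user', 'pass', true);
            mysql_select_db('import_db', $link);
            return $link;
        }

        // inside the long-running import loop:
        //   ensure_connection($link);
        //   mysql_query($chunk_sql, $link);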

    Read the article

  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL: one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server) and another a couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to replicate data between both servers continuously, but have run into several roadblocks.

    One of them is that under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens to a meaningful block of data once a month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate).

    The other, most important (and annoying) recurring issue is that when for some reason we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work and I have to manually dump on one end and upload on the other, quite a task nowadays: moving some 0.5 TB worth of data.

    Is there software for this? I know MySQL (the corporation) offers this as a VERY expensive service (full database replication). I am just wondering what people out there do. The way it's structured, we run an automatic failover where if one server is not up, the main URL simply resolves to the other server.
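
    For reference, the usual manual re-seed that replaced LOAD DATA FROM MASTER (the statement was deprecated and later removed from MySQL) looks roughly like this; host names and credentials are placeholders:

        # Hedged sketch: seed the mirror from a consistent dump that records
        # the master's binlog coordinates, then point replication at them.
        mysqldump -h main-server -u repl_admin -p \
            --single-transaction --master-data=2 --all-databases > seed.sql
        mysql -h mirror -u root -p < seed.sql
        # Then, on the mirror, use the coordinates written into seed.sql:
        #   CHANGE MASTER TO MASTER_HOST='main-server',
        #                    MASTER_LOG_FILE='mysql-bin.000123',
        #                    MASTER_LOG_POS=456789;
        #   START SLAVE;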

    Read the article

  • Apache and multiple tomcats proxy

    - by Sebb77
    I have one Apache server and two Tomcat servers with two different applications. I want to use Apache as a proxy so that the user can access the applications from the same URL, using different paths, e.g.:

        localhost/app1 --> localhost:8080/app1
        localhost/app2 --> localhost:8181/app2

    I tried all three Apache proxying approaches (mod_jk, mod_proxy_http and mod_proxy_ajp), but the first application works whilst the second is not accessible. This is the Apache configuration I'm using:

        ProxyPassMatch ^(/.*\.gif)$ !
        ProxyPassMatch ^(/.*\.css)$ !
        ProxyPassMatch ^(/.*\.png)$ !
        ProxyPassMatch ^(/.*\.js)$ !
        ProxyPassMatch ^(/.*\.jpeg)$ !
        ProxyPassMatch ^(/.*\.jpg)$ !
        ProxyRequests Off
        ProxyPass /app1 ajp://localhost:8009/
        ProxyPassReverse /app1 ajp://localhost:8009/
        ProxyPass /app2 ajp://localhost:8909/
        ProxyPassReverse /app2 ajp://localhost:8909/

    With the above, I manage to view the Tomcat root application using localhost/app1, but I get "Service Temporarily Unavailable" (an Apache error) when accessing app2. I need to keep the Tomcat servers separate because I need to restart one of the applications often, and it is not an option to put both apps on the same Tomcat. Can someone point out what I'm doing wrong? Thank you all.
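
    If the applications really are deployed under the /app1 and /app2 context paths on their Tomcats, and the second Tomcat's AJP connector really listens on 8909 (both assumptions), one mismatch worth checking is that the config above maps each prefix to the Tomcat root context rather than to the context path. A sketch of the path-preserving variant:

        # Hedged sketch: keep the context path on both sides of the proxy.
        ProxyPass        /app1 ajp://localhost:8009/app1
        ProxyPassReverse /app1 ajp://localhost:8009/app1
        ProxyPass        /app2 ajp://localhost:8909/app2
        ProxyPassReverse /app2 ajp://localhost:8909/app2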

    Read the article

  • Windows Azure - Automatic Load Balancing - partitioning

    - by veda
    I was going through some videos and found that Windows Azure groups blobs into partitions based on a partition key and automatically load-balances these partitions across its servers. The partition key for a blob is the blob name, so Azure partitions automatically by blob name.

    Now, my question is: can I make Azure partition based on the container name instead? I want my partition key to be the container name. For example, I have a storage account with two containers named container1 and container2. In container1 I have 1000 files named 1.txt, 2.txt, 3.txt, ..., 501.txt, 502.txt, ..., 999.txt, 1000.txt, and in container2 I have another 1000 files named 1001.txt, 1002.txt, 1003.txt, ..., 1501.txt, 1502.txt, ..., 1999.txt, 2000.txt.

    Will Windows Azure generate 2000 partitions based on the blob names and serve me through several servers? Wouldn't it be better if Azure partitioned based on the container name: container1 on one server and container2 on another?

    Read the article

  • Java - Problem deploying a Web Application

    - by Yatendra Goel
    I have built a Java web application, packed it in a .war file, and tested it on my local Tomcat server, where it runs fine. But when I deployed it on my client's server, it shows an error: according to the remote server (my client's server), it cannot find a .tld file packed in a JAR which I had placed in the WEB-INF/lib directory. But when I checked the WEB-INF/lib directory for the JAR file, I found that it was there. The contents of META-INF/MANIFEST.MF are as follows:

        Manifest-Version: 1.0
        Class-Path:

    I think there is no need to explicitly mention the classpath of the WEB-INF/lib directory, as it is on the classpath of any web application by default. Then why can't the server find the JAR file in the lib directory when I deploy on the remote server, and why does it work when I deploy the same application on my local server? I posted a question about this at http://stackoverflow.com/questions/2441254/struts-1-struts-taglib-jar-is-not-being-found-by-my-web-application but the problem seems unusual, as nobody could answer it. So my questions are as follows:

    Q1. Does WEB-INF/lib still remain on the classpath if I leave the Class-Path entry blank, as shown above in the MANIFEST.MF file? Or should I delete the Class-Path entry completely from the file, or explicitly enter "Class-Path: /WEB-INF/lib"?

    Q2. I have JSP pages, servlets and some helper classes in the web application. The JSP pages are located at the root; the servlets and helper classes are located in the WEB-INF/classes folder. Is there any problem with my helper classes being located in the WEB-INF/classes folder?

    Note: Please note that this question is not the same as my previous question; it is a follow-up to it. Both servers (local and remote) are Tomcat servers.

    Read the article

  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder-synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a file-upload page or a similar mechanism), such as images, PDF files, Word documents, etc. In the past, the user did this on our live server, and as a result our test and staging servers had to be synchronized manually. Going forward, they will upload content to the staging server, and we would like software to automatically copy the files off to the test and live servers, EITHER on a scheduled basis OR as the files get uploaded.

    I was planning on writing my own component, set up either as a scheduled task or using a FileSystemWatcher, but it occurred to me that this has probably already been done and I might be better off with an existing synchronization tool. On our web site there are a limited number of folders that we want to keep synchronized, and for these folders it is all or nothing: we want to make sure they are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we also want the software to log changes. (This rules out simple batch files.)

    So I'm curious: if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable and will meet our needs? If not, do you have a recommendation for something that comes close, or better yet, an open-source solution where we can get the code and modify it as needed (preferably .NET)?

    Added: Also, I DID google this first, but there are so many options that I'm interested mostly in knowing what actually works well versus what they SAY works, which is why I'm asking here.
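
    Since this is a Windows/.NET shop, one built-in candidate that covers the "exact duplicate plus a change log" requirement is robocopy, run as a scheduled task; a hedged example with placeholder paths:

        rem Mirror the staging uploads folder to the live server and append a
        rem log of everything copied or deleted. /MIR makes the target an
        rem exact duplicate of the source (it deletes extra files too).
        robocopy D:\staging\uploads \\liveserver\share\uploads /MIR /LOG+:D:\logs\sync.log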

    Read the article

  • IIS Flash Remoting Request Limit introduced with ColdFusion 9

    - by ciaranarcher
    Hi all. We've just upgraded our ColdFusion servers from Enterprise CF 8.0.1 to CF 9; they are running Windows 2008. We ran into trouble on the servers that provide the Flash Remoting back-end for a Flex application we provide. Once the CF 9 upgrade was complete, we noticed that during busy times, when many Flex clients were connecting, we appeared to have a hard limit of 25 running Flash Remoting requests, despite much higher limits (in fact 150) set in CF Admin.

    Initially we thought this was an issue with BlazeDS now being bundled with CF 9 (rather than a separate install), so we decided to roll back the CF 9 installation. This, unfortunately, didn't work, and we were still stuck with our hard limit of 25 Flash Remoting requests. Then, looking at IIS, we noticed that the CF 9 ISAPI filter was still installed (after we had run the Web Service Configuration part of the install). That was removed, the CF 8 one was re-run, and all of a sudden the Flash Remoting hard limit disappeared. So it seems it might have had something to do with the wsconfig of CF 9 (C:\ColdFusion9\runtime\bin\wsconfig.exe).

    Has anyone else had this problem, or does anybody know where these hard limits are configured in IIS? Any and all help appreciated!

    Read the article

  • Credit Card storage solution

    - by jtnire
    Hi everyone, I'm developing a solution designed to store membership details as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far:

    PAN = primary account number == the long number on the credit card.

    Server A is a remote server. It stores all membership details (names, addresses, etc.) and provides an individual Key A for each PAN stored.

    Server B is a local server; it actually holds the encrypted PANs as well as Key B, and does the decryption.

    To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, which it will have beforehand. Server B will probably have to send a salt first, though I don't think that has to be encrypted.

    I haven't really thought about any implementation (i.e. coding) specifics yet, but the solution uses Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred this way). The reason I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server...

    Can anyone see anything wrong with the above design? It doesn't matter if it has to be changed. Thanks, jtnire

    Read the article

  • jQuery: append a tr after the calling row

    - by marharépa
    Hello! I've got a table:

        <table id="servers" ...>
          ...
          {section name=i loop=$ownsites}
          <tr id="site_id_{$ownsites[i].id}">
            ...
            <td>{$ownsites[i].phone}</td>
            <td class="icon"><a id="{$ownsites[i].id}" onClick="return makedeleterow(this.getAttribute('id'));" ...></a></td>
          </tr>
          {/section}
          <tbody>
        </table>

    and this JavaScript:

        <script type="text/javascript">
        function makedeleterow(id) {
            $('#delete').remove();
            $('#servers').append($(document.createElement("tr")).attr({id: "delete"}));
            $('#delete').append($(document.createElement("td")).attr({colspan: "9", id: "deleter"}));
            // Hungarian: "Are you sure you want to delete this website?"
            $('#deleter').text('Biztosan törölni szeretnéd ezt a weblapod?');
            $('#deleter').append($(document.createElement("input")).attr({type: "submit", id: id, onClick: "return truedeleterow(this.getAttribute('id'))"}));
            $('#deleter').append($(document.createElement("input")).attr({type: "hidden", name: "website_del", value: id}));
        }
        </script>

    It works fine: it creates a TR after the table's last TR, puts the info in it, and the delete also works fine. But I'd like to create the row AFTER the TR that calls the script. How can I do this? I've tried it with closest(), but it doesn't work :(
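
    Since each row already carries id="site_id_{id}" in the template above, one way to insert after the calling row instead of appending to the table is jQuery's .after(); a sketch changing only the insertion line (the rest of the function can stay as it is):

        function makedeleterow(id) {
            $('#delete').remove();
            // Insert the confirmation row right after the row whose icon
            // was clicked, using the site_id_{id} ids from the template.
            $('#site_id_' + id)
                .after('<tr id="delete"><td colspan="9" id="deleter"></td></tr>');
            // ... then fill #deleter exactly as before ...
        }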

    Read the article

  • SQL Server slow in production environment

    - by Lieven Cardoen
    I have a weird problem in a production environment at a customer. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server, and the data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server).

    Now, there's this query that, when we run it, gives these durations (first we clear the cache): 300 ms, 20 ms, 15 ms, 17 ms, ... The first run takes longer, but from then on it is cached. At the customer, on a SQL Server that is more powerful, these are the durations (I didn't have the rights to clear the cache; I will try that tomorrow): 2500 ms, 2600 ms, 2400 ms, ...

    The query can be improved, that's right, but that's not the question here. How would you tackle this? I don't know where to go from here. The servers at this customer really are more powerful, but they are virtual servers (ours are not). What could be the cause: not enough memory? Fragmentation? Physical storage? I know it can be a lot of things, but maybe some of you have some information for me on how to go on with this issue...
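
    For reference, "clearing the cache" on SQL Server is normally done with these standard commands (to be used with care on a production box, since everything runs cold afterwards):

        CHECKPOINT;              -- flush dirty pages so the buffer drop is complete
        DBCC DROPCLEANBUFFERS;   -- empty the data cache, forcing cold reads from disk
        DBCC FREEPROCCACHE;      -- throw away compiled query plans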

    Read the article

  • Tool to migrate a repo with only read access?

    - by Corazu
    I have a Subversion repository located on our school servers, from a project my friends and I worked on this past semester. We want to take the project, polish it, and make it more of a portfolio item, and I'm not sure the repo will stay intact on the school servers (they usually wipe them after a few semesters). I don't have access to dump the repo, although I've sent an email to see if I can get one, which would be the best solution. I tried svnsync, but the school server doesn't support the replay command (probably turned off), so that won't work for me.

    Now, theoretically, wouldn't it be possible to (manually or programmatically) check out each revision of the repository and check it in to a new repository on our own server? I think it would work: there are only three of us, and there are no working-copy conflicts to worry about; we just want a copy of all the history in the repo in case we need to go back and look at it. That being said, before I go and reinvent a wheel by writing a script to do this: does something like it already exist? I figure there's already a tool for this, or that someone has written a script to do it.

    Read the article

  • Syncing project versions with Java

    - by Alexadar
    I have a problem: I need to sync versions of a project. Changes are made to the main project, which resides on a central ColdFusion server. There are about 30 remote ColdFusion servers that have to be synced with the latest version on the central server. The new sync application has to be written in Java! I have a direct link to each remote location. Mostly Windows OS and Windows tools are used.

    My idea: the synchronization is performed at the level of folders. I would like to avoid SVN and similar tools. The plan is to compare folders on the local server (the comparison is between the old and the new version), with NO communication with the remote servers, compute the difference between the versions, and send that difference to the remote machines. We will install Tomcat on every remote server; a "sync application" deployed on Tomcat will apply the differences, and each remote server will send back an answer about the success of the sync.

    Any kind of suggestion, or a completely new approach to this topic, is more than welcome. Thanks, best regards
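
    A hedged sketch of the local comparison step (Java 8, java.nio; class name and folder layout are illustrative assumptions): hash every file in the old and new release trees, then keep the paths whose hashes differ as the payload to ship to the remotes.

        import java.nio.file.*;
        import java.security.MessageDigest;
        import java.util.*;
        import java.util.stream.*;

        public class ReleaseDiff {

            static Map<String, String> hashTree(Path root) throws Exception {
                Map<String, String> hashes = new TreeMap<>();
                try (Stream<Path> paths = Files.walk(root)) {
                    List<Path> files = paths.filter(Files::isRegularFile)
                                            .collect(Collectors.toList());
                    for (Path p : files) {
                        MessageDigest md = MessageDigest.getInstance("SHA-256");
                        String digest = Base64.getEncoder()
                                .encodeToString(md.digest(Files.readAllBytes(p)));
                        // key by path relative to the release root
                        hashes.put(root.relativize(p).toString(), digest);
                    }
                }
                return hashes;
            }

            public static void main(String[] args) throws Exception {
                Map<String, String> oldVer = hashTree(Paths.get(args[0]));
                Map<String, String> newVer = hashTree(Paths.get(args[1]));
                // Changed or added files form the payload sent to the remotes.
                for (Map.Entry<String, String> e : newVer.entrySet()) {
                    if (!e.getValue().equals(oldVer.get(e.getKey()))) {
                        System.out.println("SEND    " + e.getKey());
                    }
                }
                // Files only in the old tree were deleted in the new release.
                for (String k : oldVer.keySet()) {
                    if (!newVer.containsKey(k)) {
                        System.out.println("DELETE  " + k);
                    }
                }
            }
        }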

    Read the article

  • Send Special Keys to Gtk.VteTerminal

    - by Ubersoldat
    Hi. I have this OSS project called Monocaffe connections manager, which uses the Gtk.VteTerminal widget from PyGTK. A nice feature is that it allows users to send commands to different servers' consoles (cluster mode) using a Gtk.TextView for input. The way I send keystrokes to each Gtk.VteTerminal is with the feed_child method.

    For common keys there's no problem: I simply feed what the TextView receives to all the terminals. But with special keys I get into a little trouble. For "Return" I catch the event and feed the terminal a '\n'. For backspace it's the same: catch the event and feed a '\b'.

        def cluster_backspace(self, widget):
            return self.cluster_send_key('\b')

    The problem comes with other keys like Tab, the arrows and Esc, which I don't know how to feed as str to the terminal so that it recognizes them. In the case of Esc it's a real pain, because users can edit the same file on different servers using vi, but cannot escape insert mode. Anyway, I'm not looking for a complete solution, just ideas, since I've run out of them. Thanks.
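
    These keys have no single printable character, but terminals represent them as ANSI escape sequences, which can be fed as ordinary strings. A sketch (the method name is modeled on the question's cluster_send_key; the arrow sequences assume normal rather than application cursor mode):

        # Map special key names to the escape sequences a terminal expects.
        SPECIAL_KEYS = {
            'Tab':    '\t',
            'Escape': '\x1b',     # lets vi leave insert mode
            'Up':     '\x1b[A',
            'Down':   '\x1b[B',
            'Right':  '\x1b[C',
            'Left':   '\x1b[D',
        }

        def cluster_special(self, keyname):
            seq = SPECIAL_KEYS.get(keyname)
            if seq is not None:
                return self.cluster_send_key(seq)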

    Read the article

  • Cross-Origin Resource Sharing (CORS) - am I missing something here?

    - by David Semeria
    I was reading about CORS (https://developer.mozilla.org/en/HTTP_access_control) and I think the implementation is both simple and effective. However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it is the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine.

    But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information. I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access. The expanded sequence would be:

    1) Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.).

    2) The page wants to make an XHR request to abc.com: the browser allows it because abc.com is on the list, and authentication proceeds as normal.

    3) The page wants to make an XHR request to malicious.com: the request is rejected locally (i.e. by the browser) because that server is not on the list.

    I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply closing the script-tag multi-site loophole. I also checked the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
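
    For reference, the opt-in handshake under discussion is just a pair of headers; a minimal exchange looks like this (host names are placeholders):

        GET /data HTTP/1.1
        Host: abc.com
        Origin: http://original-site.example

        HTTP/1.1 200 OK
        Access-Control-Allow-Origin: http://original-site.example

    The browser adds the Origin header automatically; the foreign site opts in by echoing an allowed origin back, and the browser withholds the response from the page otherwise.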

    Read the article

  • How to set up a load/stress test for a web site?

    - by Ryan
    I've been tasked out of the blue with stress/load testing our company web site, and I know nothing about doing so. Every search I make on Google for "how to load test a web site" just comes back with various companies and software that physically do the load testing. For now I'm more interested in how to actually go about setting up a load test: what I should take into account prior to testing, which pages within my site I should test load against, and what things I'm going to want to monitor during the test.

    Our web site is a multi-tier system complete with a separate database server (IIS 7 web server, SQL Server 2000 DB). I imagine I'd want to monitor both the web server and the database server during testing; however, when setting up scenarios to load-test the web server, I'd have to use pages that query the database in order to see any load on the database server at the same time. Are web servers and database servers generally tested simultaneously, or are they done as separate tests?

    As you can see, I'm pretty clueless as to the whole operation, so any insight as to how to go about this would be very helpful. FYI, I have been tinkering with Pylot and was able to create and run a scenario against our site, but I'm not sure what I should be looking for in the results, or whether the scenario I created is even worth measuring for our site. Thanks in advance.
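
    Not an endorsement over Pylot, but for a first smoke test many people reach for ApacheBench before scripting full scenarios; a hedged example (URL is a placeholder):

        # 1000 requests, 50 concurrent, against a page that queries the DB,
        # while watching CPU, memory, disk and SQL counters on both tiers.
        # This exercises the web and database servers in the same test.
        ab -n 1000 -c 50 https://www.example.com/product-search.aspx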

    Read the article

  • WN server filter won't work

    - by Mike Fink
    WN servers have an alternative to CGI programs called filters. I have been trying to get one to work, but I have had no luck. I am writing in Python. It looks like the server is not receiving any output from the program: it parses nothing and wraps this nothing in my standard header and footer. I have chmod 755'd the program, and my index.wn file reads:

        Default-Attributes=parse
        Default-Wrappers=templates/template1.inc
        File=includeTests.html
        File=index.html
        File=archives.html
        File=contact.html
        File=style.css
        File=testProgram.py
        #here is the stuff about the filter
        File=testFilter.html
        Content-type=text/html
        Filter=testProgram.py
        Attributes=parse, cgi

    Here is what is in the filter called testProgram.py:

        #!/usr/bin/python
        print "Content-Type: text/html\n\n"
        print "hi"

    testProgram.py works perfectly if it is put into a cgi-bin folder and chmoded. I suppose my problem may lie in the fact that I have never seen a filter program written in Python; I'm not sure I have even seen a filter program at all. Does anyone out there have experience with WN servers and filters? Any ideas?

    Read the article

  • Using virtualization infrastructure for J2EE application distribution - viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example: the latest app-server patch not applied; conflicting third-party libraries inside the app-server root; server runtime and tuning parameters not configured (for example, the number of connections in the database pool).

    We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending an ear/war archive, we'd send an image with an application-server node and our application preinstalled. Some of the benefits are the same as for virtualization infrastructure in general, namely better use of hardware resources. For us, it reduces the entropy of the hosting infrastructure: distributing a VM should be less affected by the hosting environment.

    So far, the downside I see is in application-server licenses: clients would have to use dedicated servers for our solution, but this is generally already done that way. Also, there is the complexity of maintaining virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Does anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?

    Read the article

  • ASP.NET 2.0 in Virtual Directory Trying to Use SQL State Server

    - by user251660
    We have IIS 6 running on a Windows 2003 server. The root web site runs a v1.1 site. Under this site we have a virtual directory running a v2.0 site (with a separate application pool). The web.config for the root site uses SQL as its state server, and a 1.1 SQL state-server database is installed. The 2.0 virtual application does not need state, and its web.config has no reference to a state server. When we attempt to call the virtual we receive this error message:

        "Unable to use SQL Server because ASP.NET version 2.0 Session State is not
        installed on the SQL server. Please install ASP.NET Session State SQL Server
        version 2.0 or above."

    This issue currently occurs on only one web server; the rest are able to run the 2.0 virtual application. I also notice that if we call the 2.0 virtual with the IP address, it does not generate the error, whereas calling it with the host-header name does (this behaviour occurs only on the one web server with the error; all the others can be called with either the IP or the host header without error). As an additional note, the root and the virtual are running with SSL.

    My theory is that the virtual 2.0 application is inheriting the 1.1 web.config state-server entry from the root, and when it looks at the state server it sees a 1.1 version and reports that it needs a 2.0 state server. I cannot understand, however, why the other servers are not behaving this way. All of the servers are on the same OS service pack as well as the same version of the .NET Framework. Any ideas? Thanks
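
    One cheap way to test the inheritance theory above is to override session state explicitly in the virtual directory's own web.config, so nothing is inherited from the 1.1 root. A sketch, assuming the 2.0 app really needs no session state:

        <configuration>
          <system.web>
            <!-- Assumption: the 2.0 app uses no session state at all.
                 An explicit entry here stops the root site's SQL
                 state-server settings from being inherited. -->
            <sessionState mode="Off" />
          </system.web>
        </configuration>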

    Read the article

  • Error handling approach on PHP

    - by Industrial
    Hi everybody, we have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database, and all memcached keys are prefixed per application.

    Possible scenario: if a memcached server in our cluster goes boom, we want someone (a system admin on duty) to be contacted automatically by email, iPhone push notification, or any other appropriate way. If we were to install 150 identical applications for our customers on our servers and a memcached server died, all 150 applications would individually find this out and contact our system admin, who is most certainly going to think about getting a new job, one where he or she isn't woken up by 150 messages sent at 4:15 in the morning.

    Possible solution: one idea is to set up an external server for error handling that receives a $_POST or cURL request and handles storage of the error message depending on the seriousness of the actual error. Upon receiving the error call, it would of course check whether the same memcached server has already been reported as offline, so there would be no need to spam the system admin with additional reminders...

    The questions: what's a good approach to handling errors like this? How do the big guys in the industry handle it? Thanks!
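
    A sketch of the reporting side of that idea: each application fires a cheap POST at the central error endpoint and lets that endpoint de-duplicate before paging anyone. The URL and field names are assumptions:

        <?php
        // Hedged sketch: report a failure to a central collector, which is
        // responsible for de-duplication (log / mail / page decisions).
        function report_error($service, $message, $severity) {
            $ch = curl_init('https://errors.example.com/report');
            curl_setopt_array($ch, array(
                CURLOPT_POST           => true,
                CURLOPT_POSTFIELDS     => http_build_query(array(
                    'service'  => $service,   // e.g. "memcached:10.0.0.5"
                    'message'  => $message,
                    'severity' => $severity,
                )),
                CURLOPT_RETURNTRANSFER => true,
                CURLOPT_TIMEOUT        => 2,  // never let reporting block the app
            ));
            curl_exec($ch);
            curl_close($ch);
        }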

    Read the article

  • S3 file uploading from a Mac app through PHP?

    - by Ilija Tovilo
    I have asked this question before, but it was deleted due too little information. I'll try to be more concrete this time. I have an Objective-C mac application, which should allow users to upload files to S3-storage. The s3 storage is mine, the users don't have an Amazon account. Until now, the files were uploaded directly to the amazon servers. After thinking some more about it, it wasn't really a great concept, regarding security and flexibility. I want to add a server in between. The user should authenticate with my server, the server would open a session if the authentication was successful, and the file-sharing could begin. Now my question. I want to upload the files to S3. One option would be to make a POST-request and wait until the server would receive the file. Problems here are, that there would be a delay, when the file is being uploaded from my server to the S3 servers, and it would double the uploading time. Best would be, if I could validate the request, and then redirecting it, so the client uploads it directly to the s3-storage. Not sure if this is possible somehow. Uploading directly to S3 doesn't seem to be very smart. After looking into other apps like Droplr and Dropmark, it looks like they don't do this. Btw. I did this using Little Snitch. They have their api on their own web-server, and that's it. Could someone clear things up for me? EDIT How should I transmit my files to S3? Is there a way to "forward" it, or do I have to upload it to my server and then upload it from there to S3? Like I said, other apps can do this efficiently and without the need of communicating with S3 directly.

    Read the article

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building the app, and should only start worrying when traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things ad hoc and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help wondering: how do these people do it?

    Currently, I bought a shared hosting account on Lunarpages and that got me started building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:

    1) A load balancer that first receives requests and decides where to route each one.

    2) The request is then handled by a server replica that processes it, updates the database (if required), and sends the response back to the client.

    3) If a similar request comes in, a caching mechanism like memcached kicks in and returns objects from the cache.

    4) A black box that handles database replication.

    Specifically, I am trying to do the following: set up a load balancer (my homework revealed that HAProxy is one such load balancer); set up replication so that the databases can be synchronized; use memcached; configure Apache to work with multiple web servers; and partition the application to use Amazon EC2 and Amazon S3 (my application will need a great deal of storage).

    Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers and a simple load balancer with replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics, but I am unable to find something that starts off from the big picture. Can someone please help me get started?
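
    A sketch of the caching component (3) above: a read-through wrapper so a web server only hits MySQL on a cache miss (old Memcache extension API; function and key names are placeholders):

        <?php
        // Hedged sketch: check memcached first, fall back to the database,
        // and populate the cache on the way out.
        function cached_query($memcache, $pdo, $key, $sql, $ttl = 300) {
            $hit = $memcache->get($key);
            if ($hit !== false) {
                return $hit;                       // served from cache
            }
            $rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
            // Memcache::set(key, value, flags, expire-seconds)
            $memcache->set($key, $rows, 0, $ttl);
            return $rows;
        }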

    Read the article

  • How do I see if an established socket is stuck on a server that's expecting input?

    - by Parker
    I have a script that scans ports for open proxy servers. The problem is that if it encounters a login program (specifically telnet), it hangs there forever, since it doesn't know what to do, until eventually the server closes the connection. The simple solution would be to create a bunch of cases: if telnet, do this; if SSH, do that; if something else, blah blah blah. I'd like an umbrella solution, since the script is not a high priority for me. The script, as it is now, is available at http://parkrrr.net/socks/scan.phps

    On a small scale (the page maybe averages 15 hits/day) it's fine, but on a larger scale I'd be worried about a lot of open zombie sockets. Swapping in a !$strpos check doesn't work, since servers can return more information than what you requested (headers, ads, etc.). Accepting only a fixed number of bytes from the $fgets (as opposed to appending until EOF, which it does now) also does not seem to work. I am sure this is where it gets stuck:

        while (!feof($fp)) {
            $data .= fgets($fp, 512);
        }

    But what can I do? Any other suggestions/warnings would also be welcome.
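
    One umbrella fix that avoids per-protocol cases is to bound the read loop itself; a sketch of the quoted loop with a per-read timeout and an overall deadline:

        <?php
        // Hedged sketch: a telnet-style login prompt can no longer hold
        // the socket open forever.
        stream_set_timeout($fp, 5);          // per-read timeout, seconds
        $deadline = time() + 15;             // hard cap for the whole response
        $data = '';
        while (!feof($fp) && time() < $deadline) {
            $chunk = fgets($fp, 512);
            if ($chunk === false) break;
            $data .= $chunk;
            $meta = stream_get_meta_data($fp);
            if ($meta['timed_out']) break;   // peer went quiet: stop waiting
        }
        fclose($fp);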

    Read the article

  • Processing a database queue across multiple threads - design advice

    - by rwmnau
    I have a SQL Server table full of orders that my program needs to "follow up" on (call a web service to see if something has been done with them). My application is multi-threaded and could have instances running on multiple servers. Currently, every so often (on a threading timer), the process selects 100 rows at random (ORDER BY NEWID()) from the list of "unconfirmed" orders and checks them, marking off any that come back successfully.

    The problem is that there's a lot of overlap between the threads, and between the different processes, and there's no guarantee that a new order will get checked any time soon. Also, some orders will never be "confirmed" and are dead, which means they get in the way of orders that do need to be confirmed, slowing the process down if I keep selecting them over and over. What I'd prefer is that all outstanding orders get checked, systematically. I can think of two easy ways to do this:

    1) The application fetches one order to check at a time, passing in the last order it checked as a parameter, and SQL Server hands back the next unconfirmed order. More database calls, but this ensures that every order is checked in a reasonable timeframe. However, different servers may re-check the same order in succession, needlessly.

    2) SQL Server keeps track of the last order it asked a process to check up on, maybe in a table, and gives a unique order to every request, incrementing its counter. This involves storing the last order somewhere in SQL, which I wanted to avoid, but it also ensures that threads won't needlessly check the same orders at the same time.

    Are there any other ideas I'm missing? Does this even make sense? Let me know if I need to clarify.
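
    A third option, for what it's worth: let the database hand out work atomically. A hedged T-SQL sketch (SQL Server 2005+ syntax; table and column names are assumptions). READPAST makes concurrent workers skip rows another worker has already locked, so no two threads claim the same batch:

        -- Claim up to 10 unconfirmed orders not touched in the last 10 minutes
        -- and return their ids to the calling worker in one atomic statement.
        UPDATE TOP (10) dbo.Orders WITH (READPAST, UPDLOCK, ROWLOCK)
        SET    last_checked = GETUTCDATE()
        OUTPUT inserted.order_id
        WHERE  confirmed = 0
          AND (last_checked IS NULL
               OR last_checked < DATEADD(minute, -10, GETUTCDATE()));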

    Read the article
