Search Results

Search found 9816 results on 393 pages for 'blade servers'.


  • Federated (Synced) Subversion servers?

    - by Adam Haile
    Is it possible to create "federated" Subversion servers? As in, one server at location A and another at location B that automatically sync their local copies of the repository. That way, when someone at either location interacts with the repository, they are accessing their respective local server and therefore get faster response times.
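
    Subversion has no built-in multi-master replication, but svnsync (bundled with Subversion 1.4 and later) can maintain a read-only mirror at one location that tracks the master at the other. A minimal sketch, with placeholder paths and URLs:

        # At location B: create the mirror repository
        svnadmin create /repos/mirror
        # svnsync needs permission to set revision properties on the mirror
        printf '#!/bin/sh\nexit 0\n' > /repos/mirror/hooks/pre-revprop-change
        chmod +x /repos/mirror/hooks/pre-revprop-change
        # Point the mirror at the master and start pulling revisions
        svnsync initialize file:///repos/mirror http://server-a.example.com/repos/master
        svnsync synchronize file:///repos/mirror   # re-run from cron or a post-commit hook

    Reads at location B then hit the local mirror; commits must still go to the master at location A, since the mirror is read-only.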

    Read the article

  • bounce multiple servers using ant

    - by Angrezy
    Hi All. This is the restart target defined in build.xml:

        <target name="restart">
          <propertycopy name="remote.host" from="deploy.${target.env}.host.${remote.id}"/>
          <propertycopy name="remote.port" from="deploy.${target.env}.port.${remote.id}"/>
          <sshexec trust="true" host="${remote.host}" port="${remote.port}"
                   username="${scm.user}" keyfile="${scm.user.key}"
                   command="sudo /usr/local/bin/bounce_jboss"/>
        </target>

    The target server information is defined in build.properties. The above code works fine, but the restart is slow because it stops and starts one server and only afterwards stops and starts the other. Is there a way I can restart both servers in parallel, within a time frame of 45 seconds?
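
    One way to do this, sketched below under the assumption that both hosts' connection properties can be resolved up front (the property names here are made up): Ant's built-in <parallel> task runs its nested tasks concurrently, and its timeout attribute (in milliseconds) can enforce the 45-second window.

        <target name="restart-all">
          <parallel timeout="45000">
            <sshexec trust="true" host="${remote.host.1}" port="${remote.port.1}"
                     username="${scm.user}" keyfile="${scm.user.key}"
                     command="sudo /usr/local/bin/bounce_jboss"/>
            <sshexec trust="true" host="${remote.host.2}" port="${remote.port.2}"
                     username="${scm.user}" keyfile="${scm.user.key}"
                     command="sudo /usr/local/bin/bounce_jboss"/>
          </parallel>
        </target>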

    Read the article

  • More RAM vs. more servers [closed]

    - by user357972
    I was recently asked "Do you know when to decide between going for more RAM or more servers?" (in the context of scaling data mining applications). I had no idea, so what are some ways to decide? I have very little knowledge of architecture and scaling (my understanding of computer memory and what a server does is limited to the high-level basics), so tips on learning more about these things in general are also very welcome.

    Read the article

  • What is the best storage server infrastructure? DAS/NAS/SAN or installing GlusterFS/Lustre/HDFS/DRBD

    - by TORr0t
    I am trying to design an infrastructure for the project I am working on. It would be a file-sharing/downloading service (like Rapidshare), so I need large storage capacity and good scalability, and I will add new storage nodes as the project grows. I have come up with four candidate solutions: Lustre, GlusterFS, HDFS, and DRBD. To start, I would have 2 servers: one running the GlusterFS client + web server + DB server + a streaming server, and the other acting as a GlusterFS storage node. (After some time I would add more storage nodes and client servers; I don't yet know how many.)

    So I am thinking of working with GlusterFS. But I really wonder whether I have to use high-performance servers with large storage, or whether average/slow servers with large storage are enough. Or are NAS/DAS/SAN solutions better for the GlusterFS storage nodes? I might buy a NAS and install GlusterFS on it. I would be happy to hear your recommendations for the server specifications (for both clients and nodes); I really don't know whether the nodes need lots of RAM and fast CPUs, though I am sure the client servers do.

    The files will be streamed as well, so automatic file replication is important: the system should work like a cloud, copying the most-demanded files to additional nodes under high traffic, which would help me avoid scalability problems and let visitors stream/download those files. I am also open to your experiences/thoughts about any good solution; Lustre, HDFS, and DRBD are the other options, and I would be happy to hear your thoughts on those as well. Thanks
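
    For concreteness, a hedged sketch of the GlusterFS commands involved once there are at least two storage nodes (hostnames and brick paths are placeholders; assumes the glusterfs-server package is installed on both machines):

        # On the first storage node: join the second node and build a
        # 2-way replicated volume across both bricks
        gluster peer probe storage2.example.com
        gluster volume create media replica 2 \
            storage1.example.com:/export/brick1 storage2.example.com:/export/brick1
        gluster volume start media
        # On the web/client server: mount the volume via the FUSE client
        mount -t glusterfs storage1.example.com:/media /mnt/media

    The "replica 2" option is what provides the automatic file replication described above; adding capacity later is a matter of probing new peers and adding bricks.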

    Read the article

  • How many different servers are needed to keep a website running with no downtime? [closed]

    - by Mason Wheeler
    Machines go down. It's a fact of life. They may need to be rebooted for some reason, or they may have a hardware failure, or a power outage. So if I wanted to deploy a website with a server backed by a SQL database, putting the whole thing on one server wouldn't be good enough. It obviously needs at least two servers, so that if one goes down, the other can pick up the slack until the first comes back up. Of course, if I have the server software on two machines, either one of which could go down, I can't place the database on either of those two machines, because it could go down. So the database needs its own server. But that server can go down, so I need a backup database server and some sort of replication system to keep it in sync, so the main can fail over to it. So far, that's a bare minimum of 4 machines to keep one website running with a reasonable chance of no downtime (assuming no catastrophic events take place that take down both front-end servers at once or both DB servers at once, and no hacks, DDoS attacks, etc.). Am I missing any other factors, or should I consider 4 servers to be the minimum for running a website with a goal of continuing operation without downtime even when a server goes down?

    Read the article

  • Best server-side JavaScript servers

    - by fmsf
    Hey, I've been wanting to try out server-side JavaScript for a while, and I'm finding a good number of servers, like: Node.js, Rhino, SpiderMonkey, among others. Could anyone with experience of server-side JavaScript tell me which engines are the best, and why? I like Node.js because it's based on Google's V8 engine and seems easy to use. But some feedback on what you would choose would be great. Edit: Some benchmarks for Node. I'm thinking of going with this one, but feedback is still welcome. Thanks
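
    For reference, the canonical Node.js hello-world shows how little is needed to get an HTTP server up (the port is arbitrary):

        // Minimal Node.js HTTP server using the built-in http module
        var http = require('http');
        http.createServer(function (req, res) {
          res.writeHead(200, {'Content-Type': 'text/plain'});
          res.end('Hello from server-side JavaScript\n');
        }).listen(8080);
        console.log('Listening on http://localhost:8080/');

    Rhino and SpiderMonkey, by contrast, are embeddable engines rather than servers; they are usually run behind a servlet container or a CGI-style wrapper.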

    Read the article

  • Running Perl Scripts on servers that don't have the modules

    - by envinyater
    I need to run a Perl script that gathers system information and will be deployed to and executed on different Unix servers. Right now I am writing and testing it, and I'm receiving this error: Can't locate XML/DOM.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at test.pl line 7. BEGIN failed--compilation aborted at test.pl line 7. So I am simply using XML::DOM, which I assumed was part of Perl, but it isn't for this version (5.10.1) on this particular server. Anyway, is there a way I can design my script so the modules are packaged into it, while keeping the .pl extension, which is a requirement for this script?
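
    One hedged approach: ship the needed modules in a lib directory next to the script and prepend it to @INC, so the .pl file remains the only thing you invoke (the directory layout here is an assumption). One caveat: XML::DOM depends on XML::Parser, which contains compiled XS code, so a bundled copy must match the target platform.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use FindBin qw($Bin);
        use lib "$Bin/lib";   # e.g. lib/XML/DOM.pm copied from a CPAN install
        use XML::DOM;

        # Parse a file named on the command line and print the root tag name
        my $file   = shift @ARGV or die "usage: $0 file.xml\n";
        my $parser = XML::DOM::Parser->new;
        my $doc    = $parser->parsefile($file);
        print $doc->getDocumentElement->getTagName, "\n";
        $doc->dispose;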

    Read the article

  • Make two servers talk to each other

    - by Maksim
    I have an application written in GWT and hosted on Google App Engine/Java. In this application users will have the option to upload video/audio/text files to the server. Those files could be big, up to 1GB or so, and because GAE/J does not support large files I have to use another server to store them. This would be easy to implement if browsers had no cross-domain security restrictions. So what I'm thinking is to have the GAE server talk to my server (Glassfish, or any other Java server if needed) to get the URL of the file and, if possible, the status of the upload (what percentage has been uploaded) so I can show progress on the client's screen. Here is what I'm thinking of doing: when the user loads the GWT page hosted on GAE/J, he/she will upload the file to my server; my server will then send a response back to GAE, and GAE will send a response to the client. If this scenario is possible, what would be the best way to implement the GAE-to-Glassfish conversation?
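
    On the GAE side the conversation can be plain HTTP, since java.net.URL on App Engine is backed by the URLFetch service. A hedged sketch, where the status endpoint on the Glassfish box and its response format are assumptions:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;

        public class UploadStatusClient {
            // Ask the file server (hypothetical endpoint) how an upload is going
            public static String fetchStatus(String uploadId) throws Exception {
                URL url = new URL("http://files.example.com/status?id=" + uploadId);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream(), "UTF-8"));
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) body.append(line);
                in.close();
                return body.toString(); // e.g. {"percent": 42, "url": "..."}
            }
        }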

    Read the article

  • Visual Studio - 'Browse UDDI Servers' -> 404 ?

    - by southof40
    Hi - I have an ASP.NET application which implements a web service. Within the ASP.NET application there's a test script which consumes the web service, and it all works, etc. I have built a .NET console application and want to 'Add a Web Reference' so that the console app can consume the web service provided by the ASP.NET application. When I use 'Browse UDDI Servers on the local network' to do that, any plausible URL I use results in a 404. I'm guessing I need to do something to my ASP.NET application so that it acts as a UDDI server? Does anyone know what? Update: I just wanted to clarify something - I'm not desperate to use UDDI; it just seems to be the only option in my circumstances, which are: I'm actually doing this for another developer who is used to using Visual Studio to do this stuff, and the other developer's system will need to run on another machine within the same network.

    Read the article

  • W2k8 RC1: Windows Media Servers (WMS) as proxy

    - by da_didi
    I will have one streaming server (W2k8, streaming protocol not yet decided [RTSP, MMS, HTTP]) and half a dozen streaming servers acting as proxies to save bandwidth. I have read the documentation and installed the modules, but I am unsure how to configure the proxies according to http://technet.microsoft.com/de-de/library/ee126142(en-us,WS.10).aspx - as a proxy or as a reverse proxy - and how to minimize the bandwidth needed between the origin server and the proxies. What is the best way to realize my setup? Any short how-tos? How can I get all players to use the proxy? Route all RTSP/MMS/HTTP requests through my proxy? Announce the proxy via DHCP leases? Thanks!

    Read the article

  • Python: running command-line servers - they're not listening properly

    - by deepblue
    Hello all. I'm attempting to start a server app (written in Erlang; it opens ports and listens for HTTP requests) from the command line using pexpect (or even directly using subprocess.Popen()). The app starts fine, logs to the screen fine (via pexpect), and I can interact with it via the command line as well... the issue is that the server won't listen for incoming requests. The app listens when I start it manually, by typing commands at the command line; starting it via subprocess/pexpect somehow stops it from listening. When I start it manually, "netstat -tlp" shows the app as listening; when I start it via Python (subprocess/pexpect), netstat does not register the app... I have a feeling it has something to do with the environment, the way Python forks things, etc. Any ideas? Thank you
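
    For comparison, a hedged subprocess-only launch (the command path and log file are placeholders). Two things differ from the pexpect route: the child inherits the full environment explicitly, and it is detached into its own session, so it is not tied to a pseudo-terminal that dies with the Python process.

        import os
        import subprocess

        # Launch the Erlang server detached from our terminal
        proc = subprocess.Popen(
            ["/usr/local/bin/start_server", "-noshell"],
            env=os.environ.copy(),            # inherit HOME, ERL_LIBS, etc.
            stdout=open("/tmp/server.log", "a"),
            stderr=subprocess.STDOUT,
            preexec_fn=os.setsid,             # new session: survives parent exit
        )
        print "started server, pid", proc.pid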

    Read the article

  • SQL Server 2005 script with join across Database Servers

    - by Robin Day
    I have the following script which I use to give me a simple "diff" between tables on two different databases. (Note: in reality my comparison is on a lot more than just an ID.)

        SELECT MyTableA.MyId, MyTableB.MyId
        FROM MyDataBaseA..MyTable MyTableA
        FULL OUTER JOIN MyDataBaseB..MyTable MyTableB
            ON MyTableA.MyId = MyTableB.MyId
        WHERE MyTableA.MyId IS NULL
           OR MyTableB.MyId IS NULL

    I now need to run this script on two databases that exist on different servers. At the moment my solution is to back up the database from one server, restore it to the other, and then run the script. I'm pretty sure a direct cross-server query is possible; however, is it likely to be a can of worms? This is a very rare task I need to perform, and if it involves a large number of DB setting changes then I will probably stick to my backup method.
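
    The standard mechanism for this is a linked server, which needs only a one-time registration rather than broad setting changes. A hedged sketch (the server and login names are placeholders):

        -- On server A: register server B and map a login for it
        EXEC sp_addlinkedserver @server = N'ServerB', @srvproduct = N'SQL Server';
        EXEC sp_addlinkedsrvlogin @rmtsrvname = N'ServerB', @useself = N'FALSE',
             @rmtuser = N'readonly_user', @rmtpassword = N'********';

        -- The diff then uses a four-part name for the remote table
        SELECT MyTableA.MyId, MyTableB.MyId
        FROM MyDataBaseA..MyTable MyTableA
        FULL OUTER JOIN ServerB.MyDataBaseB.dbo.MyTable MyTableB
            ON MyTableA.MyId = MyTableB.MyId
        WHERE MyTableA.MyId IS NULL
           OR MyTableB.MyId IS NULL;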

    Read the article

  • Deploying a Rails App to Multiple Servers using Capistrano - Best Practices

    - by Louise
    I have a Rails application that I need to deploy to 3 servers - machine1.com, machine2.com and machine3.com. I want to be able to deploy it to all machines at once and to each machine individually. Can someone help me out with a skeleton Capistrano config file/recipe? Should it all be in deploy.rb, or should I break it out into machine1.rb, etc.? I thought I was on the right track getting Capistrano to take command-line arguments, but it choked when I tried to set the roles within the namespaces. I'd pass in 'hosts=1,2,3' as an argument, split on the commas, and set the role :app/:web/:db to "machine#{host}.com" inside an each do |host| {}... Anyway, other than creating 4 different deploy.rb files and renaming one before running cap deploy each time, I'm stumped. I'd like to be able to do the following: cap deploy:machine1:latest_version_from_svn and cap deploy:all_machines:latest_version_from_svn. I just don't know if it should all be in deploy.rb, split up with namespaces, or broken into multiple deploy*.rb files.
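
    A hedged Capistrano 2-style starting point, keeping everything in deploy.rb (the repository URL and paths are placeholders). With the roles listing all three machines, a plain "cap deploy" hits every server, while Capistrano's built-in HOSTS filter targets one, e.g. cap HOSTS=machine1.com deploy.

        set :application, "myapp"
        set :repository,  "svn://svn.example.com/myapp/trunk"   # placeholder
        set :deploy_to,   "/var/www/#{application}"

        # All three machines serve the app; one also hosts the database
        role :app, "machine1.com", "machine2.com", "machine3.com"
        role :web, "machine1.com", "machine2.com", "machine3.com"
        role :db,  "machine1.com", :primary => true

    This avoids both the 4-file shuffle and the namespace/role juggling, since host selection happens at invocation time rather than in the recipe.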

    Read the article

  • PHP: Coding long-running scripts when servers impose an execution time limit

    - by thomasrutter
    FastCGI servers, for example, impose an execution time limit on PHP scripts which cannot be altered using set_time_limit() in PHP. IIS does this too I believe. I wrote an import script for a PHP application that works well under mod_php but fails under FastCGI (mod_fcgid) because the script is killed after a certain number of seconds. I don't yet know of a way of detecting what your time limit is in this case, and haven't decided how I'm going to get around it. Doing it in small chunks with redirects seems like one kludge, but how? What techniques would you use when coding a long-running task such as an import or export task, where an individual PHP script may be terminated by the server after a certain number of seconds? Please assume you're creating a portable script, so you don't necessarily know whether PHP will eventually be run under mod_php, FastCGI or IIS or whether a maximum execution time is enforced at the server level.
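
    To make the chunk-and-redirect kludge concrete, here is a hedged sketch (import_rows() and the batch size are made-up placeholders): each request processes one slice, records how far it got in the URL, and redirects to itself, so no single request ever approaches the server's limit.

        <?php
        // import.php - resumable import, one batch per request
        $offset = isset($_GET['offset']) ? (int)$_GET['offset'] : 0;
        $batch  = 1000;

        $finished = import_rows($offset, $batch);   // hypothetical worker

        if (!$finished) {
            // Hand off to a fresh request before this one is killed
            header('Location: import.php?offset=' . ($offset + $batch));
            exit;
        }
        echo "Import complete.";

    This works under mod_php, FastCGI and IIS alike, at the cost of keeping the import state externally (here, in the offset parameter) between requests.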

    Read the article

  • Haxe app and Gtk-WARNING on Linux servers

    - by Cambiata
    Hi! I'm trying a Haxe-compiled solution called FAR (Flash Archiver), created by Edwin van Rijkom (http://code.google.com/p/vanrijkom-flashlibs/), which uses a command-line tool for creating compressed archives. When running the FAR tool locally on my Ubuntu laptop, everything works fine. When running it remotely (in a terminal, as root) on my Ubuntu and Debian servers, it gives the following error: Gtk-WARNING **: cannot open display: I've tried to reach Edwin about this, but no response so far. Maybe it has something to do with installation or user rights on the server? Any clue?
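
    The warning itself means the tool is trying to open an X display, which exists in a desktop session but not in a remote root shell. A common hedged workaround is to give it a virtual framebuffer (assumes the xvfb package is available; the FAR arguments below stand in for whatever you run locally):

        apt-get install xvfb
        xvfb-run ./far <the same arguments you use on your laptop>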

    Read the article

  • Apache front end using mod_proxy_ajp to Tomcat on different servers

    - by user302307
    Does anyone know the steps to run Apache on server A as a front end and use mod_proxy_ajp to connect to Tomcat instances on server B? I want Apache on server A to do name-based vhosts that connect to many Tomcat servers. So far I can only get mod_proxy_ajp to work when Apache and Tomcat are on the same server. What I've tried so far on server A, running Apache 2.2:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName tc0.domo.lan
            ErrorLog "C:\Apache\Apache2.2\logs\tc0.ajp.error.log"
            CustomLog "C:\Apache\Apache2.2\logs\tc0.ajp.access.log" combined
            DocumentRoot C:/htdocs0
            AddDefaultCharset Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / ajp://192.168.77.233:8009/
            ProxyPassReverse / ajp://192.168.77.233:8009/
            <Directory "C:/htdocs0">
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    Server B is 192.168.77.233, running the Tomcat 6 connector. I can confirm Tomcat works by going to http://192.168.77.233:8080/manager/html. When I use a packet sniffer on server A, I see that server A tries to connect to server B on port 80 when I request http://tc0.domo.lan/manager/html on server A.
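
    On the Tomcat side, the AJP connector in server B's conf/server.xml must be present and uncommented, and port 8009 must be reachable from server A; the packet trace showing traffic on port 80 suggests the request is not matching the ProxyPass rule in that vhost at all, which is worth double-checking first. The stock Tomcat 6 connector element looks like this:

        <!-- conf/server.xml on server B -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />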

    Read the article

  • Polling versus socket servers for online Flash games

    - by justin
    Hi, I want to make an online Flash game. It will have social features, but the gameplay will be primarily single-player. For example, no two players will appear on the screen at once, the social interaction will be through asynchronous messages, and there won't be real-time chat or anything. Much of the logic will happen in the client; the server will validate the client logic, but it won't need to be totally synchronous, which is why I'm thinking polling might be satisfactory. I have read in many places that socket servers can be more efficient than polling for online games, but is that mainly a consideration for games with more multi-player interaction than the game I have described? If many users are playing online at the same time, but each is playing a relatively isolated game and not interacting in real time with other players, could polling be okay, or would sockets be advisable no matter what for an online game that you envision many people playing at the same time? Thanks!
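
    For scale, a hedged ActionScript 3 sketch of the polling approach (the endpoint URL and the 30-second interval are arbitrary): the client periodically asks the server for queued asynchronous messages, which suits the isolated-gameplay model described above.

        import flash.events.Event;
        import flash.events.TimerEvent;
        import flash.net.URLLoader;
        import flash.net.URLRequest;
        import flash.utils.Timer;

        var lastSeen:int = 0;
        var timer:Timer = new Timer(30000);   // poll every 30 seconds
        timer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
            var loader:URLLoader = new URLLoader();
            loader.addEventListener(Event.COMPLETE, function(ev:Event):void {
                trace("new messages: " + URLLoader(ev.target).data);
            });
            loader.load(new URLRequest("http://game.example.com/poll?since=" + lastSeen));
        });
        timer.start();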

    Read the article

  • Pinging CS Servers

    - by Zubair1
    Hello. This has been bothering me for a while: can someone show me how to ping a Counter-Strike server? I just want to ping the server and see if it is online, that's all. I found many small snippets online that use fsock and UDP to do this, but none of them actually did the job I wanted. Most of the ones I found reported offline servers as online. I would really appreciate it if someone could provide this useful information (code). Thank you in advance ^_^
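
    A hedged sketch of the usual approach: send the Steam A2S_INFO query over UDP to the game port and require an actual reply within a timeout (the IP and port are placeholders):

        <?php
        // A2S_INFO: the standard Source/GoldSrc server query packet
        $packet = "\xFF\xFF\xFF\xFFTSource Engine Query\x00";

        $fp = @fsockopen('udp://203.0.113.10', 27015, $errno, $errstr, 2);
        if (!$fp) {
            die("offline (socket error: $errstr)\n");
        }
        stream_set_timeout($fp, 2);   // don't wait forever for a reply
        fwrite($fp, $packet);
        $reply = fread($fp, 1400);
        $meta  = stream_get_meta_data($fp);
        fclose($fp);

        echo ($reply !== false && $reply !== '' && !$meta['timed_out'])
            ? "online\n" : "offline\n";

    Snippets that report offline servers as online typically skip the read step: with UDP, fsockopen() succeeds whether or not anything is listening, so only an actual reply proves the server is up.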

    Read the article

  • What is the best possible technology for pulling huge amounts of data from 4 remote servers

    - by Habib Ullah Bahar
    Hello. For one of our projects, we need to pull huge amounts of real-time stock data from 4 remote servers across two countries. The trivial approach is to check the sources at a regular interval and save the updates to the database. But as this is real-time stock data for more than 1000 companies, I would have to pull every second, which I think isn't good in terms of memory and bandwidth. Please give me suggestions on which technology/platform we should choose so this can be achieved easily and with good performance. [We are flexible here: PHP, Python, Java, or Perl - any one of them is OK for us.]
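
    Whatever the language, the polling loop itself is simple; what matters is fetching the 4 sources concurrently so one slow feed doesn't stall the rest. A hedged Python 2 sketch (the feed URLs are placeholders, and the parse-and-save step is left as a stub):

        import threading
        import time
        import urllib2

        SOURCES = [
            "http://feed1.example.com/quotes",
            "http://feed2.example.com/quotes",
            "http://feed3.example.com/quotes",
            "http://feed4.example.com/quotes",
        ]

        def poll(url):
            while True:
                try:
                    data = urllib2.urlopen(url, timeout=5).read()
                    # ... parse `data` and save the updates to the database ...
                except Exception, e:
                    print "fetch failed for %s: %s" % (url, e)
                time.sleep(1)   # the 1-second interval from the question

        # One worker thread per source
        for url in SOURCES:
            t = threading.Thread(target=poll, args=(url,))
            t.daemon = True
            t.start()

        while True:
            time.sleep(60)   # keep the main thread alive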

    Read the article

  • PHP Session code work differently on two servers

    - by williamsdb
    I have some code which works fine on one server but gives a session header warning on another: Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent. I have checked the php.ini settings on the two servers and they are identical. I know the warning message suggests that something was output before session_start(), but what I don't understand is why the same code works on one server and not the other. Is there anything else that could explain it besides the php.ini settings?
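
    The php.ini on disk and the configuration the web server actually loads can differ (per-SAPI ini files, .htaccess php_value overrides, or a stray UTF-8 BOM in an include). A hedged diagnostic snippet using standard PHP functions, to run on both servers:

        <?php
        // Output buffering swallows early output, hiding the warning
        var_dump(ini_get('output_buffering'));

        // Ask PHP exactly where output started before the headers went out
        if (headers_sent($file, $line)) {
            echo "output started in $file on line $line";
        }

    A common finding is output_buffering enabled (e.g. 4096) on the quiet server and 0 on the warning one, with the real culprit being whitespace or a BOM before the opening <?php tag of some include.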

    Read the article

  • cURL PHP Proper SSL between private servers with self-signed certificate

    - by PolishHurricane
    I originally had a connection between my 2 servers running with CURLOPT_SSL_VERIFYPEER set to "false" and no Common Name in the SSL cert, to avoid errors. The following is the client code that connected to the server with the certificate:

        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);

    However, I recently changed this code (set it to true) and specified the computer's certificate in PEM format:

        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, TRUE);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
        curl_setopt($ch, CURLOPT_CAINFO, getcwd().'/includes/hostcert/Hostname.crt');

    This worked great on the local network from a test machine, as the certificate is signed with its hostname as the CN. How can I set up the PHP code so that it trusts only the hostname computer and maintains a secure connection? I'm well aware you can just set CURLOPT_SSL_VERIFYHOST to "0" or "1" and CURLOPT_SSL_VERIFYPEER to "false", but these are not valid solutions, as they break the SSL security.

    Read the article
