Search Results

Search found 9816 results on 393 pages for 'blade servers'.


  • Will spreading your server's load not just consume more resources?

    - by Saif Bechan
    I am running a website with heavy real-time updates. The amount of resources needed per user is quite high; I'll give you an example.

    Current setup
    Every visit: the application is PHP/MySQL, so on every visit static and dynamic content is loaded. Resources: Apache, PHP, MySQL.
    Every second (anything longer than a second is just too long): the website needs to be updated in real time, so every second an AJAX call updates the page. Resources: jQuery, Apache, PHP, MySQL.
    Average spending for a single user (spending one minute and visiting 3 pages):
    Apache: +/- 63 requests/responses serving static and dynamic content (img, css, js, html)
    PHP: +/- 63 requests/responses
    MySQL: +/- 63 requests/responses
    jQuery: +/- 60 requests/responses

    Proposed optimization
    I want to optimize this process, but I suspect it might come out the same in the end. Before implementing and testing (which will take weeks) I wanted to get some second opinions from you guys.
    Every visit: I want to start by putting nginx in front as a proxy to deliver the static content. Resources: dynamic: Apache, PHP, MySQL; static: nginx. This will take a lot of load off Apache.
    Every second: for the script that runs every second I want to set up Node.js (server-side JavaScript) with nginx in front, so that jQuery makes one request per minute and Node.js streams the data to the client every second. Resources: jQuery, nginx, Node.js, MySQL.
    Average spending for a single user (spending one minute and visiting 3 pages):
    nginx: 4 requests/responses serving mostly static content (img, css, js)
    Apache: 3 requests, only the pages
    PHP: 3 requests, only the pages
    Node.js: 1 request / 60 responses
    jQuery: 1 request / 60 responses
    MySQL: 63 requests/responses

    As you can see, in the optimized setup the load from Apache and PHP is lifted and placed on nginx and Node.js, which are known for their light footprint and good performance. But I have my doubts, because there are still two extra programs loaded in memory consuming CPU. So is it better to have fewer programs doing the job, or more? Before I spend a lot of time setting this up, I would like to know whether it will be worth the while.
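
    A rough load test of the current stack against a small prototype of the nginx/Node.js front end would give concrete numbers before weeks are committed to the rebuild. Below is a minimal sketch with ApacheBench (the ab tool shipped with Apache); the URLs, request count and concurrency level are illustrative assumptions, not values from the setup above:

        # hammer a dynamic page through the current Apache/PHP stack
        ab -n 1000 -c 50 http://example.com/page.php
        # same test against a prototype with nginx in front serving the static content
        ab -n 1000 -c 50 http://prototype.example.com/page.php
        # snapshot memory and CPU of the competing daemons while a test runs
        top -b -n 1 | grep -E 'httpd|apache2|nginx|node'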

    Read the article

  • openldap-servers-2.2.13-12.el4_8.2 RHEL 4 err=6

    - by coderwhiz
    I have been seeing the following error codes on our LDAP server:

        zgrep -o err=[0-9]* ldap.log.1.gz | sort | uniq -c
        106664 err=0
           146 err=16
           288 err=4
            29 err=49
          8106 err=6

    Can someone explain what err=6 is exactly and whether it's a big problem? I have lately been seeing some failures to authenticate and wonder if they are related to these errors. I have also seen mention of a possible timeout problem in the 2.2 code base and am not sure whether there's a patch or whether I would have to upgrade to the latest OpenLDAP version. Thanks, kosta
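
    To narrow down which operations produce err=6, the same log can be mined a little further. A minimal sketch, assuming the standard OpenLDAP "conn=... op=... RESULT ... err=6" log format; the connection number in the last command is a placeholder:

        # show the full RESULT lines carrying err=6, not just the code
        zgrep 'err=6' ldap.log.1.gz | head -n 20
        # list the connections involved, then follow one of them from bind to unbind
        zgrep 'err=6' ldap.log.1.gz | grep -oE 'conn=[0-9]+' | sort -u | head
        zgrep 'conn=12345 ' ldap.log.1.gz | head -n 40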

    Read the article

  • nginx: override global ssl directives for specific servers

    - by alkar
    In my configuration I have placed the ssl_* directives inside the http block and have been using a wildcard certificate signed by a custom CA without any problems. However, I now want to use a new certificate, signed by a recognized CA, for a new subdomain (its own server block). Let's say the domain is blah.org. I want my custom certificate with CN *.blah.org to be used on all domains except for new.blah.org, which should use its own certificate/key pair with CN new.blah.org. How would one do that? Adding new ssl_* directives inside the server block doesn't seem to override the global settings.
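
    Whatever the final layout, it helps to verify which certificate nginx actually presents for each name. Below is a minimal check with openssl s_client, using the hostnames from the question; it assumes the clients support SNI, which is what allows two certificates to share one IP address:

        # certificate presented for a wildcard-covered name
        echo | openssl s_client -connect blah.org:443 -servername www.blah.org 2>/dev/null | openssl x509 -noout -subject
        # certificate presented for the new subdomain
        echo | openssl s_client -connect new.blah.org:443 -servername new.blah.org 2>/dev/null | openssl x509 -noout -subject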

    Read the article

  • Django - Moving database from development to production servers

    - by Garfonzo
    I am working on a Django project with a MySQL backend. What is the best way to update the production server's database to reflect the changes made on the development server's database? When I develop now, I make some changes to a models.py file, then create a schemamigration using South. Sometimes I do several migrations across several apps within the main project folder before the code is ready for the production database. This means there are several migration files in the app/migrations/ folders created by South. So on the production server, how does one update the database to reflect all the changes made in development, without any data loss?
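
    For reference, the usual South workflow is that the generated migration files are committed along with the code, and the production side only ever runs migrate, which applies the pending migrations in order while preserving existing data. A minimal sketch, assuming South is installed on the production host and using a hypothetical app name myapp:

        # on the development machine, after editing models.py
        python manage.py schemamigration myapp --auto   # generates the migration file; commit it with the code
        # on the production server, after deploying the code (migration files included)
        python manage.py migrate myapp                  # applies only the not-yet-applied migrations
        python manage.py migrate --list                 # shows applied (*) versus pending ( ) migrations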

    Read the article

  • Improve file transfer speed between Windows PCs and servers

    - by Geotarget
    I've set up a server and connected multiple PCs in my workplace to it. Sadly, data transfer speeds top out at about 3 MB/sec per connection, which is slow for file transfers, especially large ones. I'm using Windows file sharing; the server is Windows Server 2008 (2 GHz CPU, 1 GB RAM) and the client PCs mostly run Windows 7. How can I find the bottlenecks in my network and improve file sharing speed within the network?
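
    A useful first step is separating raw network throughput from SMB and disk overhead. Below is a minimal sketch with iperf, assuming it can be installed on the server and on one client PC; the hostname is a placeholder:

        # on the server
        iperf -s
        # on a client PC: measure raw TCP throughput to the server for 30 seconds
        iperf -c fileserver.example.local -t 30
        # if iperf reports roughly 100 Mbit/s (about 12 MB/s) or better, the wire is fine and the
        # bottleneck is more likely SMB tuning, disk I/O or the server's 1 GB of RAM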

    Read the article

  • Reverse proxying with NGINX to two back-end servers

    - by aag
    I am trying to learn how to configure the Nginx proxy. All requests from outside (www.external.com) should go to the internal server 10.10.10.16:2080, except for www.external.com/nagios requests, which should go to internal 10.10.10.18. My location blocks look as follows:

        location ~* / {
            proxy_buffers 16 4k;
            proxy_buffer_size 2k;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Accept-Encoding "";
            proxy_pass http://10.10.10.16:2080;
        }

        # nagios server
        location ~* /nagios/ {
            proxy_buffers 16 4k;
            proxy_buffer_size 2k;
            proxy_buffering off;
            # proxy_set_header Host $host;
            # proxy_set_header X-Real-IP $remote_addr;
            # proxy_set_header Accept-Encoding "";
            proxy_pass http://10.10.10.18;
        }

    The first location seems to work fine. However, any request to www.external.com/nagios sends the browser into the eternal pastures. Of course, 10.10.10.18/nagios was tested and works fine. What am I missing?
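
    One way to narrow this down is to hit the Nagios backend directly and then through the proxy, so the failing hop is obvious. A minimal sketch with curl, using the addresses from the question; -m caps the wait so a hanging proxy_pass shows up as a timeout rather than an endless spinner:

        # backend reached directly: expected to answer
        curl -sI -m 10 http://10.10.10.18/nagios/ | head -n 1
        # same path through the proxy: does it time out, return an error, or redirect somewhere odd?
        curl -sI -m 10 http://www.external.com/nagios/ | head -n 1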

    Read the article

  • Squid 2.7 in offline_mode yet tries to contact DNS servers to resolve addresses

    - by William C
    I installed Squid 2.7 to act as a web cache on my laptop, so that I can browse previously visited sites when I don't have Wi-Fi. Apart from http_access allow all, I've made no changes to the default squid.conf configuration. When I turn offline_mode ON, disconnect from the Internet and then visit sites, I get the following on every site I visit:

        The following error was encountered:
        Unable to determine IP address from host name for whatever.sitename.com
        The dnsserver returned: Timeout

    What settings do I need to add to squid.conf so I can browse sites offline?

    Read the article

  • SharePoint 2010 MySites - Host on separate servers

    - by Chris W
    We're playing with the SP 2010 beta ahead of a planned deployment later this year in an academic environment. We anticipate that the majority of traffic will go through MySites once everything is provisioned, so we're looking at how to plan our SharePoint topology to scale nicely. An initial thought is to run the main portal on one server, host "Student" MySites on a second server and "Staff" MySites on a third. Is it actually possible to do this easily, or are we going down a bad path? Specifically: can we have two different MySites site collections, each hosted on a dedicated server? If so, can we configure SharePoint to work out from the user's logon account which type of user they are and route them to the correct server?

    Read the article

  • Copying files between servers by creation time

    - by driftux
    My bash scripting knowledge is very weak, which is why I'm asking for help here. What is the most efficient bash script, performance-wise, to find and copy files from one Linux server to another according to the specification below? I need a bash script that finds only new files created on server A within the last 10 minutes, in directories named "Z", and transfers them to server B. I think it can be done by building a command like "scp /X/Y.../Z/file root@hostname:/X/Y.../Z/" and executing it for each new file found. If the script finds no matching remote path on server B, it should skip that file and continue with the next file whose directory does exist. Files should be copied with their permissions, group, owner and creation time. /X/Y... stands for various directory paths. I want to set up a cron job to execute this script every 10 minutes, so performance is very important in this case. Thank you.
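
    A minimal sketch of one way to do this, run on server A. It assumes directories named exactly "Z" somewhere under /X and key-based SSH access to root@hostname, and it uses modification time because plain find has no portable creation-time test. Note that scp -p preserves modes and timestamps but not ownership; rsync -a (last comment) is the closer match to the stated requirements:

        #!/bin/bash
        # every directory called Z under /X, then every file in it changed in the last 10 minutes
        find /X -type d -name 'Z' -print0 |
        while IFS= read -r -d '' dir; do
            find "$dir" -maxdepth 1 -type f -mmin -10 -print0 |
            while IFS= read -r -d '' file; do
                # copy only if the same directory already exists on server B
                if ssh root@hostname "[ -d '$dir' ]"; then
                    scp -p "$file" "root@hostname:$dir/"
                    # rsync -a "$file" "root@hostname:$dir/"   # alternative: also preserves owner/group
                fi
            done
        done

    Run from cron every 10 minutes, this can copy a file twice if it lands right on the boundary; comparing against a timestamp file with find -newer is the usual fix if that matters.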

    Read the article

  • Use Amazon EC2 tools in Capistrano to get the servers to push the code to

    - by APZ
    I am trying to use the EC2 tools to get all the machines with a particular tag into some kind of array in the /config/deploy/prod.rb file in Capistrano. Something like this in prod.rb (untested):

        workers-array[] = $(ec2-describe-instances -F vpc-id=1234 -F tag:Env=prod -F tag:SystemType=worker)
        for (i = 0; i < workers-array.len; i++) {
            role :worker-A, workers-array[i]
        }

    I am not sure how to do this in Capistrano, and I'm new to Ruby too. Guys, any help on this would be really appreciated.
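
    The shell half of the problem is extracting hostnames from ec2-describe-instances; Capistrano can then read that list and hand each entry to role. A minimal sketch of the extraction only, assuming the classic EC2 API tools whose tab-separated INSTANCE lines carry the public DNS name in the fourth field (the field position varies between tool versions, so check it against your own output first):

        # hostnames of running prod workers in the given VPC, one per line
        ec2-describe-instances -F vpc-id=1234 -F tag:Env=prod -F tag:SystemType=worker \
            -F instance-state-name=running | grep '^INSTANCE' | cut -f4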

    Read the article

  • What are optimal strategies for using MapReduce and other applications on the same server?

    - by user45532
    I have two applications that I need to run continuously to process data:
    1.) An app that processes and aggregates information from sources
    2.) A MapReduce workflow* that processes the above info
    I've thought about either getting VPS hosting or getting my own inexpensive server and using Xen to split the resources of the server. Getting a quad-core box with 2 GB of RAM seems a lot cheaper than the grid options I've seen at Slicehost, Rackspace and others...

    Read the article

  • Migrate servers and mailboxes?

    - by johnnietheblack
    I am moving a website from one hosting provider to another, and this of course means that I need to migrate all the mailboxes as well. Do I have to manually move all the old emails from one server to another, or will the email clients "save" a copy of the old emails on each computer?

    UPDATE: Pardon the naivety - I hope this additional info helps. I'm doing this remotely, so I am not sure which email client(s) people are using at the office, but I will need to be prepared for both POP and IMAP setups. Also, the server they currently have is on VPS.net (cPanel), and we are migrating to a MediaTemple Dedicated Virtual (Plesk). Both are Linux.

    Read the article

  • Additional Hard Drives for Servers

    - by Abs
    Hello all, I am developing a web app where I will have to save lots of files, and I am trying to work out the directory structure and where things should be saved. I have had a look at the dedicated server I want to buy, and for storage it lists: 2x 1TB SATA in RAID1. The space is enough, but I am guessing this will not be one hard drive? Will I have to save files on one hard drive and, when that fills up, use the other? For the Fedora distro, what is the path of the second drive? Is there a primary drive where I will be able to set up my webroot? I am sorry, this is all new to me. It would be great to get links and advice on how things actually work when it comes to additional hard drives etc. Thanks all.
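
    For what it's worth, RAID1 mirrors the two disks, so the operating system sees a single roughly 1 TB volume rather than two separate drives to fill up in turn. A couple of standard commands confirm this once the box is delivered; the software-RAID view below is an assumption, a hardware controller would simply present one disk-like device:

        df -h              # mounted filesystems and their sizes: expect one ~1 TB volume, not two drives
        cat /proc/mdstat   # Linux software-RAID view: a single md device built from the two SATA disks
        fdisk -l           # the raw disks or arrays the kernel can see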

    Read the article

  • Any Windows-based OpenID servers out there? [closed]

    - by Brian Knoblauch
    I've been looking to set up an OpenID server for a special project, but haven't found any workable OpenID server software packages. I was originally looking for a *nix solution and found several, but they all had some kind of issue. So far I've tried JOIDS, community-id, and a couple of others whose names I unfortunately can't remember. I've also come to the conclusion that even if I had managed to get one of those going, the management and upgrade cycles would have placed an undue burden on the company (only a couple of part-time sysadmins with *nix knowledge; the day-to-day people are primarily Windows). So I'm hoping someone knows of a functional Windows one that will be easy to run in a minimal-support environment...

    Read the article

  • Forensics on Virtual Private Servers [closed]

    - by intiha
    These days, with all the talk of hacked machines being used for spreading malware and for botnet C&C, the one issue that is not clear to me is: what do law enforcement agencies do once they have identified a server as the source or controller of an attack/APT, and that server is a VPS in my cluster/datacenter? Do they take away the entire machine? That option seems to carry a lot of collateral damage, so I am not sure what happens and what the best practices are for sysadmins to help law enforcement do its job while keeping our own jobs!

    Read the article

  • How to share datastores between multiple Exchange servers?

    - by Johan
    I have an Exchange 2003 box that is seriously overstressed, and I want to transfer its duties to a new and faster box. I cannot suffer downtime, so I have to do this live. Here's what I plan to do:
    Install Exchange 2003 on the new server.
    Set up the new server so it will accept requests from users for their mailboxes. I want to do as little manual setup as possible, because that will eat up my time and is too error-prone.
    Then transfer my datastores one by one to the new server and have those users (once their datastore on the new server is up and running) get their data from the new server without them noticing.
    I don't have to transfer all the datastores; some of them need to stay on the old box, because I'm still waiting for extra HD space to arrive from the supplier.
    What steps do I need to follow to do this? The new box has never seen this domain before, the old Exchange server is not the DC, and we have a dedicated DC.

    Read the article

  • How can I simulate production servers at home with Linux VMs [closed]

    - by user31
    I am thinking of building a small simulation of how the big companies run their systems in my home environment, to get a feel for it. I have a server with 8 GB of RAM and a quad-core processor. I am thinking of the following setup, if that's possible; I have not worked with bigger companies, so I want to know how I can do this. I am thinking of creating 5 virtual machines:
    VM1 will be the database server and will have all the databases: MySQL, PostgreSQL, SQLite, MongoDB and Oracle.
    VM2 will be the web server and will have Apache and Tomcat installed.
    VM3 will be the file server, where I will keep all the website files.
    VM4 I am thinking of as the main box where I can install the Python, PHP and Java/J2EE sites, but I'm not sure.
    VM5 will run Windows Server 2008 for C# .NET applications.
    My main idea is to be able to host sites in PHP, Python and Java/J2EE with Spring. Is my setup OK, or am I missing a few things? Please guide me toward a correct setup so that I can learn this stuff.

    Read the article

  • Preventing back-connects on cPanel servers

    - by Fernando
    We run a cPanel server and someone gained access to almost all accounts using the following steps:
    1) He gained access to a user account due to a weak password. Note: this user didn't have shell access.
    2) With this user account, he accessed cPanel and added a cron task. The cron task was a Perl script that connected back to his IP, letting him send shell commands to the server.
    3) Having a non-jailed shell, he was able to change the content of most websites on the server, especially for users who had set their folders to 777 (unfortunately a common recommendation and sometimes a requirement for some PHP software).
    Is there a way to prevent this? We started by disabling cron in the cPanel interface, but this is not enough; I see a lot of other ways a user could run that Perl script. We have a firewall running and blocking uncommon outgoing ports, but he used port 80, and I can't block that port because a lot of processes use it to reach the outside, even cPanel itself.
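
    One mitigation used on this kind of box (similar in spirit to what CSF does for outbound SMTP) is to restrict outbound connections per Unix user rather than per port, so port 80 stays open for the system but not for ordinary account users. A minimal, hedged sketch with the iptables owner match; the user names and the choice to DROP rather than merely LOG are assumptions to adapt:

        # let root and the web server user keep making outbound port-80 connections
        iptables -A OUTPUT -p tcp --dport 80 -m owner --uid-owner 0 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 80 -m owner --uid-owner apache -j ACCEPT   # web server user; name may differ
        # log, then block, new outbound port-80 connections from everyone else (cron scripts included)
        iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW -j LOG --log-prefix "acct-outbound-80: "
        iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW -j DROP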

    Read the article

  • Google Apps routing to different servers, depending on domain

    - by Philip
    We are investigating Google Apps for Education for our group of schools. Currently each school uses its own Exchange (2003) server, and each school has its own domain, which I have added to Google Apps as additional domains. I would like to start transitioning certain staff and some new pupils over to Google Apps for testing. In this interim phase, I need mail to be routed through Google Apps and then, if no matching mailbox is found, routed on to the individual schools depending on the recipient. I know it is possible to route mail that has no matching Google Apps account to a single server, under "Settings / E-mail Settings / General Settings / Routing / E-mail routing". This works well for a single organisation where all the extra mail is destined for one place. I also know it is possible to set up routes under "Settings / E-mail Settings / Hosts" and then use rules, found under "Settings / E-mail Settings / General Settings / Routing / Receiving Routing"; I can then filter based on e-mail domain and forward to the necessary server. My problem with this, as I understand it, is that it ignores the users who do have Google Apps accounts and sends all mail to the Exchange server. Are there any solutions for this predicament? Many thanks!

    Read the article

  • Always getting the error below on some of my web servers

    - by Vijay
    I am getting the error below on some of my web servers. I don't know what is happening on the server - whether this is a SQL database problem or a web server problem. Please help me work out how to troubleshoot it.

        Message::Save- Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
        at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
        at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
        at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
        at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
        at Norman.Message.Save(Int32 nSiteID, String sBody, Int32 nUserID, String sUserIP)

    Read the article
