Search Results

Search found 17911 results on 717 pages for 'load balancing'.


  • Best pattern to load enumerated values from DAL using WCF RIA Services

    - by Dale Halliwell
    I would like to be able to load several RIA entity sets in a single call, without chaining/nesting several small LoadOperations so that they load sequentially. I have several pages that have a number of comboboxes on them. These comboboxes are populated with static values from a database (for example, status values). Right now I preload these values in my VM with one method that strings together a series of LoadOperations, one for each type I want to load. For example:

        public void LoadEnums()
        {
            context.Load(context.GetMyStatusValues1Query()).Completed += (s, e) =>
            {
                this.StatusValues1 = context.StatusValues1;
                context.Load(context.GetMyStatusValues2Query()).Completed += (s1, e1) =>
                {
                    this.StatusValues2 = context.StatusValues2;
                    context.Load(context.GetMyStatusValues3Query()).Completed += (s2, e2) =>
                    {
                        this.StatusValues3 = context.StatusValues3;
                        // ...and so on
                    };
                };
            };
        }

    While this works fine, it seems a bit nasty. I would also like to know when the last LoadOperation completes, so that I can then load whatever entity I want to work on and have these enumerated values resolve properly in form elements like comboboxes and listboxes. (I think) I can't do this easily above without creating a delegate and calling it on completion of the last LoadOperation. So my question is: does anyone out there know a better pattern, ideally one where I can load all my static entity sets in a single LoadOperation?

    Read the article

  • SQL Server 2008 CPU usage goes to 100% - troubleshooting help needed?

    - by Ali
    I have a fairly powerful database server with SQL Server 2008 R2 installed. There is only one database on it, which is being accessed from 2 servers (around 5/6 applications). The problem is that as soon as the applications start pointing to the database, the system's CPU usage goes up to 100%, with the SQL Server process itself using 95+%. I have checked Profiler; there aren't any heavy queries running there. I have checked active connections; there are hardly 150. Still, CPU usage is around 100%, applications are experiencing slow responses, and connections to the database server are getting refused. Database gurus, I really need some ideas here. Thanks.
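    One starting point is a minimal sketch (in Python with pyodbc, using a hypothetical connection string; the DMV query itself is standard SQL Server) that asks the server which cached statements have consumed the most CPU:

        import pyodbc

        # Hypothetical connection string; adjust driver, server and credentials.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbserver;"
            "DATABASE=master;UID=monitor;PWD=secret"
        )

        # Top cached statements by total CPU time since their plans were compiled.
        query = """
        SELECT TOP 10
            qs.total_worker_time / 1000 AS total_cpu_ms,
            qs.execution_count,
            SUBSTRING(st.text, 1, 200) AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;
        """

        for row in conn.cursor().execute(query):
            print(row.total_cpu_ms, row.execution_count, row.query_text)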

    Read the article

  • NLB RPC server is unavailable on the specified computer

    - by Robin Weston
    Hi guys, Firstly, I'll admit that my networking knowledge is limited, so as people request more information I'll update this question accordingly. I am trying to create an NLB cluster across 2 Windows Server 2008 web servers. Neither of the machines is a member of a domain, and both have 2 NICs (one for processing external web traffic, and one for communicating internally). I have installed NLB on both machines, and have created a cluster on Host A and added Host A itself to it. However, when I try to add Host B (using the address from the external NIC) I get the following error: "The RPC server is unavailable on the specified computer". On Host B I can see that the RPC service is running fine. I can also ping and RDP from Host A to Host B with no problems. I have disabled the Windows firewall on both machines, but that had no effect.

    Read the article

  • Free DNS software with failover support?

    - by Lin
    I'm looking for DNS software that can accomplish the following:
    - Check health of all A records at set intervals
    - If a server is unresponsive after multiple successive checks, replace its A record with a working server
    - When a server is down, check it periodically. Once it's up, restore normal A records

    Here's an equivalent I thought of:
    - Run DNS servers with very low TTL (minutes)
    - Use a cron job to periodically query all webservers
    - Use sed to replace A records if need be, and then restart the DNS server

    I have a hard time believing there isn't already something that can accomplish the above. I'm not looking for a paid service, and I'm restricted to anything I can run with root access to a VPS. Any suggestions would be great. Thanks!
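    A minimal sketch of the cron-style equivalent described above (in Python, assuming a BIND-style zone file, rndc for reloads, and hypothetical server names and addresses); it points a failed server's A record at a working one after several successive failed checks:

        import subprocess
        import time
        import urllib.request

        # Hypothetical servers and zone file path; adjust for your environment.
        SERVERS = {"web1": "203.0.113.10", "web2": "203.0.113.11"}
        ZONE_FILE = "/etc/bind/db.example.com"
        CHECK_INTERVAL = 60          # seconds between health checks
        FAILURES_BEFORE_SWAP = 3     # successive failures before rewriting the record

        def is_healthy(ip):
            """A server counts as healthy if it answers an HTTP request within 5 seconds."""
            try:
                urllib.request.urlopen("http://" + ip + "/", timeout=5)
                return True
            except Exception:
                return False

        def point_record_at(bad_ip, good_ip):
            """Rewrite the failed A record to a working address and reload the zone."""
            with open(ZONE_FILE) as f:
                zone = f.read()
            with open(ZONE_FILE, "w") as f:
                f.write(zone.replace(bad_ip, good_ip))
            subprocess.run(["rndc", "reload"], check=False)

        failures = {name: 0 for name in SERVERS}
        while True:
            healthy_ips = [ip for ip in SERVERS.values() if is_healthy(ip)]
            for name, ip in SERVERS.items():
                if ip in healthy_ips:
                    failures[name] = 0
                else:
                    failures[name] += 1
                    if failures[name] == FAILURES_BEFORE_SWAP and healthy_ips:
                        point_record_at(ip, healthy_ips[0])
            time.sleep(CHECK_INTERVAL)

    Restoring the original record once the failed server comes back (the third requirement) would follow the same pattern in reverse.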

    Read the article

  • red black tree balancing?

    - by Anirudh Kaki
    I am working on generating a tango tree, where I need to check whether every subtree is balanced or not, and if it is not balanced I need to make it balanced. I have been trying hard to balance the entire RB-tree but I can't work out the logic, so can anyone help me out? Here is the code I use to check whether the tree is balanced, but when it is not balanced, how can I make it balanced?

        static boolean verifyProperty5(rbnode n) {
            // Compare the black heights of the left and right subtrees.
            int left = 0, right = 0;
            if (n != null) {
                left = blackHeight(n.left, 0);
                right = blackHeight(n.right, 0);
            }
            if (left == right) {
                System.out.println("black height is :: " + bh);
                return true;
            } else {
                System.out.println("imbalance");
                return false;
            }
        }

        public static int blackHeight(rbnode root, int len) {
            bh = 0;
            blackHeight(root, path1, len);
            return bh;
        }

        private static void blackHeight(rbnode root, int path1[], int len) {
            if (root == null)
                return;
            // Count this node if it is black; colors must be compared with equals(), not ==.
            int parentCount = (root.parent == null) ? 0 : root.parent.black_count;
            if ("black".equals(root.color)) {
                root.black_count = parentCount + 1;
            } else {
                root.black_count = parentCount;
            }
            // Record the black count when a leaf is reached.
            if ((root.left == null) && (root.right == null)) {
                bh = root.black_count;
            }
            blackHeight(root.left, path1, len);
            blackHeight(root.right, path1, len);
        }

    Read the article

  • Can I configure mod_proxy to use different parameters based on HTTP Method?

    - by Graham Lea
    I'm using mod_proxy as a failover proxy with two balancer members. While mod_proxy marks dead nodes as dead, it still routes one request per minute to each dead node and, if it's still dead, will either return 503 to the client (if maxattempts=0) or retry on another node (if it's greater than 0). The backends are serving a REST web service. Currently I have set maxattempts=0 because I don't want to retry POSTs and DELETEs. This means that when one node is dead, each minute a random client will receive a 503. Unfortunately, most of our clients interpret codes like 503 as "everything is dead" rather than "that didn't work, but please try it again". In order to program some kind of automatic retry for safe requests at the proxy layer, I'd like to configure mod_proxy to use maxattempts=1 for GET and HEAD requests and maxattempts=0 for all other HTTP methods. Is this possible? (And how? :)

    Read the article

  • Is there a sensible way of 'teaming' two ADSL connections?

    - by Tim Long
    I work in an office complex that has two separate ADSL connections, which they use to provide two separate networks (actually both ADSL routers go into a Cisco managed switch with two VLANs, one for each ADSL connection). Circumstances have changed so that 95% of the users are all on one ADSL connection. It would be great if there were a way to join both connections together to emulate a single connection at double the speed, but the ISP doesn't support bonding. So, is there a sensible way to take two completely separate ADSL lines and use them to provide a single internet gateway?

    Read the article

  • Balancing a Binary Tree (AVL)

    - by Gustavo Carreno
    Ok, this is another one in the theory realm for the CS guys around. In the 90's I did fairly well in implementing BSTs. The only thing I could never get my head around was the intricacy of the algorithm to balance a binary tree (AVL). Can you guys help me on this?
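    A minimal sketch of the rebalancing step (in Python, with a hypothetical Node class): track each node's height, and let the balance factor pick one of the four rotation cases on the way back up from an insert:

        class Node:
            def __init__(self, key):
                self.key = key
                self.left = None
                self.right = None
                self.height = 1

        def height(n):
            return n.height if n else 0

        def update_height(n):
            n.height = 1 + max(height(n.left), height(n.right))

        def balance_factor(n):
            return height(n.left) - height(n.right)

        def rotate_right(y):
            x = y.left
            y.left = x.right
            x.right = y
            update_height(y)
            update_height(x)
            return x

        def rotate_left(x):
            y = x.right
            x.right = y.left
            y.left = x
            update_height(x)
            update_height(y)
            return y

        def rebalance(n):
            """Restore the AVL invariant at node n; returns the new subtree root."""
            update_height(n)
            bf = balance_factor(n)
            if bf > 1:                              # left-heavy
                if balance_factor(n.left) < 0:      # left-right case
                    n.left = rotate_left(n.left)
                return rotate_right(n)              # left-left case
            if bf < -1:                             # right-heavy
                if balance_factor(n.right) > 0:     # right-left case
                    n.right = rotate_right(n.right)
                return rotate_left(n)               # right-right case
            return n

        def insert(root, key):
            """Ordinary BST insert, rebalancing every node on the path back up."""
            if root is None:
                return Node(key)
            if key < root.key:
                root.left = insert(root.left, key)
            else:
                root.right = insert(root.right, key)
            return rebalance(root)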

    Read the article

  • haproxy access list using path_dir having issues with firefox

    - by user11243
    I'm trying to route all requests containing a path directory of /socket.io/ to a separate port with HAProxy. Here is my config file:

        global
            maxconn 4096 # Total Max Connections. This is dependent on ulimit
            nbproc 2

        defaults
            mode http

        frontend all 0.0.0.0:80
            timeout client 86400000
            default_backend web_servers
            acl is_stream path_dir socket.io
            use_backend stream_servers if is_stream

        backend web_servers
            balance roundrobin
            option forwardfor # This sets X-Forwarded-For
            timeout server 30000
            timeout connect 4000
            server web1 127.0.0.1:4000 weight 1 maxconn 1024 check

        backend stream_servers
            balance roundrobin
            option forwardfor # This sets X-Forwarded-For
            timeout queue 5000
            timeout server 86400000
            timeout connect 86400000
            server stream1 127.0.0.1:5100 weight 1 maxconn 1024 check

    URL paths with /socket.io/ get correctly directed to port 5100 in Chrome and Safari, but not in Firefox. I'm running HAProxy locally on my Mac for dev, not sure if that has anything to do with it. I'm using HAProxy 1.4.8 and Firefox 3.6.15. I've tried clearing the cache in Firefox and it didn't help, so I'm thinking there's something wrong with the way HAProxy parses the Firefox request headers.

    Read the article

  • Updated: NLB 2 Windows Server 2003 Servers - Looking to Hire SysAdmin to solve!

    - by Paul Hinett
    I need to configure Windows NLB on 2 dedicated servers I have. My main machine has been running for some time, with several domain names pointing to the server's primary IP address. Both servers have 2 NICs installed, and both have several secondary public IP addresses available if needed. What IP address would I use for the cluster IP, and does this IP need to be added to the IP address list of both public NICs? What IP addresses do I use for the hosts' dedicated IPs? Please help, this is driving me nuts... I've taken down the server twice by accident today! UPDATE: Looking to hire a Windows SysAdmin to solve this! I have updated my question; I would like to hire a trusted Windows SysAdmin to take care of this for me, preferably today. Can anyone help and provide some credentials please? Thank you in advance!

    Read the article

  • How to set up cluster with SESSION replication in Coldfusion 10?

    - by user3427540
    I am not able to set up a cluster with session replication. I have successfully set up a cluster with sticky sessions. When I googled, I found a lot of links describing the same issue, like http://cfmlblog.adamcameron.me/2012/11/problem-with-session-replication-with.html and https://forums.adobe.com/thread/1238702?start=0&tstart=0. Does deselecting sticky sessions automatically enable session replication? But nowhere did I find a solution. Has anyone solved this problem?

    Read the article

  • Balancing heuristics (for timetable problem)

    - by genesiss
    I'm writing a genetic algorithm for generating timetables. At the moment I'm using these two heuristics:
    - Number of holes between lectures in one day (fewer holes - bigger score)
    - Each hour has some value, so for each timetable I sum the values for the hours when lectures are on (lectures at more appropriate hours - bigger score)

    I want to balance these two heuristics so that the algorithm doesn't favor either one. What would be the best way to achieve this?
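    A minimal sketch (in Python, with hypothetical scoring functions and a hypothetical timetable representation) of one common approach: normalize each heuristic to the 0-1 range over the current population, then combine them with explicit weights so neither dominates purely because of its scale:

        # A timetable here is a hypothetical list of (day, hour) lecture slots.
        HOUR_VALUE = {8: 1, 9: 3, 10: 5, 11: 5, 12: 2, 13: 2, 14: 4, 15: 3, 16: 1}

        def hole_score(timetable):
            """Higher score for fewer holes between lectures on the same day."""
            holes = 0
            for day in {d for d, _ in timetable}:
                hours = sorted(h for d, h in timetable if d == day)
                holes += (hours[-1] - hours[0] + 1) - len(hours)
            return -holes

        def hour_value_score(timetable):
            """Higher score when lectures fall in more appropriate hours."""
            return sum(HOUR_VALUE.get(hour, 0) for _, hour in timetable)

        def normalize(values):
            """Scale raw scores to the 0-1 range over the current population."""
            lo, hi = min(values), max(values)
            if hi == lo:
                return [0.5] * len(values)
            return [(v - lo) / (hi - lo) for v in values]

        def combined_fitness(population, w_holes=0.5, w_hours=0.5):
            """One fitness value per timetable, with explicit weights for each heuristic."""
            holes = normalize([hole_score(t) for t in population])
            hours = normalize([hour_value_score(t) for t in population])
            return [w_holes * a + w_hours * b for a, b in zip(holes, hours)]

        # Example: two candidate timetables, each a list of (day, hour) slots.
        population = [
            [(0, 9), (0, 10), (0, 14)],   # one gap of holes on day 0
            [(0, 9), (0, 10), (0, 11)],   # no holes, slightly less valuable hours
        ]
        print(combined_fitness(population))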

    Read the article

  • ASP.NET MVC multi-instance session management on amazon ec2

    - by gandil
    I have a web application written in ASP.NET MVC 2, currently hosted on Amazon EC2. Because of growing traffic we want to move to a multi-instance environment. I have a custom session class which is currently initiated at session start (global.asax), and I use it via getter/setter classes in the application. Because of the move to multiple instances I have to handle the whole security architecture. I am looking for a better way to handle this problem: a good implementation of sessions and how to apply it in a multi-instance environment on Amazon EC2. What are the roadblocks for the system architecture?

    Read the article

  • Ways to do simple failover with one server and two IPs

    - by CrassHoppr
    The setup is one server (Windows 2008) at one location with two incoming connections. As the server has to interface with various on-site devices, and will have a small number of incoming connections, a data center is not an option, and instead cable/dsl connections must be used. The goal is that users visit https://service.site.com and are sent to either the primary IP address or a secondary IP if the primary is down. I've seen advice to use round robin DNS for this, but caching an IP for a downed interface is something I'd like to avoid. Is something like this possible with these constraints?

    Read the article

  • Apache mod_proxy to another server

    - by trobrock
    I am using the proxy_balancer in Apache2 to proxy requests for a Rails application to my Rails server on the port the application is running on. This is how it's set up...

    Rails Server: Mongrel running on port 8000; when accessing the URL directly at http://rails_server:8000 the site loads fine.

    Apache Server conf file for the site:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName myserver.com
            ServerAlias application.myserver.com

            <Proxy balancer://application_cluster>
                Allow from localhost
                BalancerMember http://ip.to.server:8000 retry=10
            </Proxy>

            ProxyPass / balancer://application_cluster
        </VirtualHost>

    The problem I am having is that going to http://rails_server:8000 works fine, but going to http://application.myserver.com loads the right content, yet displays all the HTML as text instead of rendering it as HTML.

    Read the article

  • Wordpress Installation on Two Servers - Loadbalancing

    - by rihatum
    Hi All, I have to install WordPress (one blog, one domain, e.g. mycompany.com/blog) on two servers sharing one database on a different server; the two web servers are behind a load balancer and the DB is on another server. We are planning this due to high traffic. I have done standalone WordPress installations on a single server, on Windows 2003 and 2008 with IIS 6, IIS 7, etc. I am just researching how I would implement this. What would be the steps to achieve this? While searching I saw some posts about keeping the wp-content/uploads directory synced between the servers at regular intervals. Your help is much appreciated. Thanks for reading.
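    A minimal sketch (in Python, with hypothetical paths and a hypothetical peer share, assuming Windows hosts like those mentioned above) of the periodic one-way sync of wp-content/uploads; a setup where uploads can land on either server would need a shared store or a two-way sync instead:

        import subprocess

        # Hypothetical local path and peer share; adjust for the real servers.
        LOCAL_UPLOADS = r"C:\inetpub\wwwroot\blog\wp-content\uploads"
        PEER_UPLOADS = r"\\WEB2\wwwroot\blog\wp-content\uploads"

        def sync_uploads():
            """Mirror new and changed upload files to the peer server (run from Task Scheduler)."""
            result = subprocess.run(
                ["robocopy", LOCAL_UPLOADS, PEER_UPLOADS, "/MIR", "/R:2", "/W:5"],
                capture_output=True,
            )
            # robocopy exit codes below 8 indicate success (0 = nothing to copy, 1 = files copied).
            if result.returncode >= 8:
                raise RuntimeError(result.stdout.decode(errors="replace"))

        if __name__ == "__main__":
            sync_uploads()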

    Read the article

  • Server speed: sharing one script.php or using many copies the same script.php

    - by Marco Demaio
    Let's assume I have thousands of domains on the same Apache server. Each domain is in a folder under the server's public_html document folder, so it can be accessed by calling "www.somedomain.com" or by calling "www.serverdomain.com/somedomain_folder". In each domain there is a website that needs a certain script.php (identical for each domain). From a coding point of view, it's obvious that it's better to use a single script.php, so that when I update it with new features/bug fixes etc., I only need to update one file on the server and it will work for all domains. But from a server point of view? If I use a single script, all domains will access it at the same time; will the server run slower compared to the situation where each domain calls its own copy of the script?
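    A minimal sketch (in Python, with hypothetical paths) of the single-copy layout: keep one physical script.php outside the domain folders and expose it in each folder through a symlink, so an update still touches only one file:

        import os

        # Hypothetical locations; adjust to the real server layout.
        SHARED_SCRIPT = "/home/shared/script.php"
        PUBLIC_HTML = "/var/www/public_html"

        def link_shared_script():
            """Create a symlink to the shared script in every domain folder that lacks one."""
            for domain_folder in os.listdir(PUBLIC_HTML):
                target_dir = os.path.join(PUBLIC_HTML, domain_folder)
                if not os.path.isdir(target_dir):
                    continue
                link_path = os.path.join(target_dir, "script.php")
                if os.path.islink(link_path):
                    os.remove(link_path)  # refresh an existing link
                if not os.path.exists(link_path):
                    os.symlink(SHARED_SCRIPT, link_path)

        if __name__ == "__main__":
            link_shared_script()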

    Read the article

  • BIG IP - HTTPS Health Monitor setup

    - by djo
    I have a web site for which we have set up health monitoring pages, so we can take our servers in and out of the BIG-IP as we see fit. We have just moved onto BIG-IP, and the issue I have hit is that you set up health monitors for ports 80 and 443; the port 80 check works fine, but when I try to get the 443 check to look at our file, it fails. I am aware that hitting this page on the IP address over HTTPS is going to cause a cert error, but I would have guessed that the BIG-IP would be set up to just accept the cert and carry on with the check. Is what I am wanting to do possible? Also, is there a way of just using an HTTP monitor for HTTPS? Because if port 80 has stopped serving traffic and I use the same monitor for 443, it will stop traffic to that as well. Any help would be great! Thanks

    Read the article

  • PHP on several servers with session-sharing

    - by Etu
    There are certainly other threads about this, but I have one more question. We are about to scale the website at work to more than one server, and we need to share sessions between the servers. We have been looking into different solutions; one is memcached, using Memcached as the session handler in PHP. That will probably work. The idea would be to run memcached on every machine and let all webservers access every other server's memcached instance, and then we have shared sessions between the machines, yay. (We have no resources to set up sticky sessions yet, that's a later project; we need this running, and we need it running now, and we will load balance with DNS for a starter.) But then... if I want to take one server down, say, for maintenance, or a server crashes, or whatever reason, I don't want the users to just lose their sessions and have to start from the beginning... That's why we need some kind of replication, which Memcached does not support. Then I found repcached (http://repcached.lab.klab.org/), which has multi-master replication of memcached, which is great and is what I want. But does it only work with 2 machines, or also 3, 5, 10, for future scaling? I also looked into Redis (http://redis.io/), which also seems great, but is a bit more "shaky" with the PHP session handler support, and has no multi-master replication. The thing is that I like memcached, but I want to be able to power down one of two boxes without losing half of the sessions. Any suggestions?

    Read the article

  • Generated images fail to load in browser

    - by notJim
    I've got a page on a webapp that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached so they are only generated once, but the first time the user visits the page, and the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and images are cached, they all load successfully. Doing some ad-hoc testing, if I load an individual image in the browser, it takes from 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so that after the first request, none of that data needs to be fetched. My questions are: Why are the images failing to load? It seems like the browser just decides not to download the image after a certain point, rather than waiting for them all to finish loading. What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options? As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.
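    A minimal sketch (in Python, with hypothetical URLs) of the cache-warming workaround described above: after the batch process finishes, request each graph once so every image is already generated and cached before a real user loads the page:

        import urllib.request

        # Hypothetical graph endpoints; in practice these would be the 13 generated image URLs.
        GRAPH_URLS = [
            f"http://example.com/graphs/{graph_id}.png" for graph_id in range(1, 14)
        ]

        def warm_image_cache():
            """Fetch every graph once so the app caches it ahead of real visitors."""
            for url in GRAPH_URLS:
                try:
                    urllib.request.urlopen(url, timeout=60).read()
                except Exception as exc:
                    print(f"failed to warm {url}: {exc}")

        if __name__ == "__main__":
            warm_image_cache()  # run this at the end of the batch process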

    Read the article

  • Loadbalancing with nginx and tomcat

    - by London
    Hello, this should be fairly easy to answer for any system admin; the problem is that I'm not a server admin, but I have to complete this task. I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access those by visiting the URLs:

        http://machine1:8080/appName
        http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name (i.e. domain.com), nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be noted):

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;

            location / {
                # needed to forward user's IP address to rails
                proxy_set_header X-Real-IP $remote_addr;
                # needed for HTTPS
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://backend;
            } #end location
        } #end server

    What changes must I make so that when a user visits mydomain.com, they are sent to either machine1:8080/appName or machine2:9090/appName? Thank you

    Read the article

  • Apache mod_remoteip and access logs

    - by GioMac
    Since Apache 2.4 I've started using mod_remoteip instead of mod_extract_forwarded for rewriting the client address from the X-Forwarded-For header provided by frontend servers (Varnish, Squid, Apache, etc.). So far everything works fine with the modules, i.e. PHP, CGI, WSGI, etc. - client addresses are shown as they should be - but I couldn't get the client address written to the access logs (%a, %h, %{c}a). No luck - I'm always getting 127.0.0.1 (the localhost forwarder, for example). How can I log the client's IP address when using mod_remoteip?

    Read the article

  • How can one domain route to an always-changing pool of servers?

    - by ryeguy
    I'm sure this is an easy solution, I'm just not too familiar with how DNS works or if that's even related to this problem. If I'm running a web service on amazon ec2, distributed across many instances, how can I make it so a single domain name can be used to access the entire pool of servers, which will be changing from time to time? Since the instances may be present one second but gone the next (and vice versa), I need a way to randomly pick an active member of the cluster to route to. The updates would have to be instantaneous. Is this even possible, with dns caching and all?
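    A minimal sketch (in Python with boto3, using a hypothetical hosted zone, record name, and instance tag) of one way to keep a single name pointed at whatever instances are currently alive: enumerate the running members and upsert the A record with all of their addresses, keeping the TTL short so cached answers expire quickly:

        import boto3

        # Hypothetical identifiers; substitute the real hosted zone and record name.
        HOSTED_ZONE_ID = "Z123EXAMPLE"
        RECORD_NAME = "api.example.com."
        TTL = 60  # short TTL so clients re-resolve quickly after a change

        ec2 = boto3.client("ec2")
        route53 = boto3.client("route53")

        def current_instance_ips():
            """Public IPs of running instances tagged as pool members (hypothetical tag)."""
            reservations = ec2.describe_instances(
                Filters=[
                    {"Name": "tag:role", "Values": ["web"]},
                    {"Name": "instance-state-name", "Values": ["running"]},
                ]
            )["Reservations"]
            ips = []
            for r in reservations:
                for inst in r["Instances"]:
                    if "PublicIpAddress" in inst:
                        ips.append(inst["PublicIpAddress"])
            return ips

        def update_record(ips):
            """Point the A record at every live instance (simple DNS round robin)."""
            route53.change_resource_record_sets(
                HostedZoneId=HOSTED_ZONE_ID,
                ChangeBatch={
                    "Changes": [{
                        "Action": "UPSERT",
                        "ResourceRecordSet": {
                            "Name": RECORD_NAME,
                            "Type": "A",
                            "TTL": TTL,
                            "ResourceRecords": [{"Value": ip} for ip in ips],
                        },
                    }]
                },
            )

        if __name__ == "__main__":
            update_record(current_instance_ips())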

    Read the article
