Search Results

Search found 22993 results on 920 pages for 'global load balancing'.

  • Interrupted system call during "hg convert"

    - by Aaron Digulla
    When I run "hg convert" to convert a Subversion repository to Mercurial, I get this error: fetching revision log for "/trunk" from 1538 to 0 run hg sink post-conversion action Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 46, in _runcatch return _dispatch(ui, args) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 454, in _dispatch return runcommand(lui, repo, cmd, fullargs, ui, options, d) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 324, in runcommand ret = _runcommand(ui, options, cmd, d) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 505, in _runcommand return checkargs() File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 459, in checkargs return cmdfunc() File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 453, in <lambda> d = lambda: util.checksignature(func)(ui, *args, **cmdoptions) File "/usr/lib/pymodules/python2.6/mercurial/util.py", line 386, in check return func(*args, **kwargs) File "/usr/lib/pymodules/python2.6/hgext/convert/__init__.py", line 229, in convert return convcmd.convert(ui, src, dest, revmapfile, **opts) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 398, in convert c.convert(sortmode) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 312, in convert parents = self.walktree(heads) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 109, in walktree commit = self.cachecommit(n) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 267, in cachecommit commit = self.source.getcommit(rev) File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 433, in getcommit self._fetch_revisions(revnum, stop) File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 814, in _fetch_revisions for entry in stream: File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 122, in __iter__ entry = pickle.load(self._stdout) IOError: [Errno 4] Interrupted system call abort: Interrupted system call Apparently, it is possible to restart a read on EINTR but how would I do that with pickle.load()? Also I wonder where that signal comes from? I suspect it's SIGCHILD but shouldn't popen() handle that?

  • ESX Scheduler and NUMA issue

    - by babyg_wc
    On our 24-core BL685 (4 sockets x 6 cores), we find that NUMA nodes 0 and 1 are pretty busy (unfortunately resulting in elevated CPU ready times on the VMs), whilst NUMA nodes 2 and 3 are almost unused. I thought this might just be an ESX4 U1 issue, so I had a colleague with a 32-core (DL785) farm investigate, and it seems that his last 3 or 4 NUMA nodes are also not really being utilised. ESX seems to have a weakness when it comes to balancing lightly loaded NUMA boxes. I'm going to enable node interleaving in the BIOS and see if the scheduler balances across all 24 cores, instead of just 12! For those of you with large core counts, I would suggest you fire up your VI client and check physical CPU usage (or esxtop); I would be interested to hear what your results are. Please note that it's only lightly loaded hosts (e.g. less than 30% CPU load on the ESX host) that seem to have the biggest issue with load imbalance. Thoughts/comments? PS: I've logged an SR with VMware to assist. The other "problem" could be that we have 128 GB of RAM in each host, and therefore the scheduler sees no good reason why it shouldn't try and cram all VMs into the first two NUMA nodes, as we only have around 50 GB of RAM worth of VMs on each host...

  • Where in the stack are Software Restriction Policies implemented?

    - by Knox
    I am a big fan of Software Restriction Policies for Microsoft Windows and was recently updating our settings for this. I became curious as to where Microsoft implemented this technology in the stack. I can imagine a very naive implementation in Windows Explorer, where double-clicking an exe or other blocked file type would make Explorer check against the policy. I call this naive because obviously it wouldn't protect against someone typing something in a CMD window, or worse, Adobe Reader running an external application. On the other hand, I can imagine that Software Restriction Policies could be implemented deep in the stack, almost at the metal: the low-level loader would load the questionable file into memory, but mark the memory in the memory manager as non-executable data. I'm pretty sure that Microsoft did not do the most naive implementation, because if I block Java using a path rule, Internet Explorer will crash if it attempts to load Java - which is what I want. But I'm not sure how deep in the stack it's implemented, and any insight would be appreciated.
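    One way to probe this empirically (my addition, not the poster's) is to launch a path-blocked executable from a plain script rather than from Explorer; if the launch fails with ERROR_ACCESS_DISABLED_BY_POLICY (winerror 1260), enforcement is happening inside process creation itself, not in the shell. A hedged sketch; the blocked path below is a hypothetical placeholder for whatever your path rule covers:

        import subprocess

        # Hypothetical path covered by an SRP path rule on this machine.
        BLOCKED_EXE = r"C:\Program Files\Java\jre\bin\java.exe"

        try:
            subprocess.call([BLOCKED_EXE, "-version"])
            print("launched - SRP did not intervene at process creation")
        except OSError as e:
            # SRP enforcement in the process-creation path typically surfaces
            # as winerror 1260 (ERROR_ACCESS_DISABLED_BY_POLICY).
            print("blocked at process creation: %s" % e)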

  • Windows XP can use a wired network port, but MacBook (OS X) fails on the same port

    - by Dean Hill
    I wired the Cat5 in my house seven years ago. The wired ports have worked fine with both my Windows XP laptop and MacBook. My wireless network also works fine, but I like to use wired occasionally. One of the Cat5 runs wasn't terminated with a jack, so I recently terminated this wire with a port/jack on the wall end and a standard Cat5 plug on the end that plugs into my router. This is the same setup as my other runs. Unfortunately, the MacBook isn't working well with the new wired port. The OS X Network System Preferences show the IP, Subnet, Router, etc., and everything looks fine. A "netstat -ibd" shows no errors or dropped packets. However, when I open a page in Safari, the status says "Contacting 'www.google.com'" and appears to hang. If I wait for a couple minutes, part of the Google page starts to display, but it is still not the full page load. When I use a Windows XP laptop on the same wired port, everything works fine. An internet speed test shows good results and all web pages load fine. A "netstat -e" under Windows shows no errors. I've used a Cat5 tester, and the cable tests fine (wires 1-8 light up in sequence). I've replaced both the port/jack and the connector twice to make sure I wired things correctly. I'd really like this Cat5 to work with the MacBook (and I'm trying to avoid running a new length of cable). Any ideas what the problem could be?

  • How to restrict postfix to sending limited email with policyd v2?

    - by Shalini Tripathi
    I have installed cluebringer-2.0.7 for postfix and enabled the lines below in postfix's main.cf file, but I cannot see any policy working:

        smtpd_recipient_restrictions =
            permit_mynetworks
            permit_sasl_authenticated
            reject_unauth_destination
            check_policy_service inet:127.0.0.1:10031
        smtpd_end_of_data_restrictions = check_policy_service inet:127.0.0.1:10031

    To check further I enabled logging in policyd. It only shows the lines below, and no new entries get logged when I send new emails:

        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: Process Backgrounded
        [2012/06/12-21:18:50 - 13949] [CBPOLICYD] NOTICE: Policyd v2 / Cluebringer - v2.0.7
        [2012/06/12-21:18:50 - 13949] [CBPOLICYD] NOTICE: Initializing system modules.
        [2012/06/12-21:18:50 - 13949] [CBPOLICYD] NOTICE: System modules initialized.
        [2012/06/12-21:18:50 - 13949] [CBPOLICYD] NOTICE: Module load started...
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = AccessControl: enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = CheckHelo: enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = CheckSPF: enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = Greylisting: enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = Quotas: enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = Protocol(Postfix): enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = Protocol(Bizanga): enabled
        [2012/06/12-21:18:50 - 13949] [CBPOLICYD] NOTICE: Module load done.
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: 2012/06/12-21:18:50 cbp (type Net::Server::PreFork) starting! pid(13949)
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: Binding to TCP port 10031 on host *
        [2012/06/12-21:18:50 - 13949] [CORE] WARNING: Group Not Defined. Defaulting to EGID '0 10 6 4 3 2 1 0'
        [2012/06/12-21:18:50 - 13949] [CORE] WARNING: User Not Defined. Defaulting to EUID '0'

    Do I need any more settings for postfix to talk to policyd? Please help.

  • Dual boot Windows 8 Pro and Windows 7 on XPS 8500 Special Edition

    - by Jesse
    I am trying to install a dual boot with Windows 7 Premium and Windows 8 Pro on an XPS 8500 Special Edition. I created a new primary partition on my C: drive, inserted the Windows 8 install disk, and rebooted my computer from DVD. I select custom install, and the dialog asking "Where do you want to install Windows?" pops up, but none of my drives are listed. Please help me determine what is going on; I don't understand why none of my drives show up on this menu, not even the original drive. When I go to "Load driver" and click on the partition I created, it tells me "No signed device drivers were found. Make sure the installation media contains the correct drivers, and then click OK."

    Update: I resolved the above issue by running setup from the source folder on the install disk instead of booting from DVD, and was able to locate my new partition and start the install. It completes the first step, "Copying Windows files", just fine, but on the next step, "Getting files ready for installation", my computer restarts and attempts to load Windows 8, then keeps telling me my PC needs to restart. This goes on in an infinite boot loop. Please help, this has been a nightmare!

  • Why is the DNS on my Windows Server 2012 not authoritative according to dig?

    - by tetranz
    This is me trying to understand something rather than a real problem. I have a new Windows Server 2012 Essentials. That server provides DNS, DHCP, etc. Let's say my Windows domain is my-windows-domain and the server's host name is my-server. The domain's DNS zone is my-windows-domain.local. The server's IP address is 192.168.1.5. This is what I get if I go to a Linux machine on our LAN and do dig my-server.my-windows-domain.local @192.168.1.5:

        ; <<>> DiG 9.9.5-3-Ubuntu <<>> my-server.my-windows-domain.local @192.168.1.5
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6003
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

        ;; OPT PSEUDOSECTION:
        ; EDNS: version: 0, flags:; udp: 4000
        ;; QUESTION SECTION:
        ;my-server.my-windows-domain.local. IN A

        ;; ANSWER SECTION:
        my-server.my-windows-domain.local. 3600 IN A 192.168.1.5

        ;; Query time: 0 msec
        ;; SERVER: 192.168.1.5#53(192.168.1.5)
        ;; WHEN: Wed Jun 11 10:44:28 EDT 2014
        ;; MSG SIZE rcvd: 73

    I think that all looks okay, except: why is it AUTHORITY: 0? Shouldn't this be the authority for the my-windows-domain.local domain? dig soa my-windows-domain.local comes back with:

        ; <<>> DiG 9.9.5-3-Ubuntu <<>> soa my-windows-domain.local
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29822
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2

        ;; OPT PSEUDOSECTION:
        ; EDNS: version: 0, flags:; udp: 4000
        ;; QUESTION SECTION:
        ;my-windows-domain.local. IN SOA

        ;; ANSWER SECTION:
        my-windows-domain.local. 3600 IN SOA my-server.my-windows-domain.local. hostmaster.my-windows-domain.local. 101 900 600 86400 3600

        ;; ADDITIONAL SECTION:
        my-server.my-windows-domain.local. 3600 IN A 192.168.1.5

        ;; Query time: 1 msec
        ;; SERVER: 192.168.1.5#53(192.168.1.5)
        ;; WHEN: Wed Jun 11 10:51:17 EDT 2014
        ;; MSG SIZE rcvd: 120

    I know about the recommendation not to use .local, but there was no other option when I installed the server; I was just following the wizards.
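    Worth noting (my comment, not the poster's): the aa in "flags: qr aa rd ra" is what marks the answer as authoritative; AUTHORITY: 0 only means no records were returned in the authority section, which is normal when the answer section already carries the data. A small sketch using the third-party dnspython package to check the AA bit directly, with the placeholder names from the question:

        import dns.flags
        import dns.message
        import dns.query

        # Placeholders from the post: the server's address and a name in its zone.
        SERVER = "192.168.1.5"
        NAME = "my-server.my-windows-domain.local"

        query = dns.message.make_query(NAME, "A")
        response = dns.query.udp(query, SERVER, timeout=5)

        # The AA bit, not the AUTHORITY record count, signals an authoritative answer.
        print("authoritative: %s" % bool(response.flags & dns.flags.AA))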

  • 3 Servers, is this a cluster?

    - by Andy Barlow
    Hello. At the moment I have one Ubuntu 9.10 server running a simple Samba share, a mail server, a DNS server and a DHCP server; mostly it's just there for file sharing and email. I also have 2 other servers with exactly the same hardware and spec as the first, which have an rsync set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. I also tend to find that if people are downloading a large amount from the file server, no-one can access their email - especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, in some sort of cluster with load balancing? I'm not really sure where to begin looking, or how to go about a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

  • Nginx multiple upstream servers on the same domain via different URLs

    - by Barry
    Hello. I am trying to route traffic to different upstream servers (that serve different applications; this is not for load balancing). The incoming traffic has the same domain name but different URLs. Here is an example of my configuration:

        http {
            upstream backend1 {
                server 127.0.0.1:8080 fail_timeout=0;
                server 127.0.0.1:8081 fail_timeout=0;
            }
            upstream backend2 {
                server 127.0.0.1:8090 fail_timeout=0;
                server 127.0.0.1:8091 fail_timeout=0;
            }
            server {
                listen 80;
                server_name my_server.com;
                root /home/my_server;
                location /serve_me {
                    fastcgi_pass backend1;
                    include fastcgi_params;
                }
                location / {
                    fastcgi_pass backend2;
                    include fastcgi_params;
                }
            }
        }

    It seems that whatever traffic comes in (including "my_server.com/serve_me") goes to backend2. How do I make queries that start with /serve_me go to backend1? Thanks, Barry.

  • uWSGI cannot find "application" using Flask and Virtualenv

    - by skyler
    Using uWSGI to serve a simple WSGI app (a simple "Hello, World"), my configuration works, but when I try to run a Flask app, I get this in uWSGI's error logs:

        current working directory: /opt/python-env/coefficient/lib/python2.6/site-packages
        writing pidfile to /var/run/uwsgi.pid
        detected binary path: /opt/uwsgi/uwsgi
        setuid() to 497
        your memory page size is 4096 bytes
        detected max file descriptor number: 1024
        lock engine: pthread robust mutexes
        uwsgi socket 0 bound to TCP address 127.0.0.1:3031 fd 3
        Python version: 2.6.6 (r266:84292, Jun 18 2012, 14:18:47) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)]
        Set PythonHome to /opt/python-env/coefficient/
        *** Python threads support is disabled. You can enable it with --enable-threads ***
        Python main interpreter initialized at 0xbed3b0
        your server socket listen backlog is limited to 100 connections
        *** Operational MODE: single process ***
        added /opt/python-env/coefficient/lib/python2.6/site-packages/ to pythonpath.
        unable to find "application" callable in file /var/www/coefficient/flask.py
        unable to load app 0 (mountpoint='') (callable not found or import error)
        *** no app loaded. going in full dynamic mode ***
        *** uWSGI is running in multiple interpreter mode ***

    Note in particular this part of the log:

        unable to find "application" callable in file /var/www/coefficient/flask.py
        unable to load app 0 (mountpoint='') (callable not found or import error)
        *** no app loaded. going in full dynamic mode ***

    This is my Flask app:

        from flask import Flask
        app = Flask(__name__)

        @app.route("/")
        def hello():
            return "Hello, World, from Flask!"

    Before I added my virtualenv's pythonpath to my configuration file, I was getting an ImportError for Flask. I believe I solved that (I'm not receiving errors about it anymore), and here is my complete configuration file:

        uwsgi:
            #socket: /tmp/uwsgi.sock
            socket: 127.0.0.1:3031
            daemonize: /var/log/uwsgi.log
            pidfile: /var/run/uwsgi.pid
            master: true
            vacuum: true
            #wsgi-file: /var/www/coefficient/coefficient.py
            wsgi-file: /var/www/coefficient/flask.py
            processes: 1
            virtualenv: /opt/python-env/coefficient/
            pythonpath: /opt/python-env/coefficient/lib/python2.6/site-packages

    This is how I start uWSGI, from an rc script:

        /opt/uwsgi/uwsgi --yaml /etc/uwsgi/conf.yaml --uid uwsgi

    And if I try to view the Flask program in a browser, I get this:

        **uWSGI Error**
        Python application not found

    Any help is appreciated.
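    Two things stand out here (my observation, not stated in the post): uWSGI looks for a module-level callable named "application" by default, and a file named flask.py can shadow the flask package itself when Python imports it. A minimal sketch of a WSGI file that avoids both, assuming the file is renamed to something like wsgi.py and the uwsgi config's wsgi-file is updated to match:

        # wsgi.py -- renamed from flask.py so that "from flask import Flask"
        # resolves to the installed Flask package, not to this file.
        from flask import Flask

        app = Flask(__name__)

        @app.route("/")
        def hello():
            return "Hello, World, from Flask!"

        # uWSGI looks for a callable named "application" unless configured
        # otherwise (e.g. with a "callable: app" option), so alias it here.
        application = app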

  • Win2k8R2 / IIS 7.5 - users getting 503 response, no 503 error reported in logs

    - by merk
    I've got 2 web servers with mirrored content and a load balancer sitting in front of them. Starting yesterday we've been getting people complaining about 503 errors, but I can't find any 503 errors in the IIS log file. However, the server host is saying these errors are due to .NET errors in our website which are causing the app pool to recycle. They pointed out several errors in the Windows application event log which look like this:

        Log Name:      Application
        Source:        ASP.NET 4.0.30319.0
        Date:          3/31/2012 8:35:37 PM
        Event ID:      1309
        Task Category: Web Event
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      6251.local
        Description:
        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 3/31/2012 8:35:37 PM
        Event time (UTC): 4/1/2012 1:35:37 AM
        Event ID: e7a580c7b38545cca3416a8595408f24
        Event sequence: 97
        Event occurrence: 1
        Event detail code: 0

        Application information:
        Application domain: /LM/W3SVC/2/ROOT-1-129777167518960645
        Trust level: Full
        Application Virtual Path: /
        Application Path: C:\inetpub\wwwroot\mywebsite\
        Machine name: 6252

        Process information:
        Process ID: 20000
        Process name: w3wp.exe
        Account name: IIS APPPOOL\MyAppPool

    In particular, they are saying that the account name under "Process information" indicates that the app pool is recycling; if the app pool were not recycling, the account name would instead be the folder where the website files are located. I checked the app pool settings: it's set to recycle every 29 hours, and rapid-fail protection is set to the default of 5 failures in 5 minutes. But I have not seen 5 failures in the event log in that short a time span. Can anyone help me confirm whether the 503 responses are indeed being generated by the app pools recycling? Or are these errors coming from somewhere else? My guess at the time was that their load balancer was the one actually returning the 503 error. But that was just a guess.

  • multi-user rvm gem install failure when called from CloudFormation::Init

    - by Peter Mounce
    I've taken an Amazon Linux AMI (based on CentOS) and installed RVM (1.10.3) to it in multi-user fashion (see {1} below). I used that to install ruby 1.9.3-p125, rubygems 1.8.17, and bundler 1.1 as the baseline requirements for most things I'm going to be using the instances for. I've captured that instance to an AMI, and am now launching it via CloudFormation, with some CloudFormation::Init commands. One of them is to use s3cmd to pull down a private gem from S3, and the next one, the one that fails, is to install that gem. It fails with this error message:

        2012-03-15 16:53:20,201 [ERROR] Command 20_install_gems (/usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem install ./*.gem) failed
        2012-03-15 16:53:20,202 [DEBUG] Command 20_install_gems output: /usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem:12:in `require': no such file to load -- rubygems (LoadError)
        from /usr/local/rvm/rubies/ruby-1.9.3-p125/bin/gem:12

    Now, that happens during the cfn-init execution. I assume, but haven't checked yet, that cfn-init is being run with an environment different from that of ec2-user (there are no other users on the instance). If I run gem install mygem.gem in an interactive session then it works fine. So my question really is: what should I do to make this work for cfn-init? Have I correctly set up RVM as multi-user? I've confirmed that cfn-init is being run as the root user, with its restricted environment. How should I source /etc/profile.d/rvm.sh into root's sessions?

    {1} My semi-automated RVM installation steps (run in an interactive session as ec2-user):

        sudo bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer )
        sudo gpasswd -a ec2-user rvm
        # iconv-devel is baked into centos' glibc
        sudo yum install -y autoconf automake bison bzip2 gcc-c++ git libffi-devel libtool libxml2-devel libxslt-devel libyaml-devel make openssl-devel patch readline readline-devel zlib zlib-devel
        source /etc/profile.d/rvm.sh
        rvm list known
        # in a new session:
        rvm install ruby-1.9.3-p125
        rvm use 1.9.3 --default
        gem update --system
        # gems required by public_web-awareness
        gem install aws-sdk bundler cocaine sinatra
        echo -e "gem: --no-ri --no-rdoc\n" > /home/ec2-user/.gemrc
        # delete unnecessary documentation files
        rm -rf `gem env gemdir`/doc
        sudo -s
        sudo echo -e "gem: --no-ri --no-rdoc\n" > /etc/skel/.gemrc
        sudo echo -e "gem: --no-ri --no-rdoc\n" > /etc/gemrc
        # ctrl + d out of the sudo session

    Some environment information:

        [ec2-user@ip ~]$ echo $PATH
        /usr/local/rvm/gems/ruby-1.9.3-p125/bin:/usr/local/rvm/gems/ruby-1.9.3-p125@global/bin:/usr/local/rvm/rubies/ruby-1.9.3-p125/bin:/usr/local/rvm/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/aws/bin:/home/ec2-user/bin
        [ec2-user@ip ~]$ echo $GEM_HOME
        /usr/local/rvm/gems/ruby-1.9.3-p125
        [ec2-user@ip ~]$ echo $GEM_PATH
        /usr/local/rvm/gems/ruby-1.9.3-p125:/usr/local/rvm/gems/ruby-1.9.3-p125@global
        [ec2-user@ip ~]$ echo $BUNDLE_PATH

        [ec2-user@ip ~]$ gem list

        *** LOCAL GEMS ***

        aws-sdk (1.3.6)
        bundler (1.1.0)
        cocaine (0.2.1)
        httparty (0.8.1)
        json (1.6.5)
        multi_json (1.1.0)
        multi_xml (0.4.1)
        nokogiri (1.5.1, 1.5.0)
        rack (1.4.1)
        rack-protection (1.2.0)
        rake (0.9.2)
        sinatra (1.3.2)
        tilt (1.3.3)
        uuidtools (2.1.2)
        yamler (0.1.0)

  • ajax.googleapis.com stopping my Firefox

    - by Oscar Reyes
    Today, for some strange reason, Firefox stopped working properly because it is trying to fetch something from ajax.googleapis.com. Is there something I can do to avoid this? Safari and Chrome work just fine. I tried uninstalling Firebug and clearing the cache. The only thing that worked was disabling JavaScript altogether. This seems to be the culprit link:

        http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js

    What can I do?

    EDIT: I think I have found where the problem is. My proxy is serving the file one byte at a time, so Firefox consumes it at that pace. What I don't understand is why Safari and Chrome take it right away. What I did last night was leave Firefox open all night to give it a chance to load the file; my hope was that it would get cached, so the next time there would be no need to fetch it. This morning the page loaded successfully, but it was not cached, because the next request failed in the same way. Here's a video showing the problem:
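    A quick way to test the "one byte at a time" theory (my suggestion, not part of the original post) is to time the first byte against the full body; run it once with and once without the proxy configured. A minimal sketch using only Python 3's standard library:

        import time
        import urllib.request

        URL = "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"

        start = time.time()
        resp = urllib.request.urlopen(URL, timeout=60)
        first = resp.read(1)   # time to first byte
        ttfb = time.time() - start
        rest = resp.read()     # drain the rest of the body
        total = time.time() - start

        print("first byte after %.2fs, %d bytes total after %.2fs"
              % (ttfb, len(first) + len(rest), total))

    If the total time scales with the file size while the first byte arrives quickly, the proxy is trickling the body, which would match the observed behaviour.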

  • Apache2 slow serving static files while healthy

    - by user45339
    My Apache status looks like:

        201 requests/sec - 98.8 kB/second - 504 B/request
        85 requests currently being processed, 345 idle workers

        _____CCW_C_____C__C__C_R____C_WC_________C__C____CW__C__CCC_____
        __C____W______C___C___CW__C_C______C__W_C__C_____CCC____C______R
        CC_C_______C___C____C______________C______C__C________________C_
        ___________________C______________________C_______C___C_____C___
        CC____C__C___R_____C_C_CC__________C___C___________R____C_C_C___
        ______C______W_W__W___C____________________C__WCC__R__R_C_______
        R__RC________________________C___R____W__C____..................
        ....................................................

    Server load averages 2 on a 4-core machine. I/O utilization is 10-15% and rarely jumps over 70%. The machine has almost 4 GB free and uses 0 swap. The site on the machine is a PHP site. All PHP code is optimized and fast when it gets accessed, but sometimes requests get stuck - meaning no response for at least 10 seconds. We debugged the PHP code, but it is quite optimal and fast. We spent a lot of time on it, until we decided to test requesting a static test.html page containing just:

        <html><body>test</body></html>

    This static resource also gets "stuck" in the same manner the PHP pages get "stuck". How is this possible, given the health of the system? I tested the network, but when the site monitoring shows PHP slowness, the html test files also take far longer than 10 seconds to load using:

        time lynx -dump http://127.0.0.1/test.html

    We are kind of desperate to solve this problem, but we cannot seem to tackle it.
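    One way to narrow this down (my suggestion, not the poster's) is to poll the static file continuously from the box itself and log timestamps of slow responses, so the stalls can be correlated with the Apache scoreboard and system metrics at that exact moment. A minimal sketch:

        import time
        import urllib.request

        URL = "http://127.0.0.1/test.html"  # the static test page from the post
        THRESHOLD = 10.0                    # seconds; matches the observed stalls

        while True:
            start = time.time()
            try:
                urllib.request.urlopen(URL, timeout=60).read()
                elapsed = time.time() - start
                if elapsed > THRESHOLD:
                    print("%s slow response: %.1fs" % (time.ctime(start), elapsed))
            except Exception as e:
                print("%s request failed: %s" % (time.ctime(start), e))
            time.sleep(1)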

  • Router recommendation to virtualize 800 IPs

    - by delerious010
    I've recently been looking at getting some new load balancers for our environment, as we are expecting to double our client base in the next 12 months. Currently we have 400 public IPs serving 800 clusters (2 clusters/IP due to ports) on Coyote Point balancers, distributing connections to 3 web servers serving about 6 GB outgoing and 2 GB incoming per day. If we double, this would be about 800 IPs, possibly 1600 clusters, and about 6 servers per cluster (for a total of 9600 so-called "real servers", to use Barracuda's lingo). Due to the number of clusters, most solutions I've looked at (Coyote, Barracuda, Loadbalancer.org) seem unsure whether they'll be able to handle our planned growth, mostly due to the health checks performed on the servers... which makes total sense when you think of it. So the fine folk at loadbalancer.org recommended that we may be better off offloading the 400-800 public IPs, which we require for SSL eCommerce solutions, onto a forward-facing router. From that point on, the router could do some mangling to route EXT_IP:443 to INT_IP:INT_PORT, which would then allow us to reduce the load balancer configuration to 1 or 2 clusters, thus resolving the health check problem. Does this idea make sense to y'all? Or would you have other recommendations to make? Secondly, what router would you recommend for such an undertaking? I'd be looking at something that has some form of failover mechanism built in. On a totally unrelated note, I've got to admit that I'm extremely pleased with the responses I got from loadbalancer.org. Their responses to my inquiries were surprisingly helpful (i.e. I didn't feel as if I was talking to a sales guy trying to push something). (No, I don't work for them, and sadly nor are they sending me free gear.)

  • All PHP sites stopped working on IIS7, internal server error 500

    - by TimothyP
    I installed multiple Drupal 7 sites using the Web Platform Installer on Windows Server 2008. Until now they worked without any problems, but recently internal server error 500 started to show up (once every so many requests); now it happens for all requests to any of the PHP sites. There's not much detail to go on, and nothing changed between the time when it was working and now (well, nothing I know of anyway). The log file is flooded with messages such as:

        [09-Aug-2011 09:08:04] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:16] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:16] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:20] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:22] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:08:51] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:09:56] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:09:57] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:12:13] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:15:09] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:15:09] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:21:28] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0
        [09-Aug-2011 09:21:28] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0

    I have tried increasing the memory limit in php.ini as such:

        memory_limit = 512MB

    but that doesn't seem to solve the problem either. (This is in the global PHP configuration in IIS.) When I looked at the sites one by one, I noticed that PHP seemed to have been disabled:

        PHP is not enabled. Register new PHP version to enable PHP via FastCGI

    So I tried to register the PHP version again (C:\Program Files\PHP\v5.3\php-cgi.exe), but when I try to apply the changes I get:

        There was an error while performing this operation
        Details: Operation is not valid due to the current state of the object

    There doesn't seem to be any other information than that. I have no idea why all of a sudden PHP isn't available for the sites anymore. PS: I have rebooted IIS, the server, etc. This server is hosted on Amazon EC2, so I gave the server some more power.

    Update: These seem to be two different issues. First, I had used memory_limit=128MB instead of memory_limit=128M - notice the "M" instead of "MB". Second, a memory_limit of 128M was not enough; I had to increase it to 512M. The first issue caused internal server errors for every request. Increasing the limit to 512M seemed to solve the problem for a little while, but after a while the server errors returned. Note that the PHP Manager inside IIS still shows there is no PHP available for the sites (the global config does see it as available), so the problem remains unsolved.

  • Regular Windows 7 BSOD with Shrew VPN client

    - by Junto
    The Shrew VPN client appears to be a good alternative to the Cisco VPN client on x64 Windows 7. However, since installing it I've seen fairly regular BSODs. Minidump attached:

        Microsoft (R) Windows Debugger Version 6.11.0001.404 AMD64
        Copyright (c) Microsoft Corporation. All rights reserved.

        Loading Dump File [D:\Temp\020810-23431-01.dmp]
        Mini Kernel Dump File: Only registers and stack trace are available
        Symbol search path is: SRV*d:\symbols*http://msdl.microsoft.com/download/symbols
        Executable search path is:
        Windows 7 Kernel Version 7600 MP (8 procs) Free x64
        Product: WinNt, suite: TerminalServer SingleUserTS
        Built by: 7600.16385.amd64fre.win7_rtm.090713-1255
        Machine Name:
        Kernel base = 0xfffff800`0285f000 PsLoadedModuleList = 0xfffff800`02a9ce50
        Debug session time: Mon Feb 8 18:08:12.887 2010 (GMT+1)
        System Uptime: 0 days 7:52:06.120
        Loading Kernel Symbols ...
        Loading User Symbols
        Loading unloaded module list ...
        *******************************************************************************
        *                                                                             *
        *                        Bugcheck Analysis                                    *
        *                                                                             *
        *******************************************************************************

        Use !analyze -v to get detailed debugging information.

        BugCheck A, {0, 2, 0, fffff800028d50b6}

        Unable to load image \SystemRoot\system32\DRIVERS\vfilter.sys, Win32 error 0n2
        *** WARNING: Unable to verify timestamp for vfilter.sys
        *** ERROR: Module load completed but symbols could not be loaded for vfilter.sys
        Probably caused by : vfilter.sys ( vfilter+29a6 )

        Followup: MachineOwner

    The machine is a brand spanking new Dell Precision T5500. Super User appears to have several recommendations for the Shrew VPN client as an alternative to the Cisco VPN client on 64-bit machines, so I wondered if anyone here has seen this problem and possibly found a solution? I've decided to run the VPN client under Windows 7 compatibility mode for the moment (Vista SP2) with administrator privileges to see if it makes a difference. Oddly, the VPN doesn't necessarily need to be connected: I've noticed it when browsing using Google Chrome and Internet Explorer, usually when I open up a new tab. If it carries on, I think I'll be forced to shell out the 120 EUR for the NCP client instead.

  • Unable to understand why Alfresco doesn't start on Tomcat

    - by Infernalsirius
    Hi all, I have a problem that I've been looking into for a while now, googling and everything, but I cannot begin to understand it. I'm really not used to Java, even less Tomcat. So here it is. First, the setup: CentOS 5.3 on a virtualized server, Bitnami native Alfresco stack (Tomcat 5.5, MySQL 5, Java, Java JDK, JDBC). Content of catalina.log, since it's the shortest and where I found my first clue to what is going wrong:

        SEVERE: Error listenerStart
        Aug 27, 2009 5:32:58 PM org.apache.coyote.http11.Http11BaseProtocol init
        INFO: Initializing Coyote HTTP/1.1 on http-8080
        Aug 27, 2009 5:32:58 PM org.apache.catalina.startup.Catalina load
        INFO: Initialization processed in 229 ms
        Aug 27, 2009 5:32:58 PM org.apache.catalina.core.StandardService start
        INFO: Starting service Catalina
        Aug 27, 2009 5:32:58 PM org.apache.catalina.core.StandardEngine start
        INFO: Starting Servlet Engine: Apache Tomcat/5.5.25
        Aug 27, 2009 5:32:58 PM org.apache.catalina.core.StandardHost start
        INFO: XML validation disabled
        Aug 27, 2009 5:34:47 PM org.apache.catalina.core.StandardContext start
        SEVERE: Error listenerStart
        Aug 27, 2009 5:34:47 PM org.apache.catalina.core.StandardContext start
        SEVERE: Context [/alfresco] startup failed due to previous errors
        Aug 27, 2009 5:34:48 PM org.apache.coyote.http11.Http11BaseProtocol start
        INFO: Starting Coyote HTTP/1.1 on http-8080
        Aug 27, 2009 5:34:48 PM org.apache.jk.common.ChannelSocket init
        INFO: JK: ajp13 listening on /0.0.0.0:8009
        Aug 27, 2009 5:34:48 PM org.apache.jk.server.JkMain start
        INFO: Jk running ID=0 time=0/11 config=null
        Aug 27, 2009 5:34:48 PM org.apache.catalina.storeconfig.StoreLoader load
        INFO: Find registry server-registry.xml at classpath resource
        Aug 27, 2009 5:34:48 PM org.apache.catalina.startup.Catalina start
        INFO: Server startup in 110327 ms
        Aug 27, 2009 5:38:27 PM org.apache.coyote.http11.Http11BaseProtocol pause
        INFO: Pausing Coyote HTTP/1.1 on http-8080
        Aug 27, 2009 5:38:28 PM org.apache.catalina.core.StandardService stop
        INFO: Stopping service Catalina
        Aug 27, 2009 5:38:29 PM org.apache.coyote.http11.Http11BaseProtocol destroy
        INFO: Stopping Coyote HTTP/1.1 on http-8080

    Here's the content of catalina.out; it seems to be a stack trace or application trace of the error, is that right? Catalina.out gist on github. There is a 404 error telling me this: "The requested resource (/alfresco/) is not available." This is it, I think.

  • Windows Explorer slow to open networked computer, fast to navigate once opened

    - by Scott Noyes
    I open Windows Explorer and enter an IP for a computer on my home network (\\192.168.1.101). It takes 30 seconds or more to present a list of the shared folders. It does not appear to be an initial handshaking/authentication thing; even if I allow the view to load and then immediately load the same again, it is always slow. Once they appear, navigating through folders and opening files is fast. Also, navigating directly to a folder (\\192.168.1.101\My Music) is fast, even if it's the first connection since a restart. Using \\computerName instead of the IP address gives exactly the same results. Pings return in 1ms. net view \\computerName (or \ipAddress) returns the list of shared folders fast. This makes me suspect an Explorer issue rather than a network issue. Suspecting that the remote computer was being automatically indexed or something, I went into Tools-Folder Options-View and unchecked "Automatically search for network folders and printers," but that made no difference. De-selecting the "Folders" icon near the address bar makes no difference. Adding the IP address and computer name to the hosts file makes no difference. Both computers involved are laptops running Windows XP. Both have WiFi and cable adapters. Mine is not connected via cable. The result is the same whether the target is plugged in to the cable or not (although the IP address changes - 192.168.1.101 over cable, 192.168.1.103 over WiFi.) We are using DHCP assigned by the router.
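    To separate Explorer from the underlying SMB path (my suggestion, not from the post), one could time a raw directory listing of a share from Python; if this is consistently fast while Explorer's top-level view is slow, the delay is in Explorer's extra discovery work rather than in SMB itself. A minimal sketch using the share named in the question (note that UNC host roots like \\192.168.1.101 can't be listed directly, only shares):

        import os
        import time

        # A share on the remote machine, as used in the post.
        SHARE = r"\\192.168.1.101\My Music"

        start = time.time()
        entries = os.listdir(SHARE)
        print("%d entries in %.2fs" % (len(entries), time.time() - start))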

  • Restricting memory area for the Linux kernel

    - by user1066789
    I am running LTIB Linux on a P1022RDK (P1022 core) platform. I have 512 MB = 0x20000000 of memory. I want my Linux kernel to use the second half of the board memory (i.e. from 256 MB to 512 MB) and want the first half of memory to be reserved for some other purpose. For this I am building the kernel using LTIB with the following kernel configuration; please suggest if I am doing it the right way:

        CONFIG_LOWMEM_SIZE = 0x10000000      # 256 MB
        CONFIG_PHYSICAL_START = 0x10000000   # starting from 256 MB (second half of memory)

    In U-Boot I am loading the kernel the following way:

        setenv loadaddr 0x11000000   # kernel base = 0x10000000 + 0x01000000 (offset)
        setenv fdtaddr 0x10c00000    # kernel base = 0x10000000 + 0x00c00000 (offset)
        bootm $loadaddr - $fdtaddr

    My kernel load address is 0x10000000 and the kernel entry point is 0x10000000. With the above configuration/steps, my kernel gets stuck at the following point in U-Boot:

        ## Booting kernel from Legacy Image at 11000000 ...
           Image Name:   Linux-2.6.32.13
           Image Type:   PowerPC Linux Kernel Image (gzip compressed)
           Data Size:    3352851 Bytes = 3.2 MB
           Load Address: 10000000
           Entry Point:  10000000
           Verifying Checksum ... OK
        ## Flattened Device Tree blob at 10c00000
           Booting using the fdt blob at 0x10c00000
           Uncompressing Kernel Image ... OK

    (It should uncompress the FDT here and continue.) Any thoughts?

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try, based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM, I get errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with this error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild module dependencies on the virtual machine with depmod -a, but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and rerunning "depmod -a". I now don't get the dependency error, but I get the following instead, and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that there are no such issues with the CentOS template, so I understand it is to do with the VM config. Any help would be greatly appreciated.

  • How can I debug solutions in Visual Studio 2010 from a network share?

    - by alastairs
    I've recently got a new Mac laptop and am running VS2010 in a Parallels virtual machine. It's mostly working out well for me, but I'm having some problems with debugging specific project types, related to the fact that the projects are being accessed via a network share. Test projects don't run because the test runner can't load the tests' DLL. Web projects fail to run in the Visual Studio mini web server, throwing the following exception: 'An error occurred loading a configuration file: Failed to start monitoring changes to path\to\web.config'. I've spent the evening trawling the web with little luck on this. After reading these two posts, I tried out the usual CasPol changes, but then found this post from one of the early VS2010 betas indicating that CasPol is no longer needed/supported in .NET 4.0 and VS2010. The network share is accessible via both a mapped drive and the UNC path. The virtual machine runs its applications under the administrator account, which appears to have all the necessary permissions on the network share to create, read, write and delete files and folders. I say "appears to have" as I can't view the Security Properties of the appropriate folder via Explorer: the Security tab just isn't present. Has anyone managed to successfully load and debug web and test projects from a network share in VS2010?

  • Troubleshooting loss of network connectivity in Windows 2003 - What else to check?

    - by Benny
    We are facing a weird problem in our data center. Our backup server (running EMC NetWorker) loses its network connection every other day around 3:00 AM (the backup schedule starts at midnight). After 2 hours of outage, the network connectivity recovers on its own and everything is back to normal. What we observed: it is unlikely to be a network issue, since the server is directly connected to the server farm switch (a layer 2 connection without any intermediate hops); further, the server is connected to two different switches for load balancing using Broadcom teaming.

    a) If it were a switch-related issue, it is unlikely that both network ports would go down, since they are connected to different switches.
    b) A VLAN-wide issue is also ruled out, since other devices in the same VLAN are fine.
    c) The switch interface status is always up, but there are a lot of packet drops during the outage period - which can be attributed to the high interface utilization of the backup server (near 100%).
    d) Connectivity is restored without any change on the network side.

    The next suspect is resource utilization on the Windows server. Both CPU and memory have rarely exceeded 80%, but NIC utilization is alarmingly high (near 100%). Not really sure how to investigate this - any ideas?
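    Since the outage window is predictable (around 3:00 AM every other day), one option (my suggestion, not from the post) is to leave a small probe running on another machine that timestamps exactly when TCP connections to the backup server start and stop failing; that pins the outage boundaries against the backup job timeline. A minimal sketch; the hostname is a placeholder, and the port is an assumption (7937 is commonly used by NetWorker's nsrexecd, so adjust to whatever the server actually listens on):

        import socket
        import time

        HOST = "backup-server"   # hypothetical hostname of the NetWorker server
        PORT = 7937              # assumed NetWorker nsrexecd port; verify locally

        while True:
            stamp = time.ctime()
            try:
                conn = socket.create_connection((HOST, PORT), timeout=5)
                conn.close()
                state = "up"
            except OSError as e:
                state = "DOWN (%s)" % e
            print("%s %s" % (stamp, state))
            time.sleep(30)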
