Search Results

Search found 13404 results on 537 pages for 'george host'.


  • WebHost Manager - Apache's VHost isn't matching the DNS entry

    - by Trans
    I've used cPanel's WebHost Manager to create a new host on my VPS. I then used my HOSTS file to point fake.com to the relevant IP address. The problem I'm having now is that Apache isn't recognizing the VHost, or something, as it's just loading the default entry and 404'ing every document I try to GET. Here's the VHost entry:

        NameVirtualHost 0.0.0.209:80
        NameVirtualHost 0.0.0.211:80
        <VirtualHost 0.0.0.209:80>
            ServerName fake.com
            ServerAlias www.fake.com
            DocumentRoot /home/fakecom/public_html
            ServerAdmin [email protected]
            ## User fakecom # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup fakecom fakecom
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                SuexecUserGroup fakecom fakecom
            </IfModule>
            CustomLog /usr/local/apache/domlogs/fake.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            CustomLog /usr/local/apache/domlogs/fake.com combined
            Options -ExecCGI -Includes
            RemoveHandler cgi-script .cgi .pl .plx .ppl .perl
            ScriptAlias /cgi-bin/ /home/fakecom/public_html/cgi-bin/
        </VirtualHost>

    I've Googled this profusely and all that's being returned is 'DNS errors. Wait for it to propagate'. That's obviously not my problem, since I'm using HOSTS. What else could be causing this? EDIT: Forgot to mention, http://fake.com/~fakecom/test.html loads just fine. So fake.com is pointing to the right IP.
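    One quick way to rule out a vhost-matching problem (a sketch, not part of the original post) is to ask Apache which virtual hosts it has actually parsed and for which IP:port pairs, and to confirm the HOSTS entry uses the exact address that appears in NameVirtualHost; the 0.0.0.209 address below is the sanitized value from the question, not a real IP:

        # Dump Apache's parsed virtual host table (cPanel builds commonly ship httpd in /usr/local/apache/bin)
        /usr/local/apache/bin/httpd -S
        # or, on stock installs:
        apachectl -S

        # The client-side HOSTS entry must use the same IP the VirtualHost is bound to
        # (e.g. in /etc/hosts or C:\Windows\System32\drivers\etc\hosts):
        0.0.0.209   fake.com www.fake.com

    If fake.com does not show up under the expected address in the -S output, Apache falls through to the default (first) vhost for that IP, which matches the 404 behaviour described above.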

    Read the article

  • Unable to login through varnish cache

    - by ArunS
    I am setting up an Active Collab site on my new server. The setup is like below:

        Internet --- varnish ---- apache

    But I am not able to log in to the site through the Varnish cache, although I can log in through Apache directly. Here is my VCL file:

        backend default {
            .host = "localhost";
            .port = "8080";
        }
        acl purge {
            "localhost";
        }
        sub vcl_recv {
            if (req.request == "PURGE") {
                if (!client.ip ~ purge) {
                    error 405 "Not allowed.";
                }
                return(lookup);
            }
            if (req.url ~ "^/$") {
                unset req.http.cookie;
            }
        }
        sub vcl_hit {
            if (req.request == "PURGE") {
                set obj.ttl = 0s;
                error 200 "Purged.";
            }
        }
        sub vcl_miss {
            if (req.request == "PURGE") {
                error 404 "Not in cache.";
            }
            if (!(req.url ~ "wp-(login|admin)")) {
                unset req.http.cookie;
            }
            if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") {
                unset req.http.cookie;
                set req.url = regsub(req.url, "\?.$", "");
            }
            if (req.url ~ "^/$") {
                unset req.http.cookie;
            }
        }
        sub vcl_fetch {
            if (req.url ~ "^/$") {
                unset beresp.http.set-cookie;
            }
            if (!(req.url ~ "wp-(login|admin)")) {
                unset beresp.http.set-cookie;
            }
        }

    When I try to log in through Varnish I am redirected back to the login page. If I enter a wrong password, it does ask me to enter the correct password.
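    The VCL above strips cookies everywhere except URLs matching wp-(login|admin), which is a WordPress pattern rather than an Active Collab one, so the session cookie set at login never survives the cache. A minimal sketch of a more conservative policy (the paths and the decision to pass on any cookie are assumptions, not taken from the original post) would be to bypass the cache for requests that already carry a cookie and to stop deleting Set-Cookie except on plainly static content:

        sub vcl_recv {
            # Let authenticated/stateful traffic bypass the cache entirely
            if (req.http.Cookie) {
                return(pass);
            }
        }
        sub vcl_fetch {
            # Only strip Set-Cookie for static assets
            if (req.url ~ "\.(jpeg|jpg|png|gif|ico|js|css)$") {
                unset beresp.http.set-cookie;
            }
        }

    Once login works, the pass rule can be narrowed back down to the specific cookies Active Collab actually uses.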

    Read the article

  • NTOP gives warnings on startup

    - by FR6
    I just installed ntop 1.4.4 and when I start it, it gives me endless "packet truncated" warnings:

        ...
        RRD_DEBUG: umask 0066
        RRD_DEBUG: DirPerms 0700
        THREADMGMT: RRD: Started thread (t2992630672) for data collection
        THREADMGMT[t2992630672]: RRD: Data collection thread starting [p30923]
        INIT: Created pid file (/var/run/ntop.pid)
        THREADMGMT[t3086329552]: ntop RUNSTATE: INITNONROOT(3)
        Now running as requested user 'nobody' (99:99)
        Note: Reporting device initally set to 0 [eth0] (merged)
        THREADMGMT[t3086329552]: ntop RUNSTATE: RUN(4)
        THREADMGMT[t2982140816]: NPS(1): Started thread for network packet sniffing [eth0]
        THREADMGMT[t2982140816]: NPS(eth0): pcapDispatch thread starting [p30923]
        THREADMGMT[t2982140816]: NPS(eth0): pcapDispatch thread running [p30923]
        THREADMGMT[t3047009168]: SIH: Idle host scan thread running [p30923]
        THREADMGMT[t3057499024]: SFP: Fingerprint scan thread running [p30923]
        **WARNING** packet truncated (8814->8232)
        **WARNING** packet truncated (10274->8232)
        **WARNING** packet truncated (8814->8232)
        **WARNING** packet truncated (8814->8232)
        ...

    Do I need to configure something? I tried to access the web interface (http://localhost:3000) but it does not work. Note: I'm on CentOS. EDIT: Not sure if it helps, but here is my "ifconfig":

        eth0    Link encap:Ethernet  HWaddr 00:16:76:BC:7E:77
                inet addr:192.168.0.221  Bcast:192.168.0.255  Mask:255.255.255.0
                inet6 addr: fe80::216:76ff:febc:7e77/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:15496640 errors:0 dropped:0 overruns:0 frame:0
                TX packets:19256813 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:836230629 (797.4 MiB)  TX bytes:608496148 (580.3 MiB)
                Memory:dffe0000-e0000000
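    The truncated sizes (8814 and 10274 bytes) are well above the interface's 1500-byte MTU, which usually means the NIC is handing libpcap frames that were merged by receive offloading. A hedged way to test that theory (the interface name comes from the post; whether the driver supports each flag is an assumption) is to turn the offloads off and restart ntop:

        # Disable large/generic receive offload and segmentation offload on eth0
        ethtool -K eth0 gro off lro off tso off gso off
        # Verify the current offload settings
        ethtool -k eth0

    If the warnings stop, the capture path was the problem rather than ntop's own configuration; the unreachable web interface on port 3000 is likely a separate issue (firewall or ntop failing to finish startup).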

    Read the article

  • Ruby net:LDAP returns "code = 53 message = Unwilling to perform" error

    - by Yong
    Hi, I am getting the error "code = 53, message = Unwilling to perform" while traversing the eDirectory treebase "ou=Users,o=MTC". My Ruby script can read about 126 entries from eDirectory, then it stops and prints this error. I do not have any clue why this is happening. I am using the Ruby net/ldap library, version 0.0.4. The following is an excerpt of the code:

        require 'rubygems'
        require 'net/ldap'

        ldap = Net::LDAP.new :host => "10.121.121.112",
                             :port => 389,
                             :auth => {:method => :simple,
                                       :username => "cn=abc,ou=Users,o=MTC",
                                       :password => "123"}
        filter = Net::LDAP::Filter.eq( "mail", "*mtc.ca.gov" )
        treebase = "ou=Users,o=MTC"
        attrs = ["mail", "uid", "cn", "ou", "fullname"]
        i = 0
        ldap.search( :base => treebase, :attributes => attrs, :filter => filter ) do |entry|
          puts "DN: #{entry.dn}"
          i += 1
          entry.each do |attribute, values|
            puts " #{attribute}:"
            values.each do |value|
              puts " --->#{value}"
            end
          end
        end
        puts "Total #{i} entries found."
        p ldap.get_operation_result

    Here is the output, with the error at the end. Thank you very much for your help.

        DN: cn=uvogle,ou=Users,o=MTC
         mail:
         --->[email protected]
         fullname:
         --->Ursula Vogler
         ou:
         --->Legislation and Public Affairs
         dn:
         --->cn=uvogle,ou=Users,o=MTC
         cn:
         --->uvogle
        Total 126 entries found.
        OpenStruct code=53, message="Unwilling to perform"

    Read the article

  • Webserver python update script

    - by ThePyCoder
    So I have made this website on which you can trade stocks, based on real stock quotes, with virtual money. The stock quotes are in a MySQL database and are updated by a Python script which runs every minute or so. Now, this works fine on my local machine with XAMPP, but what about moving the project to a commercial web server? Basically I want my page hosted by a professional company, but do those kinds of servers support Python scripts running in the background? A dedicated server would be too expensive, and the script performs some other SQL tasks too, so it can't be replaced by PHP or the like. So, are there any good web hosting services out there that give me the possibility of running a script in the background while hosting a website in the foreground? What server specifications do I have to look for? Thanks in advance! PS: I've done some research, and I found a Python-supporting web host WITH SSH support. Is that what I need? Or is SSH not allowed to start processes?
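    On a typical shared host that allows SSH, the usual way to keep a script like this running once a minute is cron rather than a long-lived background process. A small sketch (the script path and interpreter location are assumptions, not details from the post):

        # Edit the crontab for the shell account
        crontab -e

        # Run the quote updater every minute and log its output
        * * * * * /usr/bin/python /home/username/bin/update_quotes.py >> /home/username/logs/update_quotes.log 2>&1

    Some hosts disable cron or kill long-running processes on shared plans, so it is worth confirming both points with the provider before signing up.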

    Read the article

  • Not getting IP from ISP on Multicast Network

    - by Johan Nielsen
    I'm having an odd issue with my ISP (COMX.dk). I have a managed access gateway box (Telsay) with three 8P8C ports for use with Internet and IP-TV (respectively on different VLANs, or so my ISP tells me). To use a port you need to register your device's MAC address through an online interface; your device is then paired with a static IP. I am using one port actively and I have registered another device (a router). The router is configured to listen for an active DHCP server on the network. When my router gets a lease, it gets a private IP, 192.168.2.2 (not the one bound to my MAC), which is odd! I disconnected my router from the gateway and connected my laptop directly. The same thing happened - I was given a private address. I did a port scan on the gateway, found port 80 to be open and browsed to the IP. I was then presented with the management interface of a Belkin wireless router (HMMM!!!!) <-- by the way, not my gear. At this point I called the ISP to let them know of my issue/findings - only to be told "Well, we can't see any rogue DHCP servers" (thinking to myself, well I can). I then decided it could be fun to try the other port of my gateway, only to experience the same. So I reconnected my router and used the remaining port to set up an observer (Wireshark, promiscuous mode, etc.). I am able to see my router trying to discover a DHCP server, but I can also see my ISP's IGMP and PIMv2 packets just repeating the same pattern. Hello...Hello...Hello :) So I called them again, only to get the same response: "We don't see any rogue DHCPs... we can't see the host you are talking to (the MAC address of the Belkin router)... you are definitely connected through wireless?!?" (no I'm not, there is no such thing as a wireless wire, I thought to myself). My questions are: What is going on (besides what I'm reporting here)? What am I seeing that they don't? What can I tell them so they can resolve mine/their issue?
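    Since the ISP keeps asking for evidence, one thing that can be captured from the observer port (a sketch; the interface name is an assumption) is the rogue DHCP server's own MAC and IP, by filtering for DHCP traffic and printing link-layer addresses:

        # Show DHCP discover/offer traffic with source MAC addresses
        tcpdump -i eth0 -n -e -v udp port 67 or udp port 68

    The MAC address in the DHCPOFFER, together with its OUI (a Belkin prefix would match the web interface found on port 80), is concrete data the ISP can use to locate the offending device on their segment.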

    Read the article

  • Set up multiple websites on a local web server

    - by mickburkejnr
    I have spent the last few days setting up a CentOS 6 server on my local network so that I can host multiple projects that I'm currently working on. Everything has been set up so that I access the server by typing 192.168.1.10 and the Apache test page comes up. What I'm aiming to do is to access different projects by typing 192.168.1.10/project, and then view each project as if it were on its own standalone server. I have thought about just sticking these sites inside folders on the server and accessing them that way, but a lot of my projects use CakePHP so this isn't feasible. So what I need to do is create VirtualHosts in Apache to allow me to do this, but without using a domain name. I want to stick to using the IP address of the machine (which is static). Any ideas? EDIT: I've followed Peter's suggestion, but now I have a new problem. In the httpd.conf file I have entered the following:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/html/project1
            ServerName local.project1.com
            ErrorLog logs/local.project1.com-error_log
            CustomLog logs/local.project1.com-access_log common
        </VirtualHost>

    And now Apache is saying:

        Starting httpd: Warning: DocumentRoot [/www/html/project1] does not exist

    When it clearly does exist. I've disabled SELinux and I can confirm this isn't turned on. I've also checked the ownership of the folder, and it's owned by root. I can also save files to these folders using a guest FTP account (which isn't associated with root), so the folders are being listed and can be written to. But when I try the folder in a web browser it doesn't seem to work either. I've also done a reboot of the server and the problem persists. What should I change in order to resolve this?
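    One detail worth double-checking (an observation, not something confirmed in the post) is the DocumentRoot path itself: the stock web root on CentOS 6 is /var/www/html, so /www/html/project1 would only exist if that tree was created deliberately. A minimal sketch of the same vhost with the conventional path, plus a syntax check:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html/project1
            ServerName local.project1.com
            ErrorLog logs/local.project1.com-error_log
            CustomLog logs/local.project1.com-access_log common
        </VirtualHost>

        # Confirm the path really exists and re-test the configuration
        ls -ld /var/www/html/project1
        apachectl configtest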

    Read the article

  • Unable to record using Jmeter: [help me very urgent]

    - by krish
    Hi, I am trying to record an HTTP web page using JMeter version 2.3.3. I have set up the JMeter proxy and tried, but it didn't work. I have followed these steps: 1. Launch JMeter 2.3.3 and add a thread group to the test plan. 2. Under Workbench > Add > Non-test elements, add an HTTP proxy server. The proxy server settings are port: 9090, target: use recording controller, grouping: do not group samplers, type: HTTP request, and all the boxes under HTTP sampler settings are checked. 3. Save the settings. 4. In the browser (IE 7.0 or Firefox 3.0.16), under connection settings, set up a manual proxy of localhost with port 9090 (no auto-detect settings, only the manual proxy). Settings saved. 5. In JMeter, start the HTTP proxy server. 6. Open a browser and hit the web page that needs to be tested. The page does not open. In fact, because of the changes made in the browsers, no pages open at all. Whenever I try hitting a page, the requests are recorded in JMeter, but without the page opening, how can I test? I am looking for an immediate answer, as my work is blocked. An immediate answer would be appreciated.

    Read the article

  • Apache Name-based Virtual Hosts - configuring httpd.conf file

    - by Dave
    Hi there. I am running a web app on Tomcat at the following location on my server: /var/tomcat/webapps/SoccerApp. I am looking to update the Tomcat httpd.conf file with the following virtual host:

        <VirtualHost *:80>
            DocumentRoot /var/tomcat/webapps/SoccerApp/MyTeam
            ServerName www.mysoccerapp.com
        </VirtualHost>

    This gives me a 404 error, as the directory MyTeam does not exist. However, my application behaves in a way that it uses this URL directory as the name of the soccer team for which to display data, so it will never be a physical folder on the server. Nonetheless, I would like www.mysoccerapp.com to resolve to webapps/SoccerApp/MyTeam, even though the directory isn't there. Does this make any sense? Any ideas on how to get this working? At the end of the day, I want to do the following:

        www.teamone.com -> runs /webapps/SoccerApp/TeamOne
        www.teamtwo.com -> runs /webapps/SoccerApp/TeamTwo

    ...where TeamOne and TeamTwo are not physical directories, but merely processed by my SoccerApp application as the current soccer team to display data for. Many many thanks! Dave.
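    Since the path only exists inside the Tomcat application, one approach (a sketch, assuming Apache fronts Tomcat over HTTP on port 8080 via mod_proxy, which the post does not state) is to skip DocumentRoot entirely and proxy each hostname to the corresponding context path:

        <VirtualHost *:80>
            ServerName www.teamone.com
            ProxyPass        / http://localhost:8080/SoccerApp/TeamOne/
            ProxyPassReverse / http://localhost:8080/SoccerApp/TeamOne/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.teamtwo.com
            ProxyPass        / http://localhost:8080/SoccerApp/TeamTwo/
            ProxyPassReverse / http://localhost:8080/SoccerApp/TeamTwo/
        </VirtualHost>

    That way Apache never needs the directory on disk; it is the application that interprets /TeamOne and /TeamTwo.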

    Read the article

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed to create a network share on a SAN and map this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

    Read the article

  • Nginx wont send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but it is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

        location / {
            root /var/www/htdocs/;
            index index.html;
            autoindex on;
        }

        location /hg {
            fastcgi_pass unix:/var/run/hg-fastcgi.socket;
            include fastcgi_params;
            if ($request_uri ~ ^/hg([^?#]*)) {
                set $rewritten_uri $1;
            }
            limit_except GET {
                allow all;
                deny all;
                auth_basic "hg secured repos";
                auth_basic_user_file /var/trac.htpasswd;
            }
            fastcgi_param SCRIPT_NAME "/hg";
            fastcgi_param PATH_INFO $rewritten_uri;
            # for authentication
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            #fastcgi_pass_header Authorization;
            #fastcgi_intercept_errors on;
        }

    GETs work fine, but a POST produces this error in the error_log:

        2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.

    Read the article

  • Linux/hostapd: AP can ping clients, clients can access internet, can't access www@wlan1 with more than 5-6 packets at once

    - by mhambra
    Please edit the title, I can't make it sound better. -- OP. Hi all, I have a Wi-Fi USB dongle in a PC that serves as an AP for a laptop.

        wlan1: 192.168.2.1, netmask 255.255.255.0, routed:
        route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.2.1

    Pinging 192.168.2.2 (the laptop) was fine for lots of packets. Now, I try to access 192.168.2.1:80/myindex.html (Apache) from the laptop, and I can see that 1 kB test page. But trying to access 192.168.2.1:80/my.jpg, I see the following:

        GET /my.jpg HTTP/1.1
        200 OK
        <jpg header, about a kilobyte>
        <TCP packet retransmission>
        <TCP packet retransmission>
        <end of stream>

    It seems to be a hostapd problem (networking worked fine with ad-hoc), but it may also be a forwarding/routing problem. What should I google for? Even more strangely, SSH to that host works fine.
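    The pattern of small pages working while anything larger stalls on retransmissions often points at large frames not making it across the wireless link. A quick hedged check from the laptop (the sizes below assume a standard 1500-byte MTU) is to ping with the don't-fragment bit set at increasing payload sizes and see where replies stop:

        # 1472 bytes of payload + 28 bytes of IP/ICMP headers = a 1500-byte packet
        ping -M do -s 1472 -c 3 192.168.2.1
        # Then try smaller payloads (e.g. 1400, 1200) to find the largest size that still works

    If only small packets get through, lowering the MTU on wlan1 (or fixing the driver/hostapd combination) is the direction to investigate; SSH and tiny pages keep working because they fit in small frames.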

    Read the article

  • HTTP cache for my virtual machines

    - by MathematicalOrchid
    I have several Linux virtual machines running on my home PC. One of the quirks of Linux is that every time you run a package manager, it wants to "refresh" the configured software repositories - which basically means it wants to download a file from the Internet. If I revert to an earlier snapshot of the VM, then next time I run the package manager it will re-download the exact same data again [since it no longer exists in the VM]. It seems a shame to waste bandwidth endlessly downloading the same data over and over again, so I was wondering if there's some way I can set up some kind of HTTP proxy server that caches downloaded files. I have no idea how you would do such a thing though. In particular, it needs to be set up so that the VMs don't need to "know" that the cache is there; it needs to be transparent. But I don't know how to do that. Any suggestions on what software I'd need to use? It would be nice if I could run it under the Windows host OS, but running a small VM with a Linux guest is also possible...
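    A caching forward proxy on the Windows host, or in a small always-on guest, would cover this for the package managers, which all honour a proxy setting. A minimal sketch using Squid (the port, cache size, paths and ACL subnet are illustrative assumptions):

        # /etc/squid/squid.conf (excerpt)
        http_port 3128
        cache_dir ufs /var/spool/squid 20000 16 256
        maximum_object_size 512 MB
        acl vms src 192.168.0.0/16
        http_access allow vms

    Each VM then only needs an http_proxy environment variable such as http://<host-ip>:3128 (or, on Debian-based guests, an Acquire::http::Proxy line in /etc/apt/apt.conf). Strictly speaking this is not fully transparent, since the guests carry one proxy setting; making it truly invisible would mean intercepting port 80 at the network level (e.g. Squid's intercept mode plus a redirect rule). For apt-only setups, apt-cacher-ng is a purpose-built alternative.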

    Read the article

  • Apache Server not working in MAMP

    - by jasonaburton
    Here's what I did before the problem started: I was creating a database for a site that I am working on in phpMyAdmin. I wrote some code to try to connect to the database I just created and I couldn't connect. I assumed it might be because I needed a password to connect to the database, so I created a password for it. Immediately after I created the password phpMyAdmin kicked me out saying: "#1045 - Access denied for user 'root'@'localhost' (using password: YES)" "phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server." I found the php.ini file and searched for where I could change the password to match the one I just made, but couldn't find where I needed to change it. So I decided to scrap the database and uninstall MAMP from my computer and reinstall it hoping it would just reset all the defaults and I could go on my merry way. But now after reinstalling MAMP and trying to run the servers Apache won't start up and I have no idea why. One problem after another... Any advice or helpful ideas?

    Read the article

  • Hyper-V vss-writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of three scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script - the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:
        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent
        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive
        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup
        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next is just a robocopy and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests show "creating VSS snapshot-sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I copy over seem to be old copies; they are missing files, users and updates that are present on the actual machines. I also tried putting the machines into a saved state manually, which didn't change the outcome. I hope someone here has an idea of how to solve this.

    Read the article

  • Archive Manager, SQL 2005 and MaxTokenSize high CPU

    - by Tim Alexander
    So, I posted this question a few days ago: Impact of increasing the MaxTokenSize for Kerberos Tickets. Since then the plan was to test our settings on two member servers, one with IIS and one without. I set up two GPOs to configure the MaxTokenSize registry setting to 48000 and MaxFieldLength/MaxRequestBytes to 64200 (based on MS KB2020943, these are set at 4/3 * T + 200). The plain member server seemed to work ok (a devalued tape backup server). The IIS server, however, has had some strange repercussions. The IIS server hosts Quest Software Archive Manager (AM) 4.5, which communicates with SQL Server 2005 Enterprise on Server 2003 R2. After the changes all looked good until the SQL Server hit 100% CPU. I have removed the GPOs, removed the registry values and even replaced them with defaults (12000 for token size; I can't remember the other one, but it was in a blog post about the issue linked from my other question). No change. Bouncing the IIS server stops the high CPU, and a colleague has looked at the SQL server and it is definitely the AM connection taking up the time/work on the SQL server. I haven't changed the registry values on the SQL server or the DCs, but am reluctant to do so without understanding why this has happened. I am guessing it's to do with the overriding auth and group issue we have, but I am not seeing Kerberos errors in either event log. Has anyone seen something similar, or does anyone have some tips? I was definitely blindsided by the Kerberos issue and am swimming against the tide to keep things functioning.
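    For reference, the two sets of values being toggled here live in different places (a sketch of the conventional registry locations; the numbers are the ones quoted above, not recommendations):

        Windows Registry Editor Version 5.00

        ; Kerberos token size, applied to the member servers via GPO (48000 = 0xBB80)
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters]
        "MaxTokenSize"=dword:0000bb80

        ; http.sys limits consumed by IIS per KB2020943 (64200 = 0xFAC8)
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters]
        "MaxFieldLength"=dword:0000fac8
        "MaxRequestBytes"=dword:0000fac8

    Comparing what is actually present under those keys on the IIS box after removing the GPOs would at least confirm whether the rollback really took effect before looking deeper at the AM/SQL connection.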

    Read the article

  • How to place a virtual machine in DMZ?

    - by Giordano
    I have an Ubuntu 12.04 server running a few virtual machines with KVM. I would like to expose some of these virtual machines on the internet, to make it possible for customers to test the products we're developing and to make other products available for demo purposes. One of the server NICs is configured with a public IP. However, before exposing anything on the web I would like to be sure that if one of the virtual machines gets compromised, the attacker doesn't reach the rest of the hosts. What I would like to do is put these virtual machines into a DMZ. These are the steps I'm planning to take: create a tap interface on the virtualization host (let's say tap1); create a bridge using tap1 and give it an IP in a subnet separate from the other hosts, let's say 10.0.0.1; attach the DMZ virtual machines to the bridge and configure their IPs statically (10.0.0.2, 10.0.0.3, etc...); and, using UFW, forbid any traffic from 10.0.0.0/24 to any of the internal hosts, allow traffic from the internal hosts towards 10.0.0.0/24, and expose the virtual machines on the web using port forwarding. Do you think this setup is safe? Can you suggest any improvement or a better/safer approach? Thanks in advance!
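    A command-level sketch of those steps (the interface names, bridge name and the internal subnet 192.168.1.0/24 are assumptions, not details from the post) might look like this on the KVM host:

        # Create the tap interface and a dedicated DMZ bridge
        ip tuntap add dev tap1 mode tap
        brctl addbr br-dmz
        brctl addif br-dmz tap1
        ip addr add 10.0.0.1/24 dev br-dmz
        ip link set br-dmz up
        ip link set tap1 up

        # Block DMZ -> internal, allow internal -> DMZ
        ufw deny from 10.0.0.0/24 to 192.168.1.0/24
        ufw allow from 192.168.1.0/24 to 10.0.0.0/24

    Port forwarding from the public IP to the DMZ guests still has to be done with NAT rules (UFW's rule syntax only covers the filter table), so the DNAT entries would go into /etc/ufw/before.rules or directly into iptables.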

    Read the article

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing web server set up for handling long-polling, websockets, etc. I have a VM running (Rackspace) with 1GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put Nginx with a simple proxy to gunicorn. Using ab, gunicorn spits out 7700 requests/sec, where Nginx only does 5000 requests/sec. Is such a performance degradation expected? Hello world:

        #!/usr/bin/env python

        def application(environ, start_response):
            start_response("200 OK", [("Content-type", "text/plain")])
            return [ "Hello World!" ]

    Gunicorn:

        gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;
        events {
            worker_connections 768;
        }
        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            upstream app_server {
                server 127.0.0.1:8000 fail_timeout=0;
            }
            server {
                listen 8080 default;
                keepalive_timeout 5;
                root /home/app/app/static;
                location / {
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://app_server;
                }
            }
        }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

        ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.
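    Some drop is expected simply because every request now crosses an extra proxy hop, but one thing that usually narrows the gap in this kind of benchmark (a tuning sketch, not a measured result from the post) is letting nginx reuse connections to the upstream instead of opening a new one per request:

        upstream app_server {
            server 127.0.0.1:8000 fail_timeout=0;
            keepalive 32;                 # pool of idle connections to gunicorn
        }
        server {
            ...
            location / {
                proxy_http_version 1.1;          # required for upstream keepalive
                proxy_set_header Connection "";  # drop "close" so the pool is actually used
                proxy_pass http://app_server;
            }
        }

    With -k in ab the client side already uses keep-alive, so the nginx-to-gunicorn side is where the extra connection churn happens.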

    Read the article

  • Flickering dual screens in Virtual Box Ubuntu 13.10 Guest

    - by alexleonard
    I have Ubuntu 13.10 x64 installed as a guest in VirtualBox (under a Windows 8.1 host) and have the settings for the virtual machine setup to run with a monitor count of 2, 128MB video memory and 3D acceleration enabled. In my guest I have the virtual box additions installed (which allowed me to have two 1920x1080 screens). Here's a screenshot of my VM settings. My laptop is an Asus N550JV which has both Intel's HD Graphics 4600 GPU and Nvidia's GeForce GT 750M. By default though I believe the Intel GFX card is being used to render the VM. When I boot up the VM it loads perfectly on dual screens, however whenever I move the mouse from one screen to the other (I have a Dell S2340L running over a HDMI connection as a second screen) the screen flickers. I've tried a variety of settings changes in both Ubuntu and the VM settings, but cannot seem to stop this screen flicker. I also used the NVidia control panel in Windows to force the dedicated graphics card to always be used but found that the display driver sometimes crashed whilst working in the VM, resulting in my VM session being destroyed, so I figured it's better to stick with the Intel GFX as that appears to be more stable. I also tried without 3D acceleration but that was much worse, and if I ran the VM with a low amount of graphics memory it really struggled. Here's my dmesg output: http://pastebin.com/1LJuYWMj (not sure if this is helpful in this situation). I read some posts suggesting changes to /etc/X11/xorg.conf but I don't appear to have an xorg.conf file. There were also a few posts (though related to Synergy) suggesting running xset -dpms but this command doesn't appear to have had any effect for me. As an additional note, I'm finding that window drawing in the guest is a little laggy/glitchy. For example, quickly scrolling through a web page may result in parts of the viewport displaying original content. Certainly I notice drawing issues most in the web browser, but it also impacts other software with parts of the window not being drawn when, say, switching between accounts in thunderbird. Any suggestions greatly appreciated!

    Read the article

  • Why do HTTP loopback connections not work on my subdomains?

    - by memeLab
    I have a shared hosting account at Jumba running Linux kernel 2.6.9-103.ELsmp (don't know if that helps) with cPanel 1.0 (RC1). I am using the WordPress plugin Backup Buddy, which requires HTTP loopback connections to monitor/complete backups. This works fine on memelab.com.au, but doesn't work on any subdomain (e.g. staging.memelab.com.au). Is it possible to set up an A record or some such to remedy this? I'm aware of a workaround (setting WP_ALTERNATE_CRON) but I find it unsatisfactory due to the messy URLs. See BackupBuddy:_Frequent_Support_Issues#HTTP_Loopback_Connections_Disabled. Here is the reply from my host: "…as main domain have it's own separate DNS entry it have localhost entry which helps for looback connections where as subdomains don't have separate DNS zone, so it is not possible to create looback connections for it." I have cPanel access to the 'advanced zone editor' - is there anything tricky I can do there? Maybe 127.0.0.2? (I remember reading that there were at least 8 local IPs available on (some) Linuxes.) All the A records point to the server IP, with the exception of localhost.memelab.com.au, which points to 127.0.0.1. I've just tried entering a new A record, localhost.itours.memelab.com.au, pointing to 127.0.0.2. I still get the warning in Backup Buddy that loopback is not active, and cPanel won't let me enter 127.0.0.1 (guess it doesn't work like that!)

        nslookup itours.memelab.com.au
        Server:  203.88.112.33
        Address: 203.88.112.33#53

        Non-authoritative answer:
        Name:    itours.memelab.com.au
        Address: 117.55.224.177

    Read the article

  • Debian Stable: Can't update kernel, libc won't update.

    - by pascal
    I use Debian stable (squeeze) on a virtual host where I can't touch the kernel; it's stuck (and will be for some time, as support told me) at:

        Linux 2.6.18-028stab070.3 #1 SMP Wed Jul 21 18:33:27 MSD 2010 x86_64

    So when I try to update, several packages fail with "FATAL: kernel too old", for example:

        Preparing to replace libgcc1 1:4.6.0-11 (using .../libgcc1_1%3a4.6.1-1_amd64.deb) ...
        Unpacking replacement libgcc1 ...
        Setting up libgcc1 (1:4.6.1-1) ...
        FATAL: kernel too old
        Segmentation fault
        dpkg: error processing libgcc1 (--configure):
         subprocess installed post-installation script returned error exit status 139

    and some version chaos ensued:

        The following packages have unmet dependencies:
         libc-dev-bin : Depends: libc6 (> 2.13) but 2.11.2-13 is installed
         libc6 : Depends: libc-bin (= 2.11.2-13) but 2.13-5 is installed
         libc6-dev : Depends: libc6 (= 2.13-5) but 2.11.2-13 is installed
         libquadmath0 : Depends: gcc-4.6-base (= 4.6.0-2) but 4.6.0-11 is installed
         libstdc++6 : Depends: gcc-4.6-base (= 4.6.0-2) but 4.6.0-11 is installed
         locales : Depends: glibc-2.13-1

    What should I do? I want to keep the system up to date, so I want to pin as few packages as possible, but I also don't want to have to compile anything manually. While trying to pin the status quo I figured out where the error comes from: ldconfig segfaults. -v doesn't print anything more, so I can't tell what the actual problem is.

        # ldconfig
        FATAL: kernel too old
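    Since the kernel cannot change, the practical direction is to keep glibc (and anything that insists on a newer one) at the squeeze version with an apt pin. A minimal sketch of /etc/apt/preferences (the package selection and priority are illustrative assumptions, not a verified complete list):

        Package: libc6
        Pin: release n=squeeze
        Pin-Priority: 1001

        Package: libc6-dev
        Pin: release n=squeeze
        Pin-Priority: 1001

        # ...plus similar stanzas for libc-bin, libc-dev-bin and locales

    A priority above 1000 allows apt to downgrade back to the squeeze versions; after adding the pins, apt-get update followed by apt-get -f install should settle the half-configured packages, though continuing to mix in packages from testing/unstable (the 2.13/4.6 versions shown above) will keep reintroducing the conflict.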

    Read the article

  • VMware "boot screen" - add info like contact, phone number, etc.?

    - by TheCleaner
    I've tried searching Google and VMware's KB but maybe I'm not typing the right search criteria...only finding ways to fix problems with booting or screen issues. On the default boot screen of a host it looks similar to this picture from GIS: I'm curious if it is possible somehow to make it look like this instead (adding custom details): I know for the most part the info is "useless" since it is administered remotely, etc. But when I deploy standalone hosts to branch offices, it'd be nice for them to see this type of info on the boot screen. I may also include the VMs hosted on it (again on standalone hosts). Normal monitoring, etc. will be done remotely. This is strictly for odd times when the branch contact may say "the electricians are saying they need to turn off this circuit but I have no idea who to call in IT to tell them this box needs to be shut down" or similar. Anyone who has dealt with small branch offices can tell you that if it isn't labeled they easily forget what it is for and will simply say after the incident "I didn't know what it was or who to call." Possible?
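    On ESXi the closest built-in mechanism is the DCUI "welcome message" advanced setting, which replaces the default text on the console home screen with whatever you supply. A hedged sketch (the setting name is the documented one; the banner wording and whether it covers everything wanted here are assumptions):

        # From an SSH / ESXi Shell session on the host
        esxcli system settings advanced set -o /Annotations/WelcomeMessage \
            -s "Branch: Springfield | IT contact: 555-0123 | Call before powering off | VMs: FileSrv01, PrintSrv01"

    The same value can be edited from the vSphere Client under Advanced Settings (Annotations.WelcomeMessage), and clearing the string restores the stock screen.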

    Read the article

  • RTNETLINK answers: File exists... maybe because assigned a new mac adress

    - by steven
    I got "RTNETLINK answers: File exists. Failed to bring up eth0:1" on "ifup eth0:1". I suspect it happens because I assigned a new MAC address to my VM's network adapter. Can you tell me how to fix the issue? My configuration looks like this:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        allow-hotplug eth0
        iface eth0 inet static
            address 192.168.1.80
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1

        # Alias being connected to 192.168.10.x Network
        auto eth0:1
        allow-hotplug eth0:1
        iface eth0:1 inet static
            address 192.168.10.83
            netmask 255.255.255.0
            gateway 192.168.10.10
            dns-nameservers 192.168.10.1

    Why do I suddenly get "RTNETLINK answers: File exists"? This configuration worked without problems before; all I did recently was renew the adapter's MAC address. At the moment I am connected to the 192.168.10.x network, and if I do /etc/init.d/networking stop followed by /etc/init.d/networking start, I get "RTNETLINK [...] failed to bring up eth0:1". The strange thing is that I can still connect to 192.168.10.83 via SSH from my host machine, but I cannot reach the internet from the Debian client. I hope it is clear what my problem is now. Update: if I change my /etc/network/interfaces like this, then "ifup eth0" fails too, with the same error:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        allow-hotplug eth0
        iface eth0 inet static
            address 192.168.10.83
            netmask 255.255.255.0
            gateway 192.168.10.10
            dns-nameservers 192.168.10.1

    With the verbose option enabled I get:

        Configuring interface eth0=eth0 (inet)
        run-parts --verbose /etc/network/if-pre-up.d
        ip addr add 192.168.10.83/255.255.255.0 broadcast 192.168.10.255 dev eth0 label eth0
        RTNETLINK answers: File exists
        Failed to bring up eth0.

    Same if I type this manually:

        ip addr add 192.168.10.83/255.255.255.0 broadcast 192.168.10.255 dev eth0 label eth0
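    "File exists" from RTNETLINK usually means the address (or a conflicting route) is already present on the interface from an earlier, partially successful ifup. A hedged cleanup sequence to try before digging further (this just clears stale state; it is not taken from the post):

        # Show what is actually configured right now
        ip addr show dev eth0
        ip route show

        # Remove stale addresses/routes, then bring the interfaces up cleanly
        ifdown eth0 ; ifdown eth0:1
        ip addr flush dev eth0
        ifup eth0 && ifup eth0:1

    Note also that the configuration defines a gateway on both eth0 and the eth0:1 alias; only one default gateway can be installed, and the second attempt to add one is another common source of the same RTNETLINK error.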

    Read the article

  • postfix not sending domain mail to mx

    - by orlandoresorts
    I'm trying to get Postfix to forward email for my domain, which is hosted by Gmail, as I don't have any users on my server nor do I want any. Here's how things are set up. Let's say you and I have a domain called mcdonalds.com; the registrar has the mcdonalds.com MX records pointing to Gmail (everything worked for like a year). Now we set up a server to host a website. Then we create a mail account called [email protected] and send mail locally from the server using Roundcube. This works. We can send mail to cnn.com, we can send mail to serverfault.com, we can email any/everyone. BUT we cannot send mail to our own domain, mcdonalds.com. So I cannot email [email protected], I cannot email [email protected], I cannot email [email protected]. It gives the error: SMTP Error (450): Failed to add recipient "[email protected]" (4.1.1 : Recipient address rejected: User unknown in virtual mailbox table). I'm guessing this is because it is looking for the mailbox on the local server and it doesn't exist. So how do I tell the server that any mail going to mcdonalds.com, for [email protected], should be sent to my external mail server and NOT looked up on the local www box we set up with zPanel? Any ideas?
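    The "User unknown in virtual mailbox table" rejection means Postfix believes it is the final destination for the domain. A hedged sketch of the relevant main.cf checks (the parameter names are standard Postfix ones; which one currently lists the domain depends on how zPanel generated the config):

        # See where the domain is currently claimed as local/virtual
        postconf mydestination virtual_mailbox_domains virtual_alias_domains

        # The domain must NOT appear in any of those lists, e.g.:
        #   mydestination = localhost.localdomain, localhost
        #   virtual_mailbox_domains =   (without mcdonalds.com)

        postfix reload

    Once the domain is no longer listed as locally hosted, Postfix falls back to an MX lookup for mcdonalds.com and relays the mail to Gmail like any other remote domain.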

    Read the article
