Search Results

Search found 52885 results on 2116 pages for 'http redirect'.

  • ProxyPass for specific vhost with mod_rewrite

    - by Steve Robbins
    I have a web server that is set up to dynamically serve different document roots for different domains:

        <VirtualHost *:80>
          <IfModule mod_rewrite.c>
            # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
            RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
            RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
            RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
          </IfModule>
        </VirtualHost>

    This makes it so that www.foo.server.company.com will serve the document root of server.company.com:/home/www/foo/. For one of these sites, I need to add a ProxyPass, but I only want it applied to that one site. I tried something like:

        <VirtualHost *:80>
          <Directory /home/www/foo>
            UseCanonicalName Off
            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
          </Directory>
        </VirtualHost>

    But then I get these errors:

        ProxyPreserveHost not allowed here
        ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
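
    One way this is commonly resolved is to give that one site its own name-based virtual host, since ProxyPass and ProxyPreserveHost are valid at virtual-host level but not inside <Directory>. A minimal sketch, assuming name-based vhosts are enabled (NameVirtualHost *:80 in Apache 2.2) and that www.foo.server.company.com is the site in question; the backend URL is taken from the question, everything else is illustrative:

        <VirtualHost *:80>
          # Name-based vhost that wins over the generic rewrite vhost for this host only
          ServerName www.foo.server.company.com
          DocumentRoot /home/www/foo

          ProxyPreserveHost On
          ProxyRequests Off
          ProxyPass        /services http://www-test.foo.com/services
          ProxyPassReverse /services http://www-test.foo.com/services
        </VirtualHost>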

  • IP-dependent local port-forwarding on Linux

    - by chronos
    I have configured my server's sshd to listen on the non-standard port 42. However, at work I am behind a firewall/proxy that only allows outgoing connections to ports 21, 22, 80 and 443. Consequently, I cannot ssh to my server from work, which is bad. I do not want to return sshd to port 22. The idea is this: on my server, locally forward port 22 to port 42 if the source IP matches the external IP of my work's network. For clarity, let us assume that my server's IP is 169.1.1.1 (on eth1), and my work's external IP is 169.250.250.250. For all IPs other than 169.250.250.250, my server should respond with the expected 'connection refused', as it does for a non-listening port. I'm very new to iptables. I have briefly looked through the long iptables manual and these related/relevant questions:

        http://serverfault.com/questions/57872/iptables-question-forwarding-port-x-to-an-ssh-port-of-different-machine-on-the-n
        http://serverfault.com/questions/140622/how-can-i-port-forward-with-iptables

    However, those questions deal with more complicated several-host scenarios, and it is not clear to me which tables and chains I should use for local port-forwarding, and whether I should have two rules (for "question" and "answer" packets) or only one rule for "question" packets. So far I have only enabled forwarding via sysctl. I will start testing solutions tomorrow, and will appreciate pointers or case-specific examples for implementing my simple scenario. Is the draft solution below correct?

        iptables -A INPUT [-m state] [-i eth1] --source 169.250.250.250 -p tcp --destination 169.1.1.1:42 --dport 22 --state NEW,ESTABLISHED,RELATED -j ACCEPT

    Should I use the mangle table instead of filter? And/or the FORWARD chain instead of INPUT?
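
    For reference, this kind of source-dependent local redirect is usually done in the nat table's PREROUTING chain rather than in filter/INPUT or FORWARD. A minimal sketch using the addresses above (an assumption about the intended setup, not a tested rule set):

        # Redirect port 22 to the real sshd port 42, but only for the work IP
        iptables -t nat -A PREROUTING -i eth1 -s 169.250.250.250 \
                 -d 169.1.1.1 -p tcp --dport 22 \
                 -j REDIRECT --to-ports 42

        # Everyone else connecting to port 22 still gets 'connection refused',
        # because no redirect matches and nothing listens on 22.

    Reply packets are handled by connection tracking, so a single rule for the "question" direction is enough, and no FORWARD rule is needed because the traffic is addressed to the host itself.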

  • DNS Round-robin, Load Balancing, Load sharing, and failover in 2012

    - by user1089770
    I have been reading many posts on Server Fault as well as on other sites regarding all these. What I understand is that multiple A records (round-robin DNS) can be used for both:

    1. Load sharing (round-robin, but NOT load balancing). Many people say "load balancing", but I think there is no balancing, because "balance" literally means "compare two (or more) and adjust" (which is what real software or hardware load balancers do), whereas browsers never do this; instead they randomly select an IP and connect to it. The browser has no knowledge of the current load of that server (possibly, the IP it picked had the highest load!).

    2. Automatic failover (latest browsers only). Yes, I think DNS can be used as a simple failover system (at least in 2012; I don't know when this actually came into effect). Please refer to: http://webmasters.stackexchange.com/questions/10927/using-multiple-a-records-for-my-domain-do-web-browsers-ever-try-more-than-one and "Browser-based DNS failover using multiple A records" and http://www.nber.org/sys-admin/dns-failover.html

    I would like to make sure my assumptions/findings are right, so please let me know.
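
    For concreteness, "multiple A records" just means publishing several addresses for one name; a minimal BIND-style zone fragment with illustrative name and IPs:

        www.example.com.  300  IN  A  192.0.2.10
        www.example.com.  300  IN  A  192.0.2.11
        www.example.com.  300  IN  A  192.0.2.12
        ; resolvers rotate/shuffle the order; clients typically try the first
        ; address and, in newer browsers, fall through to the next one on
        ; connection failure -- no server load is ever consulted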

  • Connecting jconsole using SOCKS to Amazon EC2

    - by freshfunk
    I'm trying to use jconsole to view stats on an EC2 instance through a SOCKS proxy created by SSH. I've tried the various scripts mentioned in the links below, but to no avail:

        http://simplygenius.com/2010/08/jconsole-via-socks-ssh-tunnel.html
        http://gabrielcain.com/blog/2010/11/02/using-ssh-proxying-to-connect-jconsole-to-remote-cassandra-instances/

    I'm running:

        ssh -f -ND 8123 myuser@mymachine

    and verified that at least Firefox goes through it as a proxy. I then run:

        jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=8123 \
            service:jmx:rmi:///jndi/rmi://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8080/jmxrmi

    When I run netstat -n on my EC2 instance, I see a connection created by my machine. However, the connection eventually disappears and I get 'channel 2: open failed: connect failed: Operation timed out' from my ssh tunnel. I've opened the JMX port through the security group, and I've checked the port on the EC2 instance to make sure it's open (by telnet-ing to it). I'm not sure where to look next. Are there some properties in sshd_config or ssh_config I need to enable for tunneling? Or anything in Mac OS X? I feel like a serious noob, but sys administration is really not my strong point. I've spent several hours and can't get this to work.

  • Adding a GET parameter to URL causes 404 error

    - by Adrian Grigore
    I'm trying to install the SyntaxHighlighter Evolved plugin on my WordPress blog. I've uploaded and activated the plugin, but it did not work. I looked at the page source and found that the plugin style is loaded from the following URL:

        http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css?ver=2.0.320

    This causes a 404 error (page not found). The strange thing, though, is that when I remove the GET parameters, the CSS loads fine:

        http://devermind.com/wp-content/plugins/syntaxhighlighter/syntaxhighlighter/styles/shCore.css

    What could be causing this problem and how can I fix it? Unfortunately I don't know how to make WordPress drop the GET parameters when loading the stylesheet.

    EDIT: As I just found out, this happens only in Firefox (3.0.11). IE loads both URLs above just fine. Not that this is of much help, so any suggestions would be appreciated.

    SECOND EDIT: I tried this on my laptop and it works fine with Firefox 3.0.8. So this really seems to be a browser problem after all.

  • VGA to S-Video/Video converter showing a mashed-up picture

    - by Matthijs Wessels
    Earlier I asked this question: http://superuser.com/questions/132374/does-the-lenovo-t60p-vga-port-support-an-s-video-signal

    As a result I acquired the following item: http://cgi.ebay.nl/ws/eBayISAPI.dll?ViewItem&item=250588098582&ssPageName=ADME:B:EOIBSA:NL:1123 (yes, I'm a cheapass). Anyway, it just arrived and now I am trying to get it to work. The manual is not very informative beyond telling me "this is the VGA in, this is the VGA out, this is the S-Video out", etc. It also tells me the system should support the following resolutions@refresh rates:

        640x480 @ 60/72/75 Hz
        800x600 @ 60/75 Hz
        1024x768 @ 60 Hz

    I can connect it to my laptop and then connect the S-Video and composite video outputs to my TV, which only gives me a blurred image (like when you set your monitor to a resolution it doesn't support). The VGA out, however, works fine to my TFT monitor. There are two switches on the converter. I think one switches between S-Video and composite video, and the other between PAL and NTSC. But alas, no combination gives a better picture (each does give a different picture). Can anyone help me solve this problem? I have downloaded a program called PowerStrip, but I have no idea how to use it or whether it can even solve my problem. Thanks in advance.

    I use Windows XP on a Lenovo T60p, and I am trying to connect it to a Philips 32PFL7403D/12 LCD TV, going from VGA through the converter to S-Video or composite video.

  • How to configure networking on an appliance such that it can plug and play on any corporate network?

    - by Joshua Lim
    I had a chance to configure a Moxa NPort device server appliance on my client's network; it was very easy to do, done in just 2 minutes. Here's what I did:

    The Moxa device server had a preset IP address of 192.168.127.254 and subnet mask 255.255.255.0 (http://www.moxa.com/doc/manual/nport/5400/NPort_5400_Series_Users_Manual_v4.pdf). Moxa provides Windows software which I used to "scan" for the device server. It worked like magic! The software returns a list of device servers found. Each device server is identified by MAC address, and by selecting the device server in the software, I can reset the default IP address and subnet mask of that device server!

    In comparison, during an earlier project I spent 2 hours trying to get KVM to work for a Windows 7 embedded appliance I was installing on my client's network: http://superuser.com/questions/380305/how-to-configure-windows-7-professional-appliance-pc-on-my-clients-network-usin

    Prior to that, I had already tried pre-configuring the IP address and subnet mask to the ones my client provided, yet the appliance still couldn't connect to the client's network! I also tried a cross cable, which didn't work either. After KVM worked, I discovered that the network settings were "lost" after I plugged the machine into the client's network.

    Now my question is: what can I do to set up my Windows 7 embedded appliance so that it can connect to any network, like the Moxa device server does? I tried experimenting on my network with a Windows machine configured with an IP address of 192.168.127.254 and subnet mask 255.255.255.0, but it doesn't connect to my network, which uses 192.168.0.*. :(

    EDIT: I would like to point out that the Moxa Windows configuration software seems to be able to connect to any Moxa device connected to the network, even if it is on a different subnet, as long as the network adapter shows "connected". This is important because the Moxa device has no VGSM port or interface to configure the IP address.

  • My processor is running slower than it should

    - by Soham
    I have a Core2Duo E7400 2.80 GHz processor on my Intel D945GCNL mobo. From CPU-Z, I learned that my processor speed is 1596 MHz with a x6 multiplier and a 266 MHz bus speed on each core. Why is my processor operating at 1596 MHz rather than 2.80 GHz? On my side, I've tried to disable SpeedStep in my BIOS by setting EIST to 'Disabled', and also tried changing the power option to 'High Performance' in Windows 7. I also tried what was suggested in this question: http://superuser.com/questions/119176/processor-not-running-at-max-speed

    But it gained me nothing. I've also tried running a few heavy applications together to check whether the speed increases under load, but it stays the same. Do I have to increase my multiplier or overclock to regain that lost speed? Should I check my power supply for a problem? Or anything else? Please help me with this. And yeah, it's a desktop computer, so the problem isn't caused by a battery. Here's my CPU-Z screenshot: http://i56.tinypic.com/2lk4mqc.jpg

  • Has anyone managed to build php5-xapian on Ubuntu 12.04?

    - by jetboy
    As Xapian's been dropped from the Ubuntu repositories, I'm attempting to build my own .deb from the instructions here:

        http://article.gmane.org/gmane.comp.search.xapian.general/8855
        http://beeznest.wordpress.com/2011/07/06/howto-build-your-own-binaries-of-php-xapian-bindings-for-debian/

    I can only get things to progress beyond the first few seconds by leaving out 'rm debian/control', but if I do, it looks as if the Python and Ruby bindings are building and passing their versions of smoketest correctly. However, the PHP part of the build is failing with this error:

        /home/charlie/xapian-bindings-1.2.8/php/smoketest.php:38: include(xapian.php): failed to open stream: No such file or directory
        FAIL: smoketest.php

    There's a xapian.php file in /home/charlie/xapian-bindings-1.2.8/php/php5/, but if I copy it to /home/charlie/xapian-bindings-1.2.8/php/ or change the path to it in smoketest.php, the build fails right near the start with:

        dpkg-source: error: aborting due to unexpected upstream changes

    Unfortunately I'm out of my comfort zone building from source. Anyone got any ideas?

    Edit, following James' answer: it builds fine if I follow the instructions exactly. I built it on a test VM initially, but that didn't build the PHP package, as PHP itself wasn't installed. Obvious gotcha, but worth mentioning. Installing generated the following error:

        Setting up php5-xapian (1.2.8-1) ...
        Processing triggers for libapache2-mod-php5 ...
        dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/libapache2-mod-php5.postinst): Permission denied
        dpkg: error processing libapache2-mod-php5 (--install):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         libapache2-mod-php5

    It's only a script for restarting Apache. Stopping Apache before running sudo dpkg -i php5-xapian_*.deb prevents the error. Xapian now shows up in phpinfo(). Job done. Thanks.

  • My Linux server's "Number of processes created" and "Context switches" are growing incredibly fast

    - by Jorge Fuentes González
    I have some strange behaviour on my server :-/. It's an OpenVZ VPS (I think it's OpenVZ, because /proc/user_beancounters exists and df -h returns a /dev/simfs drive; also, ifconfig returns venet0). When I do cat /proc/stat, I can see that each second about 50-100 processes are created and about 800k-1200k context switches happen! All this with the server completely idle: no traffic, no programs running. top shows 0 load average and 100% idle CPU.

    I've stopped all non-essential services (httpd, mysqld, sendmail, nagios, named...) and the problem still happens. I also ran ps -ALf each second, and I don't see any changes: only a new ps process is created each time, and its PID is just the previous one + 1, so no new processes are actually appearing. I therefore thought that the growing process count in cat /proc/stat must be threads (yes, it seems the processes counter in /proc/stat includes thread creation too, as this states: http://webcache.googleusercontent.com/search?q=cache:8NLgzKEzHQQJ:www.linuxhowtos.org/System/procstat.htm&hl=es&tbo=d&gl=es&strip=1).

    I changed to the /proc dir and did cat [PID]/status for all PIDs listed with ls (including kernel ones), and in no process are voluntary_ctxt_switches or nonvoluntary_ctxt_switches growing at the same speed as cat /proc/stat shows (just a few tens per second); the Threads count stays the same as well. I ran strace -p PID on all processes too, to see whether any process was creating threads or something, but the only process with a bit of movement was ssh, and that movement was read/write operations caused by the data being sent to my terminal.

    After that, I ran vmstat -s and saw that "forks" is growing at the same speed as the processes counter in /proc/stat. As http://linux.die.net/man/2/fork says, each fork() creates a new PID, but my server's PIDs are not growing! The last thing I can think of is that the process data shown by /proc/stat and vmstat -s is shared with all the other VPSes stored on the same machine, but I don't know if that is correct... If someone can throw some light on this I would be really grateful.
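
    As a small aside, the counter being watched here can be sampled directly; a sketch of a loop that prints system-wide forks per second from /proc/stat (the "processes" line is the cumulative fork/clone count):

        while true; do
          a=$(awk '/^processes/ {print $2}' /proc/stat)
          sleep 1
          b=$(awk '/^processes/ {print $2}' /proc/stat)
          echo "$((b - a)) forks/sec"
        done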

  • Clarification on signals (sighup), jobs, and the controlling terminal

    - by asolberg
    So I've read two different perspectives and I'm trying to figure out which one is right.

    1) Some sources online say that signals sent from the controlling terminal are ONLY sent to the foreground process group. That means that if you want a process to continue running in the background after you log out, it is sufficient to simply suspend the job (Ctrl-Z) and resume it in the background (bg). Then you can log out and it will continue to run, because SIGHUP is only sent to the foreground job. See: http://blog.nelhage.com/2010/01/a-brief-introduction-to-termios-signaling-and-job-control/ ("...In addition, if any signal-generating character is read by a terminal, it generates the appropriate signal to the foreground process group...")

    2) Other sources claim you need to use the "nohup" command at the time the program is executed or, failing that, issue a "disown" command during execution to remove it from the jobs table that listens for SIGHUP. They say that if you don't do this, your process will exit when you log out, even if it's running in a background process group. For example: http://docstore.mik.ua/orelly/unix3/upt/ch23_11.htm ("...If I log out anyway, the shell sends my background job a HUP signal...")

    In my own experiments with Ubuntu Linux, it seems like 1) is correct. I executed:

        sleep 20 &

    then logged out, logged back in, and did a ps aux. Sure enough, the sleep command was still running. So then why do so many people seem to believe number 2? And if all you have to do is place a job in the background to keep it running, why do so many people use "nohup" and "disown"?
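
    Part of the discrepancy is shell-dependent: in bash, whether background jobs get SIGHUP on logout is governed by the huponexit option, which is off by default on many distributions. A short sketch of the variants (illustrative commands only):

        # survives logout only if the shell does not HUP background jobs on exit
        sleep 600 &

        # explicitly mark the job so no SIGHUP is forwarded to it
        sleep 600 &
        disown -h %+

        # or start it immune to SIGHUP from the outset
        nohup sleep 600 &

        # bash: check the setting that decides which camp you are in
        shopt huponexit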

  • Looking for a host-based network monitoring solution

    - by Ole Martin Handeland
    Hi all!

    Problem: my hosting company has a network usage graph for my dedicated server. It seems that one day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was.

    Question: so my question is, does anyone know of any host-based solution for monitoring network usage that would tell me the client's IP address and the port/service he/she used?

    What I don't want: I'm just guessing that someone will suggest I use Nagios, Munin, Zabbix, Cacti or MRTG. I've also looked at those, but a graph of network usage will not give me the answers I'm looking for. :-)

    Almost there: I've already looked at a lot of monitoring solutions, and I've tried ntop (http://www.ntop.org/), darkstat (http://unix4lyfe.org/darkstat/) and others. Darkstat just didn't give me the answers. Although it lists a lot of statistics, and I can list the clients, it doesn't show me the network usage for a particular period. Ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not the historical part. I could run apt-get upgrade and download a whole bunch of software, yet not see it in the log afterwards.
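
    For what it's worth, a very low-tech way to keep a historical, per-port record is to use iptables byte counters and dump them from cron; a sketch (ports are illustrative; this attributes traffic per service, not per process):

        # one accounting chain, one counting rule per service of interest
        iptables -N ACCT 2>/dev/null
        iptables -A OUTPUT -j ACCT
        iptables -A ACCT -p tcp --sport 80    # web
        iptables -A ACCT -p tcp --sport 22    # ssh/scp

        # from cron, e.g. every 5 minutes: timestamped counter dump
        (date; iptables -L ACCT -v -n -x) >> /var/log/acct-traffic.log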

  • Unable to force Debian to do unattended install... libc6 wants interactive confirm

    - by JD Long
    I'm trying to create a script that forces a Debian Lenny install to install the latest version of CRAN R. During the install it appears libc6 is upgraded, and the install wants interactive confirmation that it's OK to restart three services (mysql, exim4, cron). This process HAS to be unattended, as it runs on Amazon's Elastic MapReduce (EMR) machines. But I'm running out of options. Here are a few things I've tried. This previous question appears to be exactly what I'm looking for, so I set up my install script as follows:

        # set my CRAN repos... yes, I know there's a new convention on where to put these.
        echo "deb http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
        echo "deb-src http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list

        # set the dpkg.cfg options per the previous SuperUser question
        echo "force-confold" | sudo tee -a /etc/dpkg/dpkg.cfg
        echo "force-confdef" | sudo tee -a /etc/dpkg/dpkg.cfg
        export DEBIAN_FRONTEND=noninteractive

        # add key to keyring so it doesn't complain
        gpg --keyserver pgp.mit.edu --recv-key 381BA480
        gpg -a --export 381BA480 > jranke_cran.asc
        sudo apt-key add jranke_cran.asc
        sudo apt-get update

        # install the latest R
        sudo apt-get install --yes --force-yes r-base

    But this script hangs with a request for input. OK, so I tried stopping the services first, using the following script:

        sudo /etc/init.d/mysql stop
        sudo /etc/init.d/exim4 stop
        sudo /etc/init.d/cron stop
        sudo apt-get install --yes --force-yes libc6

    This does not work, and the interactive screen comes back, but this time with only cron listed as the service that must be restarted. So is there a way to make libc6 just restart these services with no user input? Or is there a way to stop cron so it does not cause an interactive prompt? Maybe a creative option I've never thought of? Keep in mind that this system is brought up, some Hadoop code is run, and then it's torn down, so I can put up with side effects and bad behavior that we might not want on a production desktop machine or web server.
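
    One detail that may matter here: the export of DEBIAN_FRONTEND happens in the calling shell, but sudo typically strips the environment, so apt-get may never see it. A sketch of a variant that pushes both the frontend and the conffile options through explicitly (standard dpkg/apt options, though untested on Lenny):

        export DEBIAN_FRONTEND=noninteractive
        sudo -E apt-get -y \
          -o Dpkg::Options::="--force-confdef" \
          -o Dpkg::Options::="--force-confold" \
          install r-base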

  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's Xen). It was installed from ports. My problem is that, although Apache is running, listening for, and accepting connections, it doesn't actually respond to any requests or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request:

        GET / HTTP/1.1
        Host: asdfasdf

    and hit enter a couple of times, it just sits there... Nothing. No response when requesting with a browser either. There doesn't appear to be anything helpful in the error log:

        [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ...
        [Sun Jan 09 16:56:25 2011] [notice] Digest: done
        [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17

    The access log stays empty:

        root:/var/log# wc httpd-access.log
               0       0       0 httpd-access.log

    I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and just installing straight apache22... Still no luck. I tried removing all of the LoadModule lines from httpd.conf and re-enabling only the ones necessary to parse the config. I ended up with only the following loaded:

        root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M
        Loaded Modules:
         core_module (static)
         mpm_peruser_module (static)
         http_module (static)
         so_module (static)
         authz_host_module (shared)
         log_config_module (shared)
         alias_module (shared)
        Syntax OK

    Same results. Apache is definitely what's listening on port 80:

        root:/usr/local/etc/apache22# sockstat -4 | grep httpd
        root     httpd      43789 3  tcp4 6  *:80    *:*
        root     httpd      43789 4  tcp4    *:*     *:*

    And I know it's not a firewall issue, as there is nothing running locally, and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on, and why it would be doing this? I've exhausted all of my debugging expertise. :/ Thanks for any suggestions!

  • How to work around blocked outbound hkp port for apt keys

    - by kief_morris
    I'm using Ubuntu 9.10 and need to add some apt repositories. Unfortunately, I get messages like this when running sudo apt-get update:

        W: GPG error: http://ppa.launchpad.net karmic Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 5A9BF3BB4E5E17B5
        W: GPG error: http://ppa.launchpad.net karmic Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 1DABDBB4CEC06767

    So I need to install the keys for these repositories. Under 9.10 we now have the option to do this:

        sudo add-apt-repository ppa:nvidia-vdpau/ppa

    See this Ubuntu help article for details. This is great, except that I'm running this on a workstation behind a firewall that blocks outbound connections to pretty much all ports except those required by secretaries running Windows and IE. The port in question here is the hkp service, port 11371. There appear to be ways to manually download keys and install them on apt's keyring. There may even be a way to use add-apt-repository or wget or something to download a key from an alternative server that makes it available on port 80. However, I haven't yet found a concise set of steps for doing so. What I'm looking for is:

    1. How to find a public key for an apt package (recommendations for resources that have these, and/or tips for searching; searching for the key hash doesn't seem all that effective so far)
    2. How to retrieve a key (can it be done automatically using gpg or add-apt-repository?)
    3. How to add a key to apt's keyring

    Thanks in advance.
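
    A sketch of the usual workaround, assuming a keyserver that also answers on port 80 (keyserver.ubuntu.com commonly does; the key ID is the one from the first error message above):

        # fetch the key over port 80 instead of 11371, then hand it to apt
        gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 5A9BF3BB4E5E17B5
        gpg --export --armor 5A9BF3BB4E5E17B5 | sudo apt-key add -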

  • Nginx and automatic updates

    - by Desmond Hume
    I'm on Ubuntu 12.04.1 with unattended-upgrades configured for automatic security updates, and I installed Nginx by first adding

        deb http://nginx.org/packages/ubuntu/ lucid nginx
        deb-src http://nginx.org/packages/ubuntu/ lucid nginx

    to the /etc/apt/sources.list file, just as suggested by the official wiki, and then running

        sudo apt-get update
        sudo apt-get install nginx

    which installed Nginx with all the standard modules. But now I think I could make good use of one or two of the optional Nginx modules, like the gzip precompression module or some security-related one. So far, I see two ways of adding an optional module to Nginx: one is compiling and installing from the source code, and the other is described in this article.

    So, which of these ways should I choose so that automatic updates still run for, and apply to, Nginx and its optional modules? Or should I create a cron job with a command/script specific to Nginx instead of using the unattended-upgrades utility? Can I choose between volume updates and security-only updates to be automatically applied to the standard and optional modules? And finally, is there a way to automatically update Nginx's modules on the fly (without any connections being dropped), as the documentation suggests is possible with

        sudo kill -USR2 $( cat /run/nginx.pid )

    P.S. Actually, I'm not certain the unattended-upgrades utility would automatically update the standard modules in the first place; not enough time has passed since Nginx was installed to say for sure.
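
    On the last point, it may help to know how unattended-upgrades chooses packages: it only upgrades those whose archive origin matches the allowed list in /etc/apt/apt.conf.d/50unattended-upgrades, so a third-party repository like nginx.org would need its own entry. A sketch (the origin string here is a guess; the real one can be read from apt-cache policy nginx):

        // /etc/apt/apt.conf.d/50unattended-upgrades (fragment)
        Unattended-Upgrade::Allowed-Origins {
            "Ubuntu:precise-security";
            "nginx:lucid";   // hypothetical origin:suite for the nginx.org repo
        };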

  • Why do lighttpd and FastCGI keep sending me the *.scgi file instead of the website content?

    - by e-satis
    I have the following config:

        server.modules = (
          "mod_compress",
          "mod_access",
          "mod_alias",
          "mod_rewrite",
          "mod_redirect",
          "mod_secdownload",
          "mod_h264_streaming",
          "mod_flv_streaming",
          "mod_accesslog",
          "mod_auth",
          "mod_status",
          "mod_expire",
          "mod_fastcgi"
        )

        [...]

        fastcgi.server = (
          ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID,
            "max-procs" => 1,
            "kill-signal" => 9,
            "idle-timeout" => 10,
            "bin-environment" => (
              "PHP_FCGI_CHILDREN" => "200",
              "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "/pyapps/essai/blondes.fcgi" => (
              "main" => (
                "socket" => "/var/tmp/lighttpd/django-fastcgi.socket",
              ),
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
          ))
        )

        [...]

        $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" {
          server.document-root = "/home/cam/web/"
          accesslog.filename = "/home/cam/log/access.log"
          server.errorlog = "/home/cam/log/error.log"
          server.follow-symlink = "enable"
          # files to check for if .../ is requested
          server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb")
          url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" )
        }

    I have the following dir tree:

        /home/tv/web/
        `-- pyapps
            `-- essai
                `-- __init__.py
                `-- blondes.fcgi
                `-- blondes.pid
                `-- django-fcgi.py
                `-- manage.py
                `-- manage.pyo
                `-- plop
                `-- settings.py
                `-- urls.py

    There is no error when restarting lighttpd. Then I run:

        ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid

    No errors either. I then go to http://cam.com/blondes/. It offers me an empty file to download. I checked permissions, but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in the error logs, nor from the manage.py runfcgi command. I probably missed something obvious, but what?

  • Website hosting from home - IIS6

    - by Paul
    I want to host a few websites from home, primarily because I'm using some beta Microsoft software (.NET 4 and EF) and don't want to install it on my production server, which is hosted at eukhost.com. Basically, I'm completely new to this sort of thing. So far, here is what I've done:

    1. Registered the domain name at namecheap.com (let's call it mydomain.com)
    2. Gone to "Nameserver Registration" in the panel and entered my IP address for the NS1 and NS2 records (let's say the IP is 0.0.0.0)
    3. Gone to "Domain Name Server Setup" and entered ns1.mydomain.com & ns2.mydomain.com
    4. Forwarded requests from port 80 to my internal IP (let's say 192.168.1.254)
    5. Created the website in IIS (I'm just testing with a single website so far, so have not created any host header values)

    Now, if I type in the IP address (http://0.0.0.0) I get the site as expected. However, if I enter http://www.mydomain.com I get an error saying "DNS Error - Cannot find server". I'm aware that there is a service from DynDNS that will automatically update the IP if I have a dynamic address, but my IP has remained static since the ISP was installed (in October), so I don't need this. Is there any way that I can get the DNS to work just by configuring IIS or something in Windows? I don't really want to have to pay for any 3rd-party service. Thanks.

  • Connection refused in ssh tunnel to apache forward proxy setup

    - by arkascha
    I am trying to set up a private forward proxy on a small server. I mean to use it during a conference to tunnel my internet access through an ssh tunnel to the proxy server. So I created a virtual host inside apache-2.2 running the proxy, proxy_http and proxy_connect modules. I use this configuration:

        <VirtualHost localhost:8080>
            ServerAdmin xxxxxxxxxxxxxxxxxxxx
            ServerName yyyyyyyyyyyyyyyyyyyy

            ErrorLog /var/log/apache2/proxy-error_log
            CustomLog /var/log/apache2/proxy-access_log combined

            <IfModule mod_proxy.c>
                ProxyRequests On
                <Proxy *>
                    # deny access to all IP addresses except localhost
                    Order deny,allow
                    Deny from all
                    Allow from 127.0.0.1
                </Proxy>

                # The following is my preference. Your mileage may vary.
                ProxyVia Block

                ## allow SSL proxy
                AllowCONNECT 443
            </IfModule>
        </VirtualHost>

    After restarting apache I create a tunnel from client to server:

        #> ssh -L8080:localhost:8080 <server address>

    and try to access the internet through that tunnel:

        #> links -http-proxy localhost:8080 http://www.linux.org

    I would expect to see the requested page. Instead I get a "connection refused" error. In the shell holding open the ssh tunnel I get this:

        channel 3: open failed: connect failed: Connection refused

    Anyone got an idea why this connection is refused?
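
    As a side note, one quick way to split the problem in half is to test the proxy on the server itself, bypassing the tunnel (a sketch; -x tells curl which HTTP proxy to use):

        # on the server: does the proxy vhost answer locally at all?
        curl -v -x http://localhost:8080 http://www.linux.org/

        # also worth confirming something is listening where ssh will connect
        netstat -lntp | grep 8080

    If the local test fails too, the refusal is on the Apache side (e.g. no Listen 8080 for the vhost) rather than in the ssh forwarding.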

  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago. (Later I tried to recover some data from it, and partially succeeded, but haven't yet fully documented it in that question.) Anyway, I just noticed there are still some lingering bits remaining of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine". I googled around a bit and found something related but not very useful:

        http://www.straitmac.com/jforum/posts/list/600.page
        http://discussions.apple.com/thread.jspa?threadID=725692

    Under /Applications I found "Maxtor EasyManage.app", which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting, I noted that the bundle did contain "MaxBack Engine.app" under Contents/Resources. But still, after a reboot, the "MaxBack Engine" process is back. I did notice, though, that it only appears when logging in with my usual user account; with another account it wasn't launched.

    So, dear Mac gurus, what could I do about this pest? I guess I could fall back on some Unix hackery and write a cron job that kills any process with that name, but obviously it'd be nicer to clean everything left behind by Maxtor's piece of software off my computer.
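
    Since it relaunches for only one account at login, a sketch of the usual places to look for the per-user trigger (these are the standard launchd/startup locations; the grep pattern is illustrative):

        # per-user and system-wide launch agents, plus old-style startup items
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/StartupItems 2>/dev/null

        # anything launchd is tracking for this user with a matching name
        launchctl list | grep -i max

        # also check System Preferences > Accounts > Login Items for that account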

  • Why Is Volume Shadow Copy Services stopping?

    - by David Mackintosh
    I am running Windows 7 Professional, 64-bit. I am running a backup-over-the-internet software client which depends on the Volume Shadow Copy service running. Since I installed Service Pack 1 (or rather, didn't object when Windows Update forced Service Pack 1 on me), the backup service is failing to back everything up because VSC isn't running. Most of the time it fails to back up such noise as the Security Essentials database or the Messenger Live contact list -- stuff I really don't care about -- but I don't want to fall into the trap of accepting an Error-state backup as "normal". At the recommendation of the backup software, I have set the VSC service startup mode to Automatic. When I look in the Event Log, System channel, I can see at boot time:

        The Volume Shadow Copy service entered the running state.

    ...and then two or three minutes later:

        The Volume Shadow Copy service entered the stopped state.

    How do I figure out why VSC is stopping? At the suggestion of the backup vendor, I have already followed the suggestions from http://support.microsoft.com/default.aspx/kb/940184:

        net stop SENS
        net stop EventSystem
        net start EventSystem
        net start SENS
        net stop COMSysApp
        net stop SwPrv
        net stop VSS
        cd /d C:\Windows\system32
        regsvr32 ole32.dll /s
        regsvr32 oleaut32.dll /s
        regsvr32 vss_ps.dll /s
        vssvc /register /s
        regsvr32 /i swprv.dll /s
        regsvr32 /i eventcls.dll /s
        regsvr32 es.dll /s
        regsvr32 stdprov.dll /s
        regsvr32 vssui.dll /s
        regsvr32 msxml.dll /s
        regsvr32 msxml3.dll /s
        regsvr32 msxml4.dll /s
        net start SwPrv
        net start VSS
        net start ProtectedStorage

    ...and per http://support.microsoft.com/kb/940184 I have deleted the key tree:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\EventSystem\{26c409cc-ae86-11d1-b616-00805fc79216}\Subscriptions

    I have also run chkdsk /F and chkdsk /R on both permanent hard disks. (I had a similar problem with another computer -- same OS, same failure, same start point after the SP1 install -- but the problem went away when I forced the Volume Shadow Copy service to Automatic startup rather than Manual. I did not have to resort to following the Microsoft KB instructions.)

  • What's the best way to update Ubuntu 9.04?

    - by Fu86
    I have an Ubuntu 9.04 server which no longer has package support. If I try to update my package lists, I get the following errors:

        Err http://de.archive.ubuntu.com jaunty-security/multiverse Packages
          404 Not Found [IP: 141.30.13.10 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/jaunty/main/binary-amd64/Packages
          404 Not Found [IP: 141.30.13.10 80]
        ....

    I read on the official Ubuntu support page that there is an update-manager-core package for upgrading to a new release. Unfortunately, I don't have this package installed, and I am unable to install it because of the lack of package sources.

    EDIT: Installing the update-manager-core package from another release doesn't work, because it depends on a higher version of python-apt (tried with 10.04):

        $ dpkg -i update-manager-core_0.134.7_amd64.deb
        Selecting previously deselected package update-manager-core.
        (Reading database ... 28743 files and directories currently installed.)
        Unpacking update-manager-core (from update-manager-core_0.134.7_amd64.deb) ...
        dpkg: dependency problems prevent configuration of update-manager-core:
         update-manager-core depends on python-apt (>= 0.7.13.4ubuntu3); however:
          Version of python-apt on system is 0.7.9~exp2ubuntu10.
         update-manager-core depends on python-gnupginterface; however:
          Package python-gnupginterface is not installed.
        dpkg: error processing update-manager-core (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         update-manager-core

    So, what's the best way to upgrade to a current release without reinstalling the complete (virtual) server?
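
    A sketch of the commonly suggested route for EOL releases: repoint apt at old-releases.ubuntu.com (which still carries jaunty), then install the upgrade tooling from there. Back up sources.list first; the sed patterns assume the mirror names shown in the errors above:

        sudo sed -i.bak \
          -e 's/de\.archive\.ubuntu\.com/old-releases.ubuntu.com/g' \
          -e 's/security\.ubuntu\.com/old-releases.ubuntu.com/g' \
          /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install update-manager-core
        sudo do-release-upgrade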

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up, and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page? Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now, I realize the script is not as savvy as it could be, but it is doing what I need at the moment, except that, as you can see from the rm command, I would just like to prevent wget from downloading those files in the first place, if possible.

    I almost forgot to mention that there are two wget commands because the first one downloads the page as index.html, and for some reason it does not open in my browser; however, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page (really the same file, with an alternate name) opens up fine. Fixing that would also help streamline the process.
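
    wget does have a reject list that looks made for this: -R/--reject takes comma-separated file name suffixes or patterns to skip. A sketch against the first command above, with a shortened list (one caveat: HTML files matching a reject rule may still be fetched and then deleted, but plain images should be skipped outright):

        wget -p -nd -A jpg,html -k \
             -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,Math.jpg" \
             http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/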

  • SQL Server 2008 R2 Error 15401 when trying to add a domain user

    - by Alice
    I am trying to add a domain user. I am doing the following:

    1. Expand Security
    2. Right-click on Logins
    3. Select New Login...
    4. Next to Login name, select Search
    5. Click on Locations and select Entire Directory
    6. Type the username
    7. Click Check Names; the name is underlined and completed with some more info
    8. Click OK
    9. Click OK

    I then get error 15401. I have found http://support.microsoft.com/kb/324321 and worked through its list of causes:

    - The login does exist
    - There are no duplicate security identifiers
    - Authentication failure: I don't think this is happening, as I can browse AD
    - Case sensitivity should not be the problem, as I am doing the Check Names step and it is correcting the name
    - Not a local account
    - Name resolution: again, I can see the AD

    I have rebooted the server (a VM) and the issue is still happening. Any ideas?

    EDIT: I have also disabled these policies, then rebooted the server (per http://talksql.blogspot.com/2009/10/windows-nt-user-or-group-domainuser-not.html):

    - Domain member: Digitally encrypt secure channel data (when possible)
    - Domain member: Digitally sign secure channel data (when possible)

    EDIT 2: I have also disabled "Digitally encrypt or sign secure channel data (always)" and rebooted.

    EDIT 3: Since the question has moved site, I no longer have access to comment etc... I have checked the DNS on the server against a machine where it is working. The DNS servers are the same on both.
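
    As an aside, the same operation can be attempted from T-SQL, which sometimes surfaces a more specific error than the dialog does (domain and user below are placeholders):

        -- hypothetical DOMAIN\user; run from SSMS
        CREATE LOGIN [MYDOMAIN\someuser] FROM WINDOWS;

        -- and to see what SQL Server can resolve about the account via AD:
        EXEC xp_logininfo 'MYDOMAIN\someuser';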

  • Router blocks some sites

    - by Mahesha999
    Hi, I was using an ADSL modem/router earlier. The device is quite old, a Pronet PN-ADSL 101 E/U model (pics: http://bit.ly/P2YaWy, http://bit.ly/OA700l). Since it had only one RJ45 out, I bought a new wireless router, a TP-Link TL-WR941ND. It has 4 RJ45 outs and 3 wireless antennas. I configured my old router as a bridge, so now if I want to connect my PC to the Internet through the old router, I have to enter a username and password.

    Then I connected the RJ45 output of the old router to the WAN in of the new router and ran the new router's CD. It configured the new router for PPPoE, saving the username and password in the router so it dials automatically. So now I just have to plug the wires into any RJ45 out of my new router.

    I am able to access the Internet when I connect through the new router (both wired and wirelessly), but some sites are getting blocked -- most notably yahoo.com (though ymail.com is working), microsoft.com and msn.com. These sites work perfectly fine when I connect my PC directly to my old router and enter the username and password manually. (However, others like google.com and facebook.com work fine through the new router.)

    So these sites seem to need some parameter set, but I am unable to find out which. Can anyone help me? My friend said he also faced the same problem. Surprisingly, he advised me to see whether the same websites would work through Opera's Turbo mode -- and boom, they worked. So what could be the problem?
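
    One pattern worth testing, purely as a guess from the symptoms: some sites failing only behind a PPPoE router, while a compressing proxy like Opera Turbo works, is the classic MTU/MSS signature. A sketch of how to probe it from a Windows client (1472 = 1500 minus 28 bytes of IP/ICMP headers; a PPPoE link with MTU 1492 leaves room for at most 1464):

        ping -f -l 1472 www.google.com
        ping -f -l 1464 www.yahoo.com
        rem if only the smaller payload gets replies, lower the router's
        rem MTU (PPPoE typically needs 1492 or less) or enable MSS clamping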
