Search Results

Search found 12546 results on 502 pages for 'aidan host'.


  • tracd multiple projects+nginx reverse proxy

    - by Xeross
    I am trying to set up nginx as a reverse proxy to tracd, but I only want to run a single tracd instance. First, here's my config for this domain:

        server {
            listen 80;
            server_name bugs.XXXXXXXX.com;
            access_log /var/log/nginx/XXXXXXXX-bugtracker.access.log proxy;

            location / {
                rewrite ^/bugtracker/(.*)$ /$1;
                rewrite ^/bugtracker$ /;
                proxy_pass http://127.0.0.1:81/bugtracker/;
                proxy_redirect default;
                proxy_set_header Host $host;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    As you can see, there are rewrite rules, because all the URLs that tracd spews out look like /bugtracker/something. This is tracd sending URLs like it normally would, but trac lives at bugs.XXXXXXXX.com/ and not at bugs.XXXXXXXX.com/bugtracker. So how can I make tracd/trac display the correct URLs (in this case, without the /bugtracker prefix)?
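
    One approach worth trying, as a sketch (untested here; the environment path is a placeholder): tracd has a single-environment mode that serves the project at the root of its URL space, which removes the /bugtracker prefix at the source instead of rewriting it away in nginx.

        # run tracd in single-environment mode (-s), so generated links
        # carry no /bugtracker prefix:
        tracd -s --port 81 /path/to/bugtracker

        # nginx can then proxy without any rewrite rules:
        location / {
            proxy_pass http://127.0.0.1:81/;
            proxy_set_header Host $host;
        }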

  • Rsync over ssh: "ERROR: module is read only" suddenly appeared

    - by user978548
    I've used rsync over ssh for some time to back up my shared host's contents to my personal Synology NAS (a 212j, for that matter), and it worked quite well. For information, I use a passwordless ssh connection. 3 days ago I updated my NAS software, and since then (or at least I believe it's since then) the backup won't work anymore. I get the following error on the host:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        ERROR: module is read only

    ..which I do not understand. Besides that, nothing changed that I know of in either source or destination that could be related to rsync or ssh. I did check a few things and all seems to be alright:

        - I can still connect through ssh from the host to my NAS with the right user, so ssh stuff like keys hasn't changed.
        - I also have the correct file permissions on the NAS (I checked, and also tried to create files and directories with the user used by rsync through ssh).

    I read here and there that the error means I have to ensure my rsyncd.conf has the right "read only = no" in it, but as far as I know I never used rsyncd, nor did I ever configure anything for it, and until now it worked like a charm. I use the following command to do the backup:

        rsync -ab --recursive \
            --files-from="$FILES_FROM" \
            --backup-dir=backup_$SUFFIX \
            --delete \
            --filter='protect backup_*' \
            $WDIRECTORY/ \
            remote_backup:$REMOTE_BACKUP/

    So I'm stuck and really can't figure out what happened. Edit: As suggested in comments, I also tried passing commands to ssh (but not from inside an ssh session), which worked as expected, and also tried a single rsync command, which didn't work, failing just like the complete backup command:

        (sharedHost):hostuser:~ > touch test.txt
        (sharedHost):hostuser:~ > rsync test.txt remote_backup:backups/test.txt
        ERROR: module is read only
        rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.8]
        rsync: connection unexpectedly closed (9 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7]

    and

        (sharedHost):hostuser:~ > ssh remote_backup 'touch /abs_path_to_backups/backups/test2.txt && echo "ProoF" > /abs_path_to_backups/backups/test2.txt'
        (sharedHost):hostuser:~ > ssh remote_backup 'cat /abs_path_to_backups/backups/test2.txt'
        ProoF
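
    A hedged guess at the cause: "module is read only" is an rsync daemon message, so the DSM update may have started forcing ssh-invoked rsync through a daemon-style config. If that is what happened, the fix would be on the NAS side, in rsyncd.conf; the module name and path below are placeholders, and DSM may keep or regenerate this file elsewhere.

        # /etc/rsyncd.conf on the NAS (a sketch)
        [backups]
            path = /volume1/backups
            read only = no    # the implicit default "yes" produces exactly
                              # "ERROR: module is read only"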

  • set tap0 using virt-manager for bridged wireless

    - by DaveO
    After 3 days I finally have KVM guests working on the network via wireless (link below - thanks!). My network is 192.168.1.0/24. On the host:

        sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
        sudo tunctl -t tap0
        sudo ip link set tap0 up
        sudo ip addr add 192.168.1.25/24 dev tap0
        sudo route add -host 192.168.1.30 dev tap0
        sudo parprouted wlan0 tap0

    On the guest:

        auto eth0
        iface eth0 inet static
            address 192.168.1.30
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.1.255
            gateway 192.168.1.25

    And start the guest:

        sudo kvm /path/to/guest.img -net nic,macaddr=DE:AD:BE:EF:90:26 -net tap,ifname=tap0,script=no

    This works great and I can ping the local network and the internet back and forth from the guest. But how do I add these settings to the guest's XML config so I can start the guest via virt-manager with the same NIC settings? ref: http://www.linuxquestions.org/questions/debian-26/kvm-wireless-bridge-network-691953/
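
    A sketch of the equivalent libvirt XML (untested here; it assumes a libvirt version that accepts type='ethernet' with a pre-created tap device, and uses /bin/true as a no-op setup script since tap0 is configured outside libvirt):

        <interface type='ethernet'>
          <mac address='DE:AD:BE:EF:90:26'/>
          <target dev='tap0'/>
          <!-- no-op script: tunctl/parprouted already bring tap0 up -->
          <script path='/bin/true'/>
        </interface>

    This would go inside the <devices> section of the guest's definition (virsh edit guestname), replacing any existing <interface> block.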

  • ImageMagick and Ghostscript

    - by user114671
    I mainly do web design, but I host a few client sites on a CentOS 5 VPS. A new client has asked me to host their site and I've been given the following configuration requirement:

        Apache 2.2.3
        PHP 5.2.17
        MySQL 5.0.77
        ImageMagick 6.5.1-0 (not as an Apache module)
        Ghostscript 8.7

    Checking phpinfo() I have:

        Apache 2.2.3
        PHP 5.2.14
        MySQL 5.0.90

    I don't have ImageMagick or Ghostscript listed. I expect that my versions of PHP and MySQL are similar enough to work, but how do I get my server set up to work with this client's site as well?
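
    A starting point, as a sketch (package names are the stock CentOS 5 ones; the base-repo versions will not match 6.5.1-0 and 8.7 exactly, so pinning those may mean building from source or using a third-party repo):

        # install the command-line tools, not an Apache module:
        yum install ImageMagick ghostscript

        # confirm what actually landed:
        convert -version
        gs --version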

  • What emulator / VM software can I use to create a Win32-portable Linux Guest?

    - by Jotham
    Hi, I want to create a portable VM setup so that I can boot a Linux install regardless of which Windows XP / Windows 7 host machine I am on. I was looking at QEMU, but it doesn't appear to have a relatively safe win32 build. Other options like VirtualBox require a complete install on the host OS for performance reasons. I'm not so concerned about performance; I just want to run a few curses-based applications. My ideal end goal would be a memory stick of some size with a VM/emulator I can boot on most Windows XP / Windows 7 machines, giving access to my own curses-based applications (probably Arch Linux or Debian). Any help would be appreciated. Regards,
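
    For what it's worth, a sketch of the usual stick layout with an unofficial win32 QEMU build (directory names and the image file are placeholders, the binary name differs between builds, and whether a given build runs without installation on both XP and 7 would need testing):

        @echo off
        rem launch.bat at the root of the memory stick
        set DIR=%~dp0
        "%DIR%qemu\qemu-system-i386.exe" -m 256 -hda "%DIR%images\arch.img"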

  • Nginx Proxying to Multiple IP Addresses for CMS' Website Preview

    - by Matthew Borgman
    First-time poster, so bear with me. I'm relatively new to Nginx, but have managed to figure out what I've needed... until now. Nginx v1.0.15 is proxying to PHP-FPM v5.3.10, which is listening at http://127.0.0.1:9000. [Knock on wood] everything has been running smoothly in terms of hosting our CMS and many websites. Now, we've developed our CMS and configured Nginx such that each supported website has a preview URL (e.g. http://[WebsiteID].ourcms.com/) where the site can be, you guessed it, previewed in those situations where DNS doesn't yet resolve to our server, etc. Specifically, we use Nginx's Map module (http://wiki.nginx.org/HttpMapModule) and a regular expression in the server_name of the CMS' server{ } block to 1) look up a website's primary domain name from its preview URL and then 2) forward the request to the "matched" primary domain. The corresponding Nginx configuration:

        map $host $h {
            123.ourcms.com www.example1.com;
            456.ourcms.com www.example2.com;
            789.ourcms.com www.example3.com;
        }

    and

        server {
            listen [OurCMSIPAddress]:80;
            listen [OurCMSIPAddress]:443 ssl;
            root /var/www/ourcms.com;
            server_name ~^(.*)\.ourcms\.com$;
            ssl_certificate /etc/nginx/conf.d/ourcms.com.chained.crt;
            ssl_certificate_key /etc/nginx/conf.d/ourcms.com.key;

            location / {
                proxy_pass http://127.0.0.1/;
                proxy_set_header Host $h;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    (Note: I do realize that the regex in the server_name should be "tighter" for security reasons and match only the format of the website ID, i.e. a UUID in our case.) This configuration works for 99% of our sites... except those that have a dedicated IP address for an installed SSL certificate. A "502 Bad Gateway" is returned for these and I'm unsure as to why. This is how I think the current configuration works for any request that matches the regex (e.g. http://123.ourcms.com/): Nginx looks up the website's primary domain from the mapping and, as a result of the proxy_pass http://127.0.0.1 directive, passes the request back to Nginx itself; since the proxied request has a hostname corresponding to the website's primary domain name (via the proxy_set_header Host $h directive), Nginx handles the request as if it were a direct request for that hostname. Please correct me if I'm wrong in this understanding. Should I be proxying to those websites' dedicated IP addresses? I tried this, but it didn't seem to work. Is there a setting in the Proxy module that I'm missing? Thanks for the help. MB
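
    One possible direction, sketched under the assumption that the dedicated-IP sites never answer on 127.0.0.1 because their server blocks listen only on their own addresses (which would explain the 502): map the preview host to the backend address as well, and proxy to that variable. The second map and the IP are illustrative.

        map $host $backend {
            default         127.0.0.1;
            123.ourcms.com  203.0.113.10;   # site with a dedicated SSL IP
        }

        location / {
            proxy_pass http://$backend;
            proxy_set_header Host $h;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    Since the map values are IP literals, no resolver directive should be needed even though proxy_pass uses a variable; omitting the trailing slash keeps the original request URI intact.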

  • How to specify a different ip address in virtual box guest os

    - by Nrew
    I am using Windows 7 as the host and XP as the guest. I've already checked out this site: http://forums.virtualbox.org/viewtopic.php?f=3&t=17232 But the info is not complete. What do I need to set here so that the guest gets another IP address but can still connect to the internet? What I'm trying to accomplish is to try TeamViewer or CrossLoop between the host OS and the guest OS, because I only have one computer.
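
    A sketch of the usual way to give the guest its own address on the LAN: switch the VM's adapter from NAT to bridged and let the guest pick up DHCP from the router. The VM name and adapter name below are placeholders; "VBoxManage list bridgedifs" shows the real adapter names.

        VBoxManage modifyvm "XP" --nic1 bridged --bridgeadapter1 "Wireless Network Connection"

    The same setting is in the GUI under Settings > Network > Attached to: Bridged Adapter.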

  • WordPress 3.5 Multisite and nginx siteurl issues

    - by Florin Gogianu
    I'm setting up multisite on localhost in subdirectories. The problem is that when I try to access the dashboard of a site I just created (localhost/wptest/site/wp-admin) I get "This webpage has a redirect loop", and when I try to access the actual website (localhost/wptest/site) the page loads but without assets, such as CSS. When I access the network dashboard, or the primary site dashboard on localhost/wptest, everything is just fine. Also, when I edit the permalink of the second site in the network dashboard to be localhost/site, it also runs fine. How do I make it work with the default permalink structure, localhost/wptest/site? The WordPress files are in /usr/share/html/wptest. The wp-config.php is as follows:

        define('WP_ALLOW_MULTISITE', true);
        define('MULTISITE', true);
        define('SUBDOMAIN_INSTALL', false);
        define('DOMAIN_CURRENT_SITE', 'localhost');
        define('PATH_CURRENT_SITE', '/wptest/');
        define('SITE_ID_CURRENT_SITE', 1);
        define('BLOG_ID_CURRENT_SITE', 1);

    And the server block / virtual host is like this:

        server {
            ##DM - uncomment following line for domain mapping
            listen 80 default_server;
            #server_name example.com *.example.com ;
            ##DM - uncomment following line for domain mapping
            #server_name_in_redirect off;

            access_log /var/log/nginx/example.com.access.log;
            error_log /var/log/nginx/example.com.error.log;

            root /usr/share/nginx/html/wptest;
            index index.html index.htm index.php;

            if (!-e $request_filename) {
                rewrite /wp-admin$ $scheme://$host$uri/ permanent;
                rewrite ^(/[^/]+)?(/wp-.*) $2 last;
                rewrite ^(/[^/]+)?(/.*\.php) $2 last;
            }

            location / {
                try_files $uri $uri/ /index.php?$args ;
            }

            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }

            location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                access_log off;
                log_not_found off;
                expires max;
            }

            location = /robots.txt { access_log off; log_not_found off; }

            location ~ /\. { deny all; access_log off; log_not_found off; }
        }

    And finally here's an error log entry:

        2013/06/29 08:05:37 [error] 4056#0: *52 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 127.0.0.1, server: example.com, request: "GET /nginx HTTP/1.1", host: "localhost"
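
    Two hedged observations rather than a definitive fix. First, the question says the files live in /usr/share/html/wptest while the server block sets root /usr/share/nginx/html/wptest; if the configured root really doesn't contain index.php, every try_files fallback to /index.php fails and recurses, which is exactly the logged "internal redirection cycle". Second, some distros ship a fastcgi_params without SCRIPT_FILENAME, which PHP-FPM needs to find the script. A sketch of both adjustments:

        root /usr/share/html/wptest;   # assuming that is where the files are

        location ~ \.php$ {
            try_files $uri /index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }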

  • Configuring BIND to use VM's DNS for specific domain

    - by Srirangan
    I work on a project for which I use an Ubuntu server VM on an Ubuntu host. The VM runs all the services / webapps through haproxy and nginx and serves them on the domain (xyz.com). I manually modify my resolv.conf to use the VM's IP address as the nameserver, and then I can run my app in the host's browser. The problem is that I am modifying an auto-generated file (resolv.conf) and I need to do it each time. Is there a smart way to say:

        - are you accessing xyz.com?
        - if yes, use the VM's DNS server, else use the host's
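
    Since the title mentions BIND, a sketch of a conditional forward zone for named.conf on the host (the VM's address is a placeholder):

        zone "xyz.com" {
            type forward;
            forward only;
            forwarders { 192.168.122.10; };   // the VM's DNS server
        };

    With that in place, resolv.conf can point permanently at 127.0.0.1; queries for xyz.com go to the VM and everything else follows BIND's normal resolution. dnsmasq offers the same per-domain routing with a one-liner (server=/xyz.com/192.168.122.10) if running full BIND feels heavy.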

  • Virtual firewall to protect hypervisor

    - by manutenfruits
    I am running an Ubuntu Server 12.10 as a single host connected to a NATed router, which connects via PPPoE to an optical fiber modem. This server is meant to be accessed from the Internet, but also to be used from the LAN as SVN, MySQL and whatnot... The issue is that the router is not customizable enough for this, so I was thinking about creating a virtual pfSense firewall using KVM inside the server itself, removing the need for the router. Is this possible? Can the host ignore and block all traffic coming to itself, but not traffic for the firewall? I am aware this is not the most desirable environment; I accept suggestions based on budget!
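
    It is possible in principle; the usual trick is to give the physical NIC to a bridge on which the host itself holds no IP address, so only the pfSense guest terminates WAN traffic. A sketch for /etc/network/interfaces (interface names are placeholders, and the host would need a second, LAN-side interface or bridge for its own services):

        auto br0
        iface br0 inet manual      # "manual": the host has no address here
            bridge_ports eth0      # the WAN-facing NIC, bridged to the VM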

  • adding remote ssh printer as local printer

    - by guest
    I have SSH access to a remote host (FreeBSD) that has a printer set up. I do not have root access on that host or any other special user rights. Now I want to print directly from my laptop (Ubuntu 10.10) on that printer. The problem is that I don't know how to "import" the printer, as it needs authentication from my user account (print quota limitations). E-mailing myself the files I want to print, or scp'ing them over every time, is a pain; at the moment I pipe the PostScript output manually to an ssh command, but that's also a huge overhead. E.g. when I want to print a foo.pdf:

        pdftops '/path/to/foo.pdf' - | ssh user@remotehost 'lpr -P printername'

    So, does anyone know of a smooth way to shorten this procedure? Ideally I would just want to use a printer name instead of the whole ssh command.
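
    As a stopgap that at least shortens the typing, a shell function for ~/.bashrc (user, host and printer are the same placeholders as above):

        sshprint() {
            pdftops "$1" - | ssh user@remotehost 'lpr -P printername'
        }
        # usage: sshprint /path/to/foo.pdf

    For a real local printer entry, CUPS supports custom backends: a small script in /usr/lib/cups/backend/ that pipes its input through the same ssh command would let applications print to it by name, though writing one is more involved than the function above.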

  • Remote kill, upload, execute file

    - by Masoud M.
    I'm developing a program and I need to upload my xyz.exe file to many host machines and execute it frequently. I need a server-client tool that, after an update signal from my PC, does the following steps:

        1. Each host machine kills any running process named xyz.exe.
        2. Downloads my new xyz.exe.
        3. Executes the new xyz.exe.

    I know about tools like PsExec, but I need a tool with a better user interface and more power. Is there any tool to do this? UPDATE: The systems are in the same LAN, the OS is Windows (XP or 7), and no full remote access is needed. I'm a developer, my program should run on remote hosts, and I'm testing my application.
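
    Even if a friendlier tool turns up, the PsExec baseline is scriptable; a sketch (hosts.txt, the admin share path and C:\apps are placeholders, and admin rights on the targets are assumed):

        for /f %%H in (hosts.txt) do (
            psexec \\%%H taskkill /F /IM xyz.exe
            copy /Y xyz.exe \\%%H\C$\apps\xyz.exe
            psexec \\%%H -d C:\apps\xyz.exe
        )

    The -d flag makes PsExec return without waiting for xyz.exe to exit, so the loop moves on to the next host.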

  • mount dev, proc, sys in a chroot environment?

    - by Patrick
    I'm trying to create a Linux image with custom-picked packages. I followed the guide here: http://www.olpcnews.com/forum/index.php?topic=4766.0 However, when I tried to install some packages, configuration failed due to the missing proc, sys and dev directories. So I learned from other places that I need to "mount" the host's proc (and related) directories into my chroot environment. Though, I saw two syntaxes and am not sure which one to use. On the host machine:

        mount --bind /proc <chroot dir>/proc

    and another syntax (in the chroot environment):

        mount -t proc none /proc

    Which one should I use, and what is the difference? Edit: What I'm trying to do is hand-craft the packages I'm going to use on an XO laptop, because compiling packages takes a really long time on the real XO hardware. If I can build all the packages I need and just flash the image to the XO, I save time and space.
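
    Briefly: the bind mount re-exposes the host's existing /proc inside the chroot, while "mount -t proc" creates a fresh procfs instance; for proc and sys the two give the same view, but /dev has no filesystem type you can freshly mount the same way, so it is usually bind-mounted. A typical pre-chroot sequence, as a sketch (the chroot path is a placeholder):

        CHROOT=/srv/image
        mount -t proc  proc  "$CHROOT/proc"
        mount -t sysfs sysfs "$CHROOT/sys"
        mount --bind   /dev      "$CHROOT/dev"
        mount --bind   /dev/pts  "$CHROOT/dev/pts"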

  • Accessing Windows from Linux/Mac by name using TCP/IP

    - by stevekuo
    What are some solutions for reaching Windows machines by computer name from Linux and Mac using TCP/IP? That is, from a terminal I want to be able to ping my Windows PCs using their host names. My setup is:

        - Various machines running Ubuntu, Windows XP and OS X.
        - Networked using a consumer-grade wireless router which provides DHCP.
        - The only DNS is the ISP's, which resolves Internet names and not local host names.
        - The Windows machines can ping each other by name.
        - The Ubuntu and OS X machines can only ping Windows by IP address (name doesn't work).
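
    On the Ubuntu side, the usual route is NetBIOS name resolution via winbind, as a sketch (package naming varies a little between releases):

        sudo apt-get install winbind
        # then add "wins" to the hosts line of /etc/nsswitch.conf:
        # hosts: files dns wins

    After that, ping and other tools resolve the Windows names through NetBIOS broadcasts, with no local DNS server needed.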

  • I want to start my portfolio site using ASP.NET and I'm a bit lost about how to actually put it on the web

    - by Papuccino1
    I found this site: www.discountasp.net They seem cheap enough and have a track record, so I decided to host my site with them. Here's where I'm confused. I host the application (my website) with them and they give me an IP address, right? Users can then visit my site by typing in that IP address, right? (Once I move the index file and create a default web folder, etc.) The next step is buying a domain name, right? Like www.mysite.com? Is this the way it's done, or am I doing it wrong?
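
    Roughly, yes: the host gives you an address (or name servers), and the domain you buy gets pointed at it. A sketch of what the DNS records end up looking like (the IP is illustrative; the real one comes from the host's welcome e-mail or control panel):

        mysite.com.       A  203.0.113.80
        www.mysite.com.   A  203.0.113.80

    One caveat: with shared hosting, several sites often sit on one IP and the server picks yours by the requested host name, so visiting the bare IP may not show your site even once the domain works.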

  • Virtual environment firewall with CSF + iptables rules on VM?

    - by luison
    We are getting into virtualization with a Proxmox VE (OpenVZ + KVM) server. Our plan for the firewall is to have CSF (http://configserver.com/cp/csf.html) running on the host machine, as we've had a reasonably good experience with it in the past. Apart from that, we plan simple firewall rules on the VMs (mostly OpenVZ containers sharing the host kernel) and maybe a few specific fail2ban rules. I would appreciate comments from anyone with similar experiences. I understand all traffic comes via the host machine, so a combined firewall there with specific firewalling on the VMs should work, although some iptables rules are hard to get working in OpenVZ containers.
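
    On that last point: which iptables modules a container may use is controlled on the host, so rules failing inside a container are often just missing modules there. A sketch for /etc/vz/vz.conf (the exact module list is illustrative; containers need a restart afterwards):

        IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state"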

  • What's required to enable communication between two IP ranges located behind one switch?

    - by Eric3
    Within our co-located networking closet, we have control over two ranges of 254 addresses, e.g. 64.123.45.0/24 and 65.234.56.0/24. The problem is, if a host has only one IP address, or a block of addresses in only one range, it can't contact any of the addresses in the other subnet.

        - All of our hosts use our hosting provider's respective gateway, e.g. 64.123.45.1 or 65.234.56.1.
        - A host on the 64.123.45.0/24 range can contact the 65.234.56.1 gateway, and vice versa.
        - Everything in our closet is connected to an HP ProCurve 2810 (a Layer 2-only switch), which connects through a Juniper NetScreen-25 firewall to the outside world.

    What can I do to enable communication between the two ranges? Are there some settings I can change, or do I need better networking equipment?
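
    Since both ranges share the same layer-2 segment behind the ProCurve, one option that needs no new equipment is an on-link route on each host, as a sketch (Linux syntax; the mirror-image route is needed on hosts in the other range, otherwise their replies detour via the provider's gateway):

        # on a 64.123.45.0/24 host:
        ip route add 65.234.56.0/24 dev eth0
        # on a 65.234.56.0/24 host:
        ip route add 64.123.45.0/24 dev eth0

    Traffic then ARPs directly across the switch instead of hairpinning through the gateways.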

  • Sending emails with remote mail server in ASP.NET blocked by Windows firewall?

    - by Dave
    I want to migrate a web application from a Windows Server 2003 to a Windows Server 2008 R2. All works fine except sending emails from the application. If I configure the application to use the SMTP server on "localhost" it works, but after changing it to the "real" host name (e.g. mail.example.org) no mail is sent. The error message says that the remote server requires a secure connection or SMTP authentication. But since it works when using "localhost" instead of the host name, I doubt that this is the problem. It's also unlikely to be a problem with the mail server; I tried it with another one as well. So to me it seems like the firewall is blocking the outgoing connection to the mail server. I tried to open port 25, but it still did not work. Maybe I just did it the wrong way.
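
    Two quick checks, sketched (the rule name is arbitrary; the Telnet client may need enabling first via the Server Manager features dialog):

        netsh advfirewall firewall add rule name="Outbound SMTP" dir=out action=allow protocol=TCP remoteport=25
        telnet mail.example.org 25

    If telnet shows the server's 220 banner, the network path is open and the "secure connection or authentication" error is genuinely coming from the mail server, pointing at missing SMTP AUTH or TLS settings in the application rather than at the firewall.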

  • Varnish VCL not allowing two separate IP addresses as backends

    - by Peter Griffin
    Every time I attempt to add an extra backend to our VCL file, it fails. Here are the DAEMON_OPTS we are running with:

        DAEMON_OPTS="-a :80 \
                     -T localhost:6082 \
                     -f /etc/varnish/custom.vcl \
                     -u varnish -g varnish \
                     -S /etc/varnish/secret \
                     -s malloc,10G"

    And here are the offending backends:

        backend default {
            .host = "114.123.456.789";
            .port = "8080";
        }

        backend alt {
            .host = "203.123.456.789";
            .port = "80";
        }

    Any ideas? My gut feeling is that the backends need to be selected somewhere, but I'm not sure where.
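
    Two hedged observations: several Varnish versions reject a VCL that declares a backend without ever using it, which matches "adding it makes it fail"; and if "114.123.456.789" / "203.123.456.789" are not anonymized placeholders, they are not valid IPv4 addresses (octets above 255), which also breaks compilation. A selection sketch for vcl_recv (VCL 2.x/3.x syntax; the host pattern is a placeholder):

        sub vcl_recv {
            if (req.http.host ~ "(?i)example2\.com$") {
                set req.backend = alt;
            } else {
                set req.backend = default;
            }
        }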

  • DNS: domain2 points to domain1

    - by Yar
    I have one domain ("domain1") that is set up with hosting and mail (hosted by Google Apps). This domain works perfectly. I want a second domain ("domain2") to forward to domain1, but I don't want to use "DNS Forwarding." I would like to have it act EXACTLY like domain1, so that domain2/whatever points to the same resource as domain1/whatever WITHOUT AN HTTP REDIRECT NOR BROWSER TRICKS LIKE FRAMES. I would also love to be able to send mail to "blah@domain2" and have it go to "blah@domain1". Can this be set up, and how? I am using GoDaddy as registrar and DNS host for both domains. GoDaddy is also the web host for domain1, and mail hosting is with Google Apps.
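
    In outline (hedged, since control-panel details shift): point domain2's DNS at the same web server and the same Google mail servers, then tell each service to treat domain2 as an alias. A sketch of domain2's zone (the A-record IP is whatever domain1 currently resolves to):

        @     A      203.0.113.80
        www   A      203.0.113.80
        @     MX 10  aspmx.l.google.com.
        @     MX 20  alt1.aspmx.l.google.com.

    DNS alone isn't enough, though: the web host must accept domain2's host header for domain1's site (an alias/parked-domain setting at GoDaddy), and in Google Apps domain2 should be added as a "domain alias" so blah@domain2 lands in blah@domain1's mailbox.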

  • How to use public-key ssh authentication

    - by Poma
    I have 2 Ubuntu 12.04 (beta) servers (node1 and node2) and want to establish passwordless root access between them. Other users should not have access to the other boxes. Also note that the ssh default port is changed to 220. Here's what I did:

        sudo -i
        cd /root/.ssh
        ssh-keygen -t rsa    # with default name and empty password
        cat id_rsa.pub > authorized_keys

    Then I copied id_rsa and id_rsa.pub to node2 and added id_rsa.pub to authorized_keys there. Both hosts have the same /root/.ssh/config file:

        Host node1
            Hostname 1.2.3.4
            Port 220
            IdentityFile /root/.ssh/id_rsa

        Host node2
            Hostname 5.6.7.8
            Port 220
            IdentityFile /root/.ssh/id_rsa

    Now the problem is that when I type "ssh node2" it asks me for a password. What may be the problem?
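
    A sketch of the usual checklist for this symptom (stock Ubuntu openssh; nothing here is specific to port 220):

        ssh -v node2                          # shows whether publickey is even attempted
        # on node2 - sshd silently ignores keys if these are too open:
        chmod 700 /root/.ssh
        chmod 600 /root/.ssh/authorized_keys
        # on node2 - root key logins must be permitted:
        grep PermitRootLogin /etc/ssh/sshd_config
        # expect "yes" or "without-password"; reload sshd after changing it

    If -v shows "Offering public key" followed by a fallback to password, the server-side permissions or the PermitRootLogin setting are the prime suspects.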

  • adding a route entry to linux routing table

    - by netg
    Hi, I have two systems, say A with IP address 64.103.56.1 (device wlan0) and B with 64.103.225.18. What I want is that every time I ping B from system A, the traffic is routed via a router, say C, with address 10.0.0.251 (I want this to be my next hop to reach B). But this router is on a different subnetwork than the two systems. How do I do this? Things I tried: I used

        route add -host B gw C wlan0

    and got an error saying "no such process exists or no such device found". I ran ping and traceroute to C and found the gateway address on my side is some 63.103.236.3 (D), so I added another entry:

        route add -host C gw D wlan0

    I was able to do this without any error!
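
    The first error is expected: the kernel refuses a gateway that is not on-link. A sketch of the standard workaround (same addresses as above; it assumes C really is reachable at layer 2 from wlan0, otherwise the next hop can never be ARPed):

        # make the router an on-link host first...
        route add -host 10.0.0.251 dev wlan0
        # ...then route B through it:
        route add -host 64.103.225.18 gw 10.0.0.251

        # iproute2 equivalent in one command:
        ip route add 64.103.225.18/32 via 10.0.0.251 dev wlan0 onlink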

  • ldirectord refusing connection when nginx redirects from http to https

    - by Adam
    I am running ldirectord as a load balancer in front of an nginx server. If I set up a redirect from http to https and connect directly to the nginx server, all is well. Connecting via ldirectord causes my connection to be refused. I can connect normally via http or https through ldirectord when I don't have the redirect in place. To add to my confusion, if my application issues the redirect from http to https, it works. I am testing this via curl on the command line (curl: (7) couldn't connect to host, versus a response). I am using the standard ldirectord config (http://www.ultramonkey.org/3/topologies/config/lb/non-fwmark/linux-director/ldirectord.cf), the http and https parts. My nginx config for the redirect is simply:

        location / {
            rewrite ^(.*) https://$host$1 permanent;
        }
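
    A hedged reading of the symptom: ldirectord's HTTP health check expects its request/receive match to succeed, and a blanket 301 makes that check fail, so the real server gets pulled from the pool and new connections are refused. One sketch of a fix is to exempt the check URL from the redirect:

        # nginx: serve the health-check file directly, redirect everything else
        location = /ldirector.html {
            root /var/www/health;
        }
        location / {
            rewrite ^(.*) https://$host$1 permanent;
        }

    with matching request/receive lines in ldirectord.cf; the file name here mirrors the sample config's convention, so adjust it to whatever the config actually checks.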

  • Can we put random URL entries on DNS

    - by ring bearer
    Using Microsoft DNS. All/most of our local hosts (within the company) are in the following domain:

        *.company.org

    So a host name looks like mymachine001.company.org. Is it possible to set up wildcard DNS entries of the form

        *.subd.company.com

    Note: the name ends with .com, while all other hosts ever set up in this DNS were of the format *.company.org. What I am trying to achieve is the following: a user within the internal network types the URL http://someprefix.subd.company.com in a browser and hits enter. Since there is a wildcard entry in the DNS, the user gets routed to the host mapped to *.subd.company.com in the DNS. Note: at the same time, company.com has a public DNS entry, and that is mapped to a physical IP in some other network (a data center).
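
    This should work: an internal DNS server can host a zone it is not publicly authoritative for (split-horizon). A sketch with dnscmd (zone name as above, IP illustrative; the MMC equivalent is "New Zone..." plus a host record literally named *). Only subd.company.com is overridden internally; other company.com names still resolve via the public DNS.

        dnscmd /zoneadd subd.company.com /primary /file subd.company.com.dns
        dnscmd /recordadd subd.company.com * A 10.20.30.40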

  • CLI-Based monitoring tool for KVM

    - by Pinnacle
    I am developing a scheduler for running VMs on KVM. The scheduling over-commits resources like memory and CPU. For this, I need a CLI-based monitoring tool that keeps giving me information about the resource usage of each VM, because it might be the case that, due to over-provisioning of resources, VMs on a particular host are running very slowly, depending on the benchmarks/programs each VM is running, and then I need to migrate a VM to another host, and so on. I looked into libvirt-based tools like collectd, Munin, Nagios-virt, etc. (http://libvirt.org/apps.html#monitoring). I also looked into the Ubuntu utility perf-kvm (http://manpages.ubuntu.com/manpages/maverick/man1/perf-kvm.1.html). I want to ask which CLI-based tool the community would recommend, so that I can make an automated scheduler that takes care of the above situation.
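
    For a scheduler that polls, the plain libvirt tooling may already be enough; a sketch (the domain name "vm1" is a placeholder):

        virt-top -b -n 1       # one batch-mode sample of all domains, script-friendly
        virsh dominfo vm1      # CPU time, current/max memory for one domain
        virsh cpu-stats vm1    # per-vCPU usage

    Batch-mode virt-top output is line-oriented, which makes it easy to parse from the scheduler instead of scraping an interactive display.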
