Search Results

Search found 12484 results on 500 pages for 'host'.


  • How to make lighttpd respect X-Forwarded-Proto when constructing redirects for directories?

    - by Tim Landscheidt
    We have an nginx proxy at tools.wmflabs.org that receives requests by http and https and passes them by http on to lighttpds on a grid (one lighttpd per top-level path). Requests that reach the proxy by https are received by the lighttpds like this:

        HEAD /lighttpd-test/test HTTP/1.1
        Connection: close
        Host: tools.wmflabs.org
        X-Forwarded-Proto: https
        X-Original-URI: /lighttpd-test/test
        User-Agent: curl/7.29.0
        Accept: */*

    This works great except in the case where the URL references a physical directory and misses the trailing slash ("/"), as lighttpd then generates a redirect to the http URL:

        HTTP/1.1 301 Moved Permanently
        Location: http://tools.wmflabs.org/lighttpd-test/test/
        Connection: close
        Date: Fri, 06 Jun 2014 14:50:29 GMT
        Server: lighttpd/1.4.28

    The relevant parts of our lighttpd configurations are:

        server.modules = ( "mod_setenv", "mod_access", "mod_accesslog", "mod_alias", "mod_compress", "mod_redirect", "mod_rewrite", "mod_fastcgi", "mod_cgi", )
        server.port = $port
        [...]
        server.document-root = "$home/public_html"
        [...]
        server.follow-symlink = "enable"
        [...]
        server.stat-cache-engine = "fam"
        ssl.engine = "disable"
        alias.url = ( "/$tool" => "$home/public_html/" )
        index-file.names = ( "index.php", "index.html", "index.htm" )
        dir-listing.encoding = "utf-8"
        server.dir-listing = "disable"
        url.access-deny = ( "~", ".inc" )
        [...]

    How can I make lighttpd respect X-Forwarded-Proto and use it when constructing redirects for directories? I'm aware that I could try to tackle this in nginx, but I'd prefer if I can fix it in lighttpd.
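
    A possible direction, if fixing it on the nginx side turns out to be acceptable after all: nginx's proxy_redirect directive can rewrite the Location header that lighttpd sends back, so the http:// redirect never reaches the client. A minimal sketch (the upstream name is illustrative, not from the question):

        # inside the https server block that proxies to the lighttpd grid
        location / {
            proxy_pass         http://lighttpd_backend;
            proxy_set_header   Host              $host;
            proxy_set_header   X-Forwarded-Proto https;
            # rewrite redirects that come back pointing at http://
            proxy_redirect     http://tools.wmflabs.org/ https://tools.wmflabs.org/;
        }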

  • Is my current htaccess setting hurting SEO?

    - by user656002
    I have a site that I have redirecting to https. I do this to leverage wildcard SSL for my password protected pages. Everything seems to work fine with testing. For example, whether you type in http or www, you always get redirected to the SSL https... That said, I have about 200-300 external backlinks -- many high quality, yet Google Webmaster Tools (along with SEOmoz) shows I have just 4... Huh? I'm embarrassed to say I just discovered this. This has led me to hypothesize that maybe my settings in htaccess are messed up, so Google isn't recognizing a link because it's recorded on another site as http instead of https. Maybe? At any rate, here is my simple htaccess setting for the 301 from www to http (the https redirect must be done inside the virtual host file -- I think). I don't have anything in the htaccess file for https:

        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

    Like I said, everything works fine for the redirect over https, so I'd rather not screw up what works. On the other hand, something is very wrong with Google finding all my backlinks, so I need to fix something... I'm just wondering whether Google isn't picking up my backlinks from other websites that record me as http because I'm at https. Maybe Google doesn't care and it's some other issue. Am I barking up the right tree? If so, any quick fixes? Thanks as always!
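
    If the goal is a single canonical https://example.com, one hedged sketch is to fold both the www and the plain-http cases into one 301 in .htaccess (assuming mod_rewrite is active and the vhost terminates SSL; example.com stands in for the real domain), so every backlink variant ends up at the same URL search engines see:

        RewriteEngine On
        RewriteCond %{HTTPS} off [OR]
        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]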

  • please explain these mongo statistics

    - by sivann
    My setup: I have 2 hosts with 2 shards each. Host1 holds 2 shards and is the master of the replica sets; host2 holds the secondaries of the 2 shards:

        host1: shard1 (repset1), shard2 (repset2)
        host2: shard1 (repset1), shard2 (repset2)

    There's also a 3rd host that acts as arbiter. I have 50 threads writing randomly to both shards (using a hash) via mongos, with the REPLICA_SAFE WriteConcern set on each insert. The questions: mongostat displays about 90% locked for both shards on host1 and about 1% locked on host2. Since I use REPLICA_SAFE, which supposedly writes to both servers, shouldn't the locks be the same? mongostat reports qr=30 for both shards of host1, and qw=0 always. Since I perform only writes, how is this possible? Moreover, on host2 all queues are reported as 0. Faults are about the same in all shards/hosts (around 80). netIn/netOut on the secondaries (host2) are always about 200 bytes/sec. Too low. mongos has 53 connections, host1's shards have 71 and 71, and host2's shards have 9 and 8. How is this? Please answer whatever you can. Thanks!

  • No Network Connection in WinXP image from Microsoft running on VirtualBox 3.1.6 OSE (Ubuntu 10.04) due to missing CD Rom

    - by Bevor
    I'd like to test local websites in IE7 and IE8. To do that I thought about using the free Microsoft images: http://www.microsoft.com/windowsxp/using/networking/setup/default.mspx I converted the VHDs to VDIs to make them run in VirtualBox ( http://www.qc4blog.com/?p=721 ). This works fine. The problem is that in this Windows XP installation there is no network adapter configured. Actually nothing at all is configured, because configuring it needs the Windows XP CD-ROM. If I had a Windows XP CD-ROM, I would not need to run the Microsoft image, so is there some kind of workaround to get an internet connection? Meanwhile I set "bridged" in VirtualBox, but this doesn't help because "ipconfig /all" in the guest system doesn't show any data, since nothing is configured. How can I get a connection to my local Apache (host system)? http://localhost would be enough. By the way: I can't install the "Guest Additions". When I do that, the 3-day trial period of the guest system is suddenly gone, so I can't use it anymore, which makes it pointless. Any ideas? Update: I've tried the Vista image and it gets an internet connection. From the Vista image I can get to my site with 192.168.1.3/mywebsite in the browser URL. So I don't really care about the WinXP issue anymore, but I would be glad if anyone still knows a solution.

  • Passenger and ServerAlias not cooperating

    - by Pyzo
    I have a Ruby application that runs on a server with multiple IP addresses and multiple vhosts. Here is the configuration of the problematic virtual host:

        <VirtualHost 10.0.0.10:80>
            ServerName realname.example.com
            ServerAlias alias.example.com
            DocumentRoot /var/www/sites/example/current/public
            <Directory /var/www/sites/example/current/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
            ErrorLog /var/log/httpd/example_error_log
            CustomLog /var/log/httpd/example_access_log common
            RailsEnv production
            RackEnv production
        </VirtualHost>

    When I pull up realname.example.com, the Ruby on Rails application works correctly. On the other hand, alias.example.com just gives me "Not Found: /". I'm fairly certain the correct vhost is getting used, because alias.example.com produces a 404 in the correct log file. I've tried adding logging to the Passenger config and it seems to indicate that Passenger is getting the request. Note: I can't redirect alias.example.com to realname.example.com; realname is accessed through a CDN, whereas alias is accessed directly. Anyone have any ideas why this isn't working? I've been banging my head for days and I've got a similar configuration in QA that works as expected.

  • Recovering a broken NTFS filesystem?

    - by OverTheRainbow
    A much-needed Windows Update broke a Vista laptop that was running fine until then: after booting up, Windows displays "Please wait..." but it never goes anywhere. I waited for a couple of hours; there was a bit of disk activity, but it didn't work out in the end. I booted with the Vista DVD and chose "Repair your computer", which said that there was nothing wrong :-/ Next, I booted it up with a Linux USB keydrive and ran GParted 0.8.1 (which includes ntfsresize v2011.4.12AR.4 libntfs-3g), which displayed a bunch of warnings for the NTFS partition where the Vista system is located, such as:

        ntfs_mst_post_read_fixup: magic: 0x00000000 size: 1024 usa_ofs: 0 usa_count: 65535: Invalid argument
        Record 16 has no FILE magic (0x0)

    Next, I ran ntfsfix /dev/sda2, which said:

        Mounting volume... OK
        Processing of $MFT and $MFTMirr completed successfully.
        NTFS volume version is 3.1.
        NTFS partition /dev/sda2 was processed successfully.

    Next, I rebooted Vista, which did a CHKDSK before rebooting. But I'm still getting nowhere with "Please wait..." Before I copy the user's data to another host and reinstall Vista from a DVD, does someone know what I could try? Thank you. Edit: In case someone else has the same issue... After the BIOS, hit F8 and choose "Repair your computer", followed by "Toshiba HDD Recovery". In addition to a 1.5 GB partition labelled "WinRE", the hard disk contains a second partition labeled "Data" from which the application will fetch a system image and reinstall it in the "Vista" partition. Make sure you copy your data out of the system partition before doing this.
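
    Since the partition still mounts, one low-risk way to get the user's data off before any reinstall is to mount it read-only from the Linux USB stick and copy the profiles out. A sketch (device follows the question's /dev/sda2; the mount point and USB target path are illustrative):

        sudo mkdir -p /mnt/vista
        sudo ntfs-3g -o ro /dev/sda2 /mnt/vista        # read-only, so nothing on the damaged volume changes
        rsync -a /mnt/vista/Users/ /media/usbdisk/vista-users-backup/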

  • PHP Web Server Solution (Apache/IIS)

    - by njk
    I apologize if this is too broad or belongs on Super User (please vote to move if it does). I'm in the process of creating requirements for an internal PHP web server to submit to our architecture team, and I would like some insight into whether to use a Windows or *nix platform and what applications would be required. The server will host a small PHP application that will be connecting to SQL Server. The application will need to send mail. We would also like to incorporate an FTP server to allow files to be dropped in. From what I've read regarding a Windows platform using IIS, it seems as though IIS would only be advantageous if using a .NET or ASP application. Does IIS have mail functionality? Or how is mail traditionally configured (esp. on *nix)? Also, does IIS have directory configuration functionality like Apache does with .htaccess? Candidates so far:

        For a Windows-based solution: IIS (comes with FTP) or Apache (has the mod_ftp module)
        For a *nix-based solution: Apache
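
    On the mail point: neither IIS nor Apache sends mail itself; PHP's mail() hands the message to whatever php.ini points at. A hedged sketch of the two usual arrangements (hostnames and addresses are placeholders):

        ; Windows php.ini -- mail() relays to an SMTP host (IIS's optional SMTP service or any internal relay)
        SMTP = smtp.internal.example.com
        smtp_port = 25
        sendmail_from = webapp@example.com

        ; *nix php.ini -- mail() pipes to a local MTA such as Postfix or Sendmail
        sendmail_path = /usr/sbin/sendmail -t -i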

  • Static DHCP binding

    - by Alex
    Good time of day, SF people. I have created a manual DHCP binding entry on a Cisco router so that a client always gets the same lease. The client wants to get the same address on both of his dual-boot Linux systems. He tries to get an IP address leased and succeeds on one of the dual-boot operating systems; when he reboots into the other one, he gets a lease for a completely different address. I don't get it. The MAC addresses are the same (we checked with ifconfig), so what could be happening here? Why is the router confused? Or is it something else? Also, how can I check (on Linux) the IP address of the DHCP server I got my lease from? Configuration on Cisco:

        ip dhcp pool MANUAL_BINDING0001
            host 192.168.0.64 255.255.255.0
            hardware-address dead.beef.1337
            dns-server 192.168.8.11
            default-router 192.168.0.254
            domain-name verynicedomainigothere.cn

    PS. Is it mandatory to use the client-name configuration line?
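
    On the side question: on most Linux systems dhclient records the answering server in its lease file, so something like the sketch below shows which DHCP server handed out the address (the lease file path varies by distribution, e.g. /var/lib/dhclient/ on Red Hat derivatives). It may also be worth comparing whether the two installs send a DHCP client identifier, since IOS bindings keyed on hardware-address are commonly reported to be missed when the client presents a client-id (the usual remedy cited is a client-identifier line of 01 followed by the MAC).

        grep dhcp-server-identifier /var/lib/dhcp/dhclient.leases
        # or watch a fresh exchange verbosely:
        sudo dhclient -v eth0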

  • Hyper-V snapshots – unable to start VM

    - by ahmedz
    I restarted my host server after shutting down three guest VMs. After I restarted the machine, I tried to start the VMs and got an error stating that the VM failed to start:

        SERVERNAME failed to start.
        Attachment 'avhd file path' is read only. Please provide read/write access to the attachment.
        Error: 'General access denied error'
        SERVERNAME failed to start. (virtual machine ID 17292200-wd22-dd22-d23-dddddd2222)

    The issue seems to be with disk space. The VHD file for this VM is 128 GB and there are two AVHD files of 58 and 75 GB, whereas the total disk space on this drive (E:) is 280 GB -- the free space is only around 23 GB. I understand that the error is caused by the unavailability of the required disk space. Unfortunately, I cannot increase the disk space on this drive. However, I have another drive (D:) that has 400 GB of free space. I exported this VM to the D: drive and then tried to add the copied AVHD files, but it gives me a similar error. I am running Windows Server 2008 R2 Datacenter. Any help is appreciated.
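
    Worth noting that "General access denied error" on a .vhd/.avhd is usually an ACL problem rather than a space problem: each Hyper-V VM runs under a virtual machine account named after its GUID, and that account needs rights on every file in the disk chain. A hedged check/fix sketch (the file path is a placeholder; the GUID comes from the error message above); it would not free any space on E:, but it is a quick way to rule the permissions in or out before moving the chain to D::

        icacls "E:\VMs\SERVERNAME.avhd"
        icacls "E:\VMs\SERVERNAME.avhd" /grant "NT VIRTUAL MACHINE\17292200-wd22-dd22-d23-dddddd2222":(F)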

  • How can I ensure that my static ip address is read from /etc/network/interfaces rather than dhcp?

    - by jonderry
    This is a follow up to the following question. I'm trying to set a static IP by changing /etc/network/interfaces to the following:

        # interfaces(5) file used by ifup(8) and ifdown(8)
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.2.133
            netmask 255.255.255.0
            gateway 192.168.2.1
            dns-nameservers 8.8.8.8

    and then running /sbin/ifdown eth0; /sbin/ifup eth0. However, the change in IP address doesn't appear to take effect without editing /etc/dhcp/dhclient.conf and commenting out the following line before running ifdown; ifup:

        request subnet-mask, broadcast-address, time-offset, routers, domain-name, domain-name-servers, domain-search, host-name, dhcp6.name-servers, dhcp6.domain-search, netbios-name-servers, netbios-scope, interface-mtu, rfc3442-classless-static-routes, ntp-servers, dhcp6.fqdn, dhcp6.sntp-servers;

    Strangely, after commenting out this line, running ifdown; ifup works, but when I uncomment it, the behavior does not revert to the previous one of ignoring changes to my settings in /etc/network/interfaces (this doesn't seem like a problem, but I really need to be able to reproduce the problem so that I can be confident that my solution is robust). Also, I'd rather not have to edit /etc/dhcp/dhclient.conf to change my static IP, since it seems I should be able to do this by editing only interfaces. Can anyone explain the issues I'm seeing above, and suggest a way of making static IP changes take effect that is reproducible, so that I can be sure my approach works?
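
    One hedged explanation: if a dhclient instance is still running for eth0 (left over from an earlier boot or from when the stanza said dhcp), it keeps re-applying its lease over whatever ifup configures, which would make the behavior look dependent on dhclient.conf. A sketch of clearing it out before switching to the static stanza:

        sudo dhclient -r eth0          # release the lease and stop the dhclient bound to eth0
        sudo ip addr flush dev eth0    # drop any address it left behind
        sudo ifup eth0                 # re-read /etc/network/interfaces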

  • ProxyPass for specific vhost

    - by Steve Robbins
    I have a web server that is set up to dynamically serve different document roots for different domains:

        <VirtualHost *:80>
          <IfModule mod_rewrite.c>
            # Stage sites :: www.[document root].server.company.com => /home/www/[document root]
            RewriteCond %{HTTP_HOST} ^www\.[^.]+\.server\.company\.com$
            RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
            RewriteRule ^www\.([^.]+)\.server\.company\.com(.*) /home/www/$1/$2 [L]
          </IfModule>
        </VirtualHost>

    This makes it so that www.foo.server.company.com will serve the document root server.company.com:/home/www/foo/. For one of these sites, I need to add a ProxyPass, but I only want it applied to that one site. I tried something like:

        <VirtualHost *:80>
          <Directory /home/www/foo>
            UseCanonicalName Off
            ProxyPreserveHost On
            ProxyRequests Off
            ProxyPass /services http://www-test.foo.com/services
            ProxyPassReverse /services http://www-test.foo.com/services
          </Directory>
        </VirtualHost>

    But then I get these errors:

        ProxyPreserveHost not allowed here
        ProxyPass|ProxyPassMatch can not have a path when defined in a location.

    How can I set up a ProxyPass for a single virtual host?
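
    A hedged sketch of one way around those errors: give the site that needs proxying its own name-based vhost (matched by ServerName ahead of the catch-all rewrite vhost) and put the proxy directives where they are allowed -- ProxyPreserveHost is only valid in server or virtual-host context, and ProxyPass inside a <Location> must not repeat the path:

        <VirtualHost *:80>
            ServerName www.foo.server.company.com
            DocumentRoot /home/www/foo
            UseCanonicalName Off
            ProxyRequests Off
            ProxyPreserveHost On
            <Location /services>
                ProxyPass        http://www-test.foo.com/services
                ProxyPassReverse http://www-test.foo.com/services
            </Location>
        </VirtualHost>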

  • Why can't I connect to my home SSH (SFTP) server? What am I doing wrong?

    - by Rolo
    I am new to this topic of creating an SFTP server on one's computer. I would like to be able to access a folder on my Windows XP computer via SFTP from another computer or a phone. The following is what I have done so far: I have installed SSH Windows and everything is set up correctly, because I can access it (the folder on my PC) via WinSCP. I however cannot access it from my phone; it doesn't connect. The phone can be on the same wireless network as the Windows XP computer, but I would prefer to be able to access this when not on the same network. Now, from what I have read and understood, the following is the information needed to connect:

        1) Host Name: This would be my computer's IP address, which I get by typing ipconfig in a cmd prompt (I access this easily on my computer because I simply put in localhost or 127.0.0.1).
        2) Port Number: That would be port 22 (I have also added this to my router in the port forwarding section).
        3) Username: This would be my Windows XP username. This however is my full name, including my middle initial followed by a period. I am wondering if this is maybe causing problems in accessing it from my phone, since the name has spaces and punctuation (the period).
        4) Password: The password of my Windows XP computer.

    Extra info: When I say phone, I mean an Android phone, and I am using an ftp/sftp app to access my PC via the phone's cellular network (I also tried wireless, but that didn't work either). I have tried more than one program. One program tells me "Connection timed out" and another tells me "timeout: socket is not established". Also, I know that I can use the site noip, but I prefer to connect this way first. Also, because I am new to this, I would like to look into what exactly noip is doing and whether they would be seeing my files as they are transferred from phone to PC. Thanking you in advance for your help.

  • RDP with multiple monitors, display preferences get reset?

    - by Martijn Kooij
    Problem: When I connect to my PC at the office via RDP, all the application windows I had previously carefully placed on either monitor 1 or 2 are "scrambled": either all applications show on monitor 1 and monitor 2 is empty, or they have switched places (1 <-> 2). Expected behaviour: When I connect, I see all the application windows in exactly the same position and at the exact same size as I left them the night before. I have the exact same monitors at home as I have at work: primary 2560x1440, secondary 900x1440. Yesterday I tried switching the physical cables on the host machine, hoping that the hardware order of the monitors was the difference. But this morning my secondary monitor was completely blank, not even the taskbar (which I had set to show ONLY on the secondary). Somewhere there must be something to help Windows understand which physical monitor corresponds to which virtual RDP monitor and to which RDP "server" monitor... Are there more options than switching the cables? This one has been bothering me for a long, long time now; I hope someone has a solution or workaround for me. Edit: I want to use both monitors, so I have checked the "Use all monitors" setting in the RDP client. For example, I leave my mail and Total Commander on the right monitor, and Visual Studio and Firefox on the left monitor. When I connect via RDP I want to see those applications in the same positions and sizes.

  • solution for an offline server

    - by dashmug
    I'm trying to setup a development server at work that will ideally be able to test drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then I'll put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work and doing the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always get their packages from online repositories somewhere. I know I could probably download the source and then compile them myself but that's going to be too much of a hassle for me. I'm thinking about two solutions:

    Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough though I haven't tried it yet. I'm not sure if this is possible but can I simply do apt-get install nginx --downloadonly so that the package and its dependencies will be downloaded into my VM and my server can use the VM as the source repo for apt-get?

    Plan B: Run a server VM on my Mac (which I can setup/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can simply copy the VM over. I think this is okay for the first-time setup but subsequent updates will take too long (cloning the VM image).

    If I was working on Windows, I imagine it'd be easier because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution which is apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
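
    On the Plan A detail: the actual apt-get spelling is --download-only (or -d), and the fetched packages land in apt's cache, from where they can be copied or served to the offline box. A rough sketch, with the caveat that --download-only only fetches dependencies the VM itself is missing (one reason tools like apt-proxy/apt-cacher exist):

        apt-get install --download-only nginx      # downloads nginx plus missing dependencies, installs nothing
        ls /var/cache/apt/archives/*.deb           # the .deb files land here
        # copy the .debs to the offline server and install them there:
        dpkg -i *.deb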

  • How to "open" existing VMs in Hyper-V without importing them?

    - by Borek
    I had a PC with two physical disks: C:, containing the host operating system, and D:, containing a folder D:\VMs where all my virtual machines were stored. Now the C: disk died. I bought a new one, reinstalled Windows on it, enabled the Hyper-V feature, and now I just need to open the VMs from the D:\VMs folder. However, I don't seem to be able to find a menu item or anything that would allow me to do that -- the only thing I see is the "import" command, which unfortunately requires the VMs to have been explicitly exported (my machines weren't). I firmly believe that when I have all the files constituting a VM (the VHD file, some XML files describing the settings, etc.) it must somehow be possible to just "open" these existing VMs in Hyper-V, right? What command am I missing? Edit: I know I can create a blank virtual machine and then just point it to use an existing VHD. However, I am not sure about all the different settings I've made to those VMs, so I hope there's a way to simply open the existing VMs instead of recreating them.

  • How to make my Ubuntu an internet gateway for my Android phone

    - by yacine
    I want to use my school's internet on my Android; the problem is they have a Squid proxy, and many applications on my phone don't use the proxy at all. The obvious solution is to install a transparent proxy on my Android to force all applications to connect through it. The problem is that I would need to root the phone to make that work, and I don't want to, because it's not really my phone and rooting is a little risky. Another solution, which is safer, is to make my computer run as a gateway, so I put my Ubuntu machine's IP in the gateway parameter of the phone. I'm running a small proxy on my Ubuntu box (cntlm), so I redirect the Android traffic to it. I did it with iptables as follows:

        iptables -t nat -A PREROUTING -s 10.0.1.118 -p tcp -j REDIRECT --to-ports 8888
        iptables -t nat -A PREROUTING -s 10.0.1.118 -p udp -j REDIRECT --to-ports 8888

    10.0.1.118 is the IP of the phone, 8888 is the port of cntlm (the proxy on my PC). Now, on the phone: when I enter www.google.com in the browser I get nothing (web site not found, Firefox's error message). But when I enter http://74.125.143.101 (an IP of Google) I get an error message from the school proxy (so it worked in some way -- my PC redirected the phone's traffic to the Squid proxy). The error message is:

        The requested URL could not be retrieved
        while trying to process the request
        GET / HTTP/1.1
        Host: 74.125.143.101
        User-Agent: ...
        ...

    I think the problem is in the "GET" header; it should be GET 74.125.143.101 HTTP/1.1. But I don't understand what's happening, and I'm a certified CCNA.
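
    If the hypothesis about the request line is right, the underlying issue is that cntlm is an ordinary HTTP proxy and expects proxy-style requests (an absolute URI in the GET line), while iptables REDIRECT hands it raw origin-form port-80 traffic. A transparent-to-proxy shim such as redsocks in its http-relay mode is the usual bridge. The sketch below is from memory of redsocks' config format -- treat the option names and values as assumptions to verify against its documentation -- with the phone's TCP port 80 then redirected to redsocks instead of straight to cntlm:

        base {
            log_info = on;
            daemon = on;
            redirector = iptables;
        }
        redsocks {
            local_ip = 0.0.0.0;    # must be reachable from the redirected traffic
            local_port = 12345;
            ip = 127.0.0.1;        # cntlm
            port = 8888;
            type = http-relay;     # rewrites plain GETs into proxy-style requests
        }

        # then redirect the phone's web traffic to redsocks' local port:
        iptables -t nat -A PREROUTING -s 10.0.1.118 -p tcp --dport 80 -j REDIRECT --to-ports 12345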

  • My server can't resolve domains?

    - by Nuker
    I am on a VPS that is pretty much unmanaged, so it means I'm on my own. I did my best to configure it so I can host my own site for other people to see it online, but it seems I have network problems, because in the last days many of my users report they can't reach my site via my domain, and it seems Google and Facebook can't either (this never happened before). It's weird, because I can enter my site without problems and so can many other people. But then I tried to make a PHP include and I get this error:

        Warning: include(): php_network_getaddresses: getaddrinfo failed: Name or service not known in

    I was told that it seems my server can't resolve domains. The includes work if I use IPs instead of domains. So it means I have a DNS problem or something? What can I do to fix it? I'm on Linux 2.6.32-431.11.2.el6.x86_64 on x86_64, CentOS Linux 6.5. Thank you.

    EDIT: I have this in my resolv.conf:

        # Generated by NetworkManager
        # No nameservers found; try putting DNS servers into your
        # ifcfg files in /etc/sysconfig/network-scripts like so:
        #
        # DNS1=xxx.xxx.xxx.xxx
        # DNS2=xxx.xxx.xxx.xxx
        # DOMAIN=lab.foo.com bar.foo.com
        nameserver 8.8.8.8
        nameserver 8.8.4.4
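
    A hedged way to narrow this down from the VPS itself: compare resolution through the configured resolvers with a direct query to one of them. If the direct query works but the plain one fails, something is rewriting resolv.conf or outbound port 53 is being filtered; /etc/nsswitch.conf should also list dns for hosts:

        dig example.org                  # uses the nameservers from /etc/resolv.conf
        dig @8.8.8.8 example.org         # asks Google's resolver directly
        grep ^hosts /etc/nsswitch.conf   # expect something like: hosts: files dns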

  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    Hi, I'm serving static pages from a Sinatra application using Nginx. I've implemented Basic Authentication for one page on the site using NginxHttpAuthBasicModule; the authentication succeeds but Nginx doesn't resolve the link. The error log gives:

        2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public/mypage" failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com, request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

        server {
            listen 80;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

        server {
            listen 443;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            ssl on;
            ssl_certificate /etc/nginx/conf/certs/server.crt;
            ssl_certificate_key /etc/nginx/conf/certs/server.key;
            keepalive_timeout 70;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

    Not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something.
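
    One thing that might be worth testing, offered as a hedged guess rather than a confirmed fix: the open() error suggests that requests hitting the bare location block are being served as plain static files instead of being handed to the Sinatra app, so the dynamic /mypage route never runs. Repeating the Passenger directive inside the location would look like this sketch:

        location /mypage {
            auth_basic "Restricted";
            auth_basic_user_file htpasswd;
            passenger_enabled on;    # assumption: the server-level setting is not taking effect inside this location
        }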

  • LDAPS being redirected to 389

    - by Ikkoras
    We're trying to perform an LDAPS bind to a server which blocks port 389 with a firewall, so all traffic must travel over 636. In our test lab we're connecting to a test LDAP server (located on the same machine) which does not have this firewall, so both ports are exposed. Running ldp.exe on the test server, we generate the trace below, which seems to suggest that it is successfully binding over 636. However, if we monitor the traffic with Wireshark, all the traffic is being sent to 389 with no attempt to even contact 636. Other tools will bind only with SSL on 636 or without SSL on 389, which seems to suggest it is behaving correctly, but Wireshark shows 389. On the test server we are using RawCap to capture the local loopback traffic. Any ideas? The ldp.exe trace:

        0x0 = ldap_unbind(ld);
        ld = ldap_sslinit("WIN-GF49504Q77T.test.com", 636, 1);
        Error 0 = ldap_set_option(hLdap, LDAP_OPT_PROTOCOL_VERSION, 3);
        Error 0 = ldap_connect(hLdap, NULL);
        Error 0 = ldap_get_option(hLdap,LDAP_OPT_SSL,(void*)&lv);
        Host supports SSL, SSL cipher strength = 128 bits
        Established connection to WIN-GF49504Q77T.test.com.
        Retrieving base DSA information...
        Getting 1 entries:
        Dn: (RootDSE)

  • nginx root directory not forwarding correctly

    - by user66700
    The server files are stored in /var/www/. Everything was working perfectly; then I've been getting the following errors:

        2011/01/28 17:20:05 [error] 15415#0: *1117703 "/var/www/https:/secure.domain.com/index.html" is not found (2: No such file or directory), client: 119.110.28.211, server: secure.domain.com, request: "HEAD /https://secure.domain.com/ HTTP/1.1", host: "secure.domain.com"

    Here's my config:

        server {
            server_name secure.domain.com;
            listen 443;
            listen [::]:443 default ipv6only=on;
            gzip on;
            gzip_comp_level 1;
            gzip_types text/plain text/html text/css application/x-javascript text/xml text/javascript;
            error_log logs/ssl.error.log;
            gzip_static on;
            gzip_http_version 1.1;
            gzip_proxied any;
            gzip_disable "msie6";
            gzip_vary on;
            ssl on;
            ssl_ciphers RC4:ALL:-LOW:-EXPORT:!ADH:!MD5;
            keepalive_timeout 0;
            ssl_certificate /root/server.pem;
            ssl_certificate_key /root/ssl.key;

            location / {
                root /var/www;
                index index.html index.htm index.php;
            }
        }

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as me, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for sudo rsync local root@remote:, nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.

    EDIT: Using files that belong to root is just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is to copy my configured /root environment into the freshly-installed system. The two systems are actually two VMs under a single host, so it's not a big concern for me to copy root-owned files between them.

    EDIT 2: If I only want to copy my configured /root environment into the freshly-installed system, I can use tar:

        sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C /

    But I do need rsync to update from time to time. Any easy way to make it happen?

    EDIT 3: Formally formulate the question. Alright, it all began with the question: how to rsync files that belong to root between two systems as a normal unprivileged user, without specifying a password, under these conditions:

        1. The root account is locked on both systems, i.e. there are no root passwords. The only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo).
        2. I don't want a completely passwordless sudo, but don't want to be typing passwords all the time either.
        3. The normal unprivileged user has entered their SSH passphrase into the ssh agent.

    Thanks
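
    A hedged sketch of the usual pattern for this: rsync's --rsync-path lets the remote side run rsync under sudo, and a narrow sudoers entry keeps it password-free without making all of sudo passwordless. Paths and the user name here are illustrative, and the sudo -E on the local side assumes the sudoers policy allows preserving SSH_AUTH_SOCK so the agent still answers for the ssh connection:

        # on the remote VM, e.g. /etc/sudoers.d/rsync:
        me ALL=(root) NOPASSWD: /usr/bin/rsync

        # from the local VM:
        sudo -E rsync -av -e ssh --rsync-path="sudo rsync" /root/ me@remote:/root/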

  • Windows Server 2008 Alerting to Low memory

    - by t1nt1n
    I have a file and print server running on Windows 2008 R2, fully patched, in a vSphere environment (ESXi 5.1, fully updated). Every evening between 19:20 and 19:30 our monitoring software reports that the available memory is 1% and performance is dire. There is nothing in the event logs to point to an issue. At this point in the evening I am generally the only user on the system, checking to see why these alerts are going off. Things I have done:

        Checked to see if any backups are running -- none at all.
        Checked scheduled tasks -- none before or during this time period.
        Moved the VM to another host.
        Disabled AV to rule it out as the issue.

    The server does not have any problems during the day with memory when fully loaded with about 50 users. The server did have 4 GB of RAM provisioned, but I have increased this to 5 GB. Running PerfMon at the time (I will save the graphs tonight) shows very little CPU usage, but RAM usage goes up.

  • Enabling `mod_rewrite` apache, permissions issues

    - by rudolph9
    I am attempting to enable mod_rewrite on the Apache2 web server installed with Mac OS X 10.7.4, following these instructions, ultimately using the configuration to host CakePHP applications. I run into permission issues accessing the site via a web browser when I change the directory block associated with the CakePHP site in /etc/apache2/users/username.conf from:

        <Directory "/Users/username/Sites/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride none
            Order allow,deny
            Allow from all
        </Directory>

    to:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews
            AllowOverride none
            Order allow,deny
            Allow from all
        </Directory>

        <Directory "/Users/username/Sites/cakephp_app/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            Allow from all
        </Directory>

    The .htaccess files are the CakePHP 2.2.2 defaults, as follows.

    /Users/username/Sites/cakephp_app/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    /Users/username/Sites/cakephp_app/app/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ webroot/ [L]
            RewriteRule (.*) webroot/$1 [L]
        </IfModule>

    /Users/username/Sites/cakephp_app/app/webroot/.htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ index.php [QSA,L]
        </IfModule>

    When performing a request via a web browser to http://0.0.0.0/~username/cakephp_app/index.php, the content of the response is:

        Not Found
        The requested URL /Users/username/Sites/cakephp_app/app/webroot/ was not found on this server.
        Apache/2.2.21 (Unix) DAV/2 PHP/5.3.10 with Suhosin-Patch Server at 0.0.0.0 Port 80

    Upon requests to http://0.0.0.0/~username/ and http://0.0.0.0/~username/cakephp_app/, the following are added to /var/log/apache2/error_log:

        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/Users, referer: http://0.0.0.0/~username/
        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/favicon.ico

    What is causing the issue? Is there a server program, ideally available via a Homebrew script, that would make hosting CakePHP applications for testing purposes more effective and efficient?
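
    A hedged observation on the "Not Found" response: the rewrite target is being mapped against the default DocumentRoot rather than the per-user Sites directory, which is the classic symptom of mod_rewrite in a userdir/subdirectory setup missing a RewriteBase. A sketch of what the webroot .htaccess might need (the other two .htaccess files would need matching bases one and two levels up; /~username/cakephp_app/ is taken from the URLs in the question):

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /~username/cakephp_app/app/webroot/
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ index.php [QSA,L]
        </IfModule>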

  • Running Windows 7 physical disk virtualized under Linux

    - by CajunLuke
    I have an existing Windows 7 installation that I'd like to virtualize under Linux. Windows boots fine on Disk A, Linux boots fine on Disk B. (Both disks are SATA.) I can mount the Windows disk when in Linux. I've tried VirtualBox and VMWare Player and neither will allow me to boot from the other disk. VirtualBox doesn't seem to have the option to do so. VMWare Player has the option to have an IDE drive exposed to the virtual environment as a SCSI disk. I've tried that, but it throws the error "Cannot connect virtual device ide1:0 because no corresponding device is available on the host." I've verified that it's pointing to the correct hard drive. I'm willing to try other virtualization products, and I'm not averse to spending a little money to get this to work. I've seen this other question, and it's not a duplicate, as I haven't gotten that far yet. I'm also interested in solutions going the other way (Linux on Windows), but that'd be lagniappe. Gory Hardware Details: Lenovo T410, 2.4 GHz Core i5 (has virtualization extensions), 4GiB RAM, 2x 320 GiB SATA HDD, one in optical bay. Fedora 14 2.6.35.10-74.fc14.x86_64, Windows 7 32-bit.
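
    For the VirtualBox route, a hedged sketch of the usual mechanism: a raw-disk VMDK wrapper that points the VM at the physical Windows disk (assuming the Windows disk is /dev/sda and the user has read/write access to it, e.g. via the disk group; booting an installed Windows this way can still trip over driver and activation issues):

        sudo VBoxManage internalcommands createrawvmdk -filename ~/win7-raw.vmdk -rawdisk /dev/sda
        # then create a new VM in VirtualBox and attach win7-raw.vmdk as its first hard disk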

  • Xen HVM Windows 2008 network bridge

    - by JavierMartinz
    I have a problem with a Windows Server 2008 guest (HVM): I can't get a network interface running for it. I also have a Debian guest and it's working fine, but I can't do the same for the Win2k8 guest. When I start that VM, the machine freezes and I can't connect to the host by ssh.

    /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 188.165.B.C
            netmask 255.255.255.0
            network 188.165.B.0
            broadcast 188.165.255.255
            gateway 188.165.B.254

    brctl show:

        bridge name     bridge id               STP enabled     interfaces
        eth0            8000.e840f20acc28       no              peth0

    /etc/xen/xend-config.sxp:

        ...
        (vif-script vif-bridge)
        (network-script 'network-bridge')
        ...

    /etc/xen/win2k8.cfg:

        # Networking #
        vif = [ 'ip=5.39.F.G,mac=yy:yy:yy:yy:yy:yy,type=ioemu,bridge=eth0' ]

    /etc/xen/debian.cfg:

        # Networking #
        vif = [ 'ip=178.33.D.E,mac=xx:xx:xx:xx:xx:xx' ]

    As you can see, in the Debian guest I only have to specify an IP address and a MAC. But if I put that in the Win2k8 guest, the machine does not start. I am using Xen 4.0.
