Search Results

Search found 12836 results on 514 pages for 'host mechanic'.


  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    Hi, I'm serving static pages from a Sinatra application using Nginx. I've implemented Basic Authentication for one page on the site using NginxHttpAuthBasicModule. The authentication succeeds, but Nginx doesn't resolve the link. The error log gives:

        2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public /mypage" failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com, request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

        server {
          listen 80;
          server_name mysite.com;
          root /home/me/live/mysite_home/public;
          passenger_enabled on;
          location /mypage {
            auth_basic "Restricted";
            auth_basic_user_file htpasswd;
          }
        }

        server {
          listen 443;
          server_name mysite.com;
          root /home/me/live/mysite_home/public;
          passenger_enabled on;
          ssl on;
          ssl_certificate /etc/nginx/conf/certs/server.crt;
          ssl_certificate_key /etc/nginx/conf/certs/server.key;
          keepalive_timeout 70;
          location /mypage {
            auth_basic "Restricted";
            auth_basic_user_file htpasswd;
          }
        }

    Not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something.
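    One quick way to narrow this down, as a minimal sketch (the user:pass pair below is a placeholder for the htpasswd credentials): the error shows Nginx looking for a static file under the public root instead of handing /mypage to Passenger, so reproducing the request on the command line and watching the error log confirms which handler is answering. If the static lookup is the culprit, one assumed fix to test is adding passenger_enabled on inside the protected location block so the request goes back to the Sinatra app.

        # reproduce the protected request and watch what nginx tries to open
        curl -i -u user:pass http://mysite.com/mypage
        sudo tail -n 5 /var/log/nginx/error.log
        # after editing the config (e.g. passenger_enabled on inside location /mypage):
        sudo nginx -t && sudo nginx -s reload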

    Read the article

  • LDAPS being redirected to 389

    - by Ikkoras
    We're trying to perform an LDAPS bind to a server which blocks 389 with a firewall, so all traffic must travel over 636. In our test lab we're connecting to a test LDAP (located on the same server) which does not have this firewall, so both ports are exposed. Running ldp.exe on the test server we generate the trace below, which seems to suggest that it is successfully binding over 636. However, if we monitor the traffic with Wireshark, all the traffic is being sent to 389 with no attempt to even contact 636. Other tools will bind only with SSL on 636 or without SSL on 389, which seems to suggest it is behaving correctly, but Wireshark shows 389. On the test server we are using RawCap to capture the local loopback traffic. Any ideas?

        0x0 = ldap_unbind(ld);
        ld = ldap_sslinit("WIN-GF49504Q77T.test.com", 636, 1);
        Error 0 = ldap_set_option(hLdap, LDAP_OPT_PROTOCOL_VERSION, 3);
        Error 0 = ldap_connect(hLdap, NULL);
        Error 0 = ldap_get_option(hLdap,LDAP_OPT_SSL,(void*)&lv);
        Host supports SSL, SSL cipher strength = 128 bits
        Established connection to WIN-GF49504Q77T.test.com.
        Retrieving base DSA information...
        Getting 1 entries:
        Dn: (RootDSE)

    Read the article

  • Enabling `mod_rewrite` apache, permissions issues

    - by rudolph9
    In attempting to enable mod_rewrite on the Apache2 web server installed with Mac OS X 10.7.4, following these instructions, ultimately using the configuration to host CakePHP applications, I run into permissions issues accessing the site via a web browser when I change the directory block associated with the CakePHP site in /etc/apache2/users/username.conf from:

        <Directory "/Users/username/Sites/">
          Options Indexes FollowSymLinks MultiViews
          AllowOverride none
          Order allow,deny
          Allow from all
        </Directory>

    to:

        <Directory "/Users/username/Sites/">
          Options Indexes MultiViews
          AllowOverride none
          Order allow,deny
          Allow from all
        </Directory>
        <Directory "/Users/username/Sites/cakephp_app/">
          Options Indexes FollowSymLinks MultiViews
          AllowOverride all
          Order allow,deny
          Allow from all
        </Directory>

    The .htaccess files are the CakePHP 2.2.2 defaults, as follows:

        /Users/username/Sites/cakephp_app/.htaccess
        <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteRule ^$ app/webroot/ [L]
          RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

        /Users/username/Sites/cakephp_app/app/.htaccess
        <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteRule ^$ webroot/ [L]
          RewriteRule (.*) webroot/$1 [L]
        </IfModule>

        /Users/username/Sites/cakephp_app/app/webroot/.htaccess
        <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteRule ^(.*)$ index.php [QSA,L]
        </IfModule>

    When performing the request via a web browser to http://0.0.0.0/~username/cakephp_app/index.php, the content of the response is:

        Not Found
        The requested URL /Users/username/Sites/cakephp_app/app/webroot/ was not found on this server.
        Apache/2.2.21 (Unix) DAV/2 PHP/5.3.10 with Suhosin-Patch Server at 0.0.0.0 Port 80

    Upon a request to http://0.0.0.0/~username/ and http://0.0.0.0/~username/cakephp_app/, the following are added to /var/log/apache2/error_log:

        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/Users, referer: http://0.0.0.0/~username/
        [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/favicon.ico

    What is causing the issue? Is there a server program, ideally available via a Homebrew script, which would make hosting CakePHP applications for testing purposes more effective and efficient?

    Read the article

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as me, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for "sudo rsync local root@remote:", nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.

    EDIT: Using files that belong to root is just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is to copy my configured /root environment into a freshly-installed system. The two systems are actually two VMs under a single host, so it's not a big concern for me to copy root-owned files between them.

    EDIT 2: If I only want to copy my configured /root environment into the freshly-installed system, I can use tar:

        sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C /

    But I do need rsync to update from time to time. Any easy way to make it happen?

    EDIT 3: To formally formulate the question: it all began with the question of how to rsync files that belong to root between two systems as a normal unprivileged user, without specifying a password, under these conditions:

    1. The root account is locked on both systems, i.e. there are no root passwords. The only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo).
    2. I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either.
    3. The normal unprivileged user has entered their ssh passphrase into the ssh agent.

    Thanks
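    A minimal sketch of one common approach, assuming you are willing to add a narrowly scoped sudoers rule (the username "me" and hostname "remote" are placeholders): let ssh keep authenticating as the normal user, and elevate only the rsync binary on each end.

        # on the remote system, added via visudo (shown as a comment, not piped in):
        #   me ALL=(root) NOPASSWD: /usr/bin/rsync
        #
        # then from the local system, run rsync as root locally and ask the remote
        # side to wrap its rsync in sudo; ssh still uses the normal user's key/agent
        sudo rsync -av --rsync-path="sudo rsync" /root/ me@remote:/root/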

    Read the article

  • Windows Server 2008 Alerting to Low memory

    - by t1nt1n
    I have a file and print server running on Windows 2008 R2, fully patched, in a vSphere environment (ESXi 5.1, fully updated). Every evening between 19:20 and 19:30 our monitoring software reports that available memory is at 1% and performance is dire. There is nothing in the event logs to point to an issue. At this point in the evening I am generally the only user on the system, checking to see why these alerts are going off. Things I have done:

    - Checked to see if any backups are running – none at all.
    - Checked scheduled tasks – none before or during this time period.
    - Moved the VM to another host.
    - Disabled AV to rule that out as the issue.

    The server does not have any problems during the day with memory when fully loaded with about 50 users. The server had 4 GB of RAM provisioned, but I have increased this to 5 GB. Running PerfMon at the time (I will save the graphs tonight), there is very little CPU usage but RAM usage goes up.

    Read the article

  • What ports, besides 80, need to be available to send (only send) email using phpmailer to gmail over SSL?

    - by Wobblefoot
    Using phpmailer I keep getting a 110 timeout and "Unable to connect to host" when sending email from my web server. The authentication details are right and they work on another server I have (login, password, ports, etc., and the Gmail account is set up for SSL connections on 465), but it's failing on my new server.

    FIREWALL: I allow related/established, port 80 and a port for SSH on INPUT, then this on OUTPUT:

        7906  474K DROP    tcp -- any any anywhere              anywhere              tcp dpt:smtp
           0     0 ACCEPT  tcp -- any any localhost.localdomain yw-in-f109.1e100.net  tcp dpt:submission
           0     0 ACCEPT  tcp -- any any localhost.localdomain gx-in-f109.1e100.net  tcp dpt:ssmtp
           0     0 DROP    tcp -- any any anywhere              anywhere              tcp dpt:submission
           9   540 DROP    tcp -- any any anywhere              anywhere              tcp dpt:ssmtp

    This output chain works on my other server, and disabling it doesn't get mail delivered either.

    WEB SERVER: Varnish (80), Nginx (8088), Drupal 7, PHP5-FPM, APC, MySQL. All works beautifully, except for outgoing email. What else could it be? I understand phpmailer does NOT require a local MTA or procmail (this is sort of the point - I don't want the security or admin overhead of a full-blown MTA on my web server). Am I wrong? Do I need an MTA as well? What local ports and programs are used to authenticate over SSL and route mail using phpmailer? Any ideas at all greatly appreciated - wasted a day on this nonsense already!
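    A couple of quick checks, as a sketch (one plausible cause is that smtp.gmail.com resolves to many rotating addresses, so OUTPUT rules pinned to two specific 1e100.net hosts will only match some of the time):

        # can the web server reach Gmail's SSL ports at all?
        openssl s_client -connect smtp.gmail.com:465 -quiet            # implicit SSL (ssmtp/465)
        openssl s_client -connect smtp.gmail.com:587 -starttls smtp    # STARTTLS (submission/587)

        # if those hang, insert ACCEPT rules ahead of the DROPs while testing
        iptables -I OUTPUT -p tcp --dport 465 -j ACCEPT
        iptables -I OUTPUT -p tcp --dport 587 -j ACCEPT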

    Read the article

  • Xen HVM Windows 2008 network bridge

    - by JavierMartinz
    I have a problem with a Windows Server 2008 guest (HVM): I can't get a network interface running for it. I also have a Debian guest and it's working OK, but I can't do the same for the Win2k8 guest. When I start the VM, the machine freezes and I can't connect to the host by ssh.

        /etc/network/interfaces
        # The loopback network interface
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
          address 188.165.B.C
          netmask 255.255.255.0
          network 188.165.B.0
          broadcast 188.165.255.255
          gateway 188.165.B.254

        brctl show
        bridge name     bridge id           STP enabled     interfaces
        eth0            8000.e840f20acc28   no              peth0

        /etc/xen/xend-config.sxp
        ...
        (vif-script vif-bridge)
        (network-script 'network-bridge')
        ...

        /etc/xen/win2k8.cfg
        # Networking #
        vif = [ 'ip=5.39.F.G,mac=yy:yy:yy:yy:yy:yy,type=ioemu,bridge=eth0' ]

        /etc/xen/debian.cfg
        # Networking #
        vif = [ 'ip=178.33.D.E,mac=xx:xx:xx:xx:xx:xx' ]

    As you can see, in the Debian guest I only have to specify an IP address and a MAC, but if I put that in the Win2k8 guest, the machine does not start. I am using Xen 4.0.
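    A few read-only checks that may help localize it, as a sketch (domain and bridge names taken from the configs above; xm is the Xen 4.0 toolstack):

        brctl show              # the bridge "eth0" should list peth0 plus a vif/tap per running guest
        xm list                 # is the win2k8 domain actually running, or stuck?
        xm network-list win2k8  # which vif (if any) got attached to the HVM guest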

    Read the article

  • Running Windows 7 physical disk virtualized under Linux

    - by CajunLuke
    I have an existing Windows 7 installation that I'd like to virtualize under Linux. Windows boots fine on Disk A, Linux boots fine on Disk B. (Both disks are SATA.) I can mount the Windows disk when in Linux. I've tried VirtualBox and VMWare Player and neither will allow me to boot from the other disk. VirtualBox doesn't seem to have the option to do so. VMWare Player has the option to have an IDE drive exposed to the virtual environment as a SCSI disk. I've tried that, but it throws the error "Cannot connect virtual device ide1:0 because no corresponding device is available on the host." I've verified that it's pointing to the correct hard drive. I'm willing to try other virtualization products, and I'm not averse to spending a little money to get this to work. I've seen this other question, and it's not a duplicate, as I haven't gotten that far yet. I'm also interested in solutions going the other way (Linux on Windows), but that'd be lagniappe. Gory Hardware Details: Lenovo T410, 2.4 GHz Core i5 (has virtualization extensions), 4GiB RAM, 2x 320 GiB SATA HDD, one in optical bay. Fedora 14 2.6.35.10-74.fc14.x86_64, Windows 7 32-bit.
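    If you want to keep trying VirtualBox, a minimal sketch of its raw-disk route (the device name /dev/sdX is a placeholder for the Windows disk, and the user running VirtualBox needs read/write access to it):

        # wrap the physical Windows disk in a raw-access VMDK descriptor,
        # then attach win7-raw.vmdk to a new VM as its boot disk
        VBoxManage internalcommands createrawvmdk \
            -filename ~/win7-raw.vmdk -rawdisk /dev/sdX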

    Read the article

  • Redirect without changing URL

    - by Coobadivin
    Here's the setup. We have a hardware load balancer with an http virtual cluster. Let's call this virtual cluster example1.com. This virtual cluster load balances between two squid reverse proxies which are also on the same physical servers as the web servers. Squid listens on 80 and points to itself as the cache_peer web server which listens on 81. We also have a standalone web server which we will call example2.com. What we are trying to do is create a subdirectory on example1.com called example1.com/example2. This will point to example2.com, but we want our users to stay at example1.com/example2 in their browser. So, it's like a redirect without actually being a redirect. How the hell do I go about doing this? Is this even possible? I'm looking at squid docs in the meantime. example1.com is running a proprietary web server - not Apache :( We can't host example2.com's content in example1.com's file system. These are two very different platforms.
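    On the Squid side, a minimal sketch of what the routing half could look like (the directives are standard Squid, but everything else is a placeholder; note this only routes the requests, it does not strip the /example2 path prefix, so example2.com would have to serve under that path or a url_rewrite_program would be needed):

        cat >> /etc/squid/squid.conf <<'EOF'
        cache_peer example2.com parent 80 0 no-query originserver name=ex2
        acl ex2_path urlpath_regex ^/example2
        cache_peer_access ex2 allow ex2_path
        cache_peer_access ex2 deny all
        EOF
        squid -k parse && squid -k reconfigure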

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive). Adding a new customer is currently quite a bit of work:

    1. Create a user that is allowed to ssh
    2. Create a new MySQL database and user
    3. Create a virtual host for the application
    4. Log in with the new user, do a git checkout of the application (in the right location)
    5. Create tables in the new database, and add some init data
    6. Add some cron jobs
    7. Create a first user that can log in
    8. Add this new instance to capistrano

    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally this should be usable by a sales person (so something web-based). I could write a (bash) script that does most of these tasks, along the lines of the sketch below, and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means that you need a list of all existing customers/instances.
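    A minimal sketch of what such a script could look like, assuming a Debian/Ubuntu-style Apache layout; every name (customer, repo URL, template vhost, cron entry) is a placeholder:

        #!/bin/bash
        # provision.sh <customer> : create user, database, vhost, checkout, cron
        set -e
        CUSTOMER=$1
        DBPASS=$(openssl rand -hex 12)

        useradd -m -s /bin/bash "$CUSTOMER"
        mysql -e "CREATE DATABASE \`$CUSTOMER\`;
                  CREATE USER '$CUSTOMER'@'localhost' IDENTIFIED BY '$DBPASS';
                  GRANT ALL ON \`$CUSTOMER\`.* TO '$CUSTOMER'@'localhost';"
        sudo -u "$CUSTOMER" git clone git@example.com:app.git "/home/$CUSTOMER/app"
        sed "s/__CUSTOMER__/$CUSTOMER/g" /etc/apache2/sites-available/template.conf \
            > "/etc/apache2/sites-available/$CUSTOMER.conf"
        a2ensite "$CUSTOMER.conf" && service apache2 reload
        echo "0 2 * * * /home/$CUSTOMER/app/cron.sh" | crontab -u "$CUSTOMER" -
        echo "provisioned $CUSTOMER (db password: $DBPASS)"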

    Read the article

  • Virtual folder for multiple sites

    - by Cups
    I am creating a very simple flat-file CMS for small (multilingual) websites. The little file writing that goes on is handled by 4 scripts in a publicly available folder named /edit in each site. Given that I have 2 websites now working on that simple system:

        websiteA/index.php (etc)
        websiteA/edit/
        websiteB/index.php (etc)
        websiteB/edit/

    What is the best way of making that /edit folder "virtual", so that these and each subsequent website owner can log in to their own view of /edit and yet the code only exists in one place? I do not want the website owners to have to log in from a central website, but from their own /edit directory. I have already read about different solutions, seemingly using the <Directory> directive in my httpd.conf declaration for each website, and also using straight mod_rewrite, but I admit to now becoming confused about some of the terminology. Each website has its own config file which contains path settings and so on. What in your opinion is the best way to handle this?

    EDIT: In light of a reply, I suppose that given a virtual host directive such as this:

        <VirtualHost 00.00.00.00:80>
          DocumentRoot /var/www/html/websitea.com
          ServerName www.websitea.com
          ServerAlias websitea.com
          DirectoryIndex index.htm index.php
          CustomLog logs/websitea combined
        </VirtualHost>

    Is it possible to create an alias inside that directive for the folder websitea.com/edit?
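    If the shared scripts live in one directory on the server, a minimal sketch using mod_alias inside each site's vhost (the shared path is a placeholder); each owner still logs in at their own domain's /edit while the code exists once:

        # sketch: added inside each site's <VirtualHost> block
        #   Alias /edit /var/www/shared/edit
        #   <Directory /var/www/shared/edit>
        #     Order allow,deny
        #     Allow from all
        #   </Directory>
        apachectl configtest && apachectl graceful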

    Read the article

  • Linux: Three default gateways?

    - by Daniel
    My server has three default gateways; how can that be? Shouldn't there be one default gw? I have three NICs, each attached to a separate subnet:

        server1:~# route
        Kernel IP routing table
        Destination   Gateway       Genmask          Flags Metric Ref  Use Iface
        10.5.0.0      *             255.255.255.224  U     0      0      0 eth3
        localnet      *             255.255.255.224  U     0      0      0 eth0
        192.168.8.0   *             255.255.255.192  U     0      0      0 eth1
        default       10.5.0.1      0.0.0.0          UG    0      0      0 eth3
        default       192.168.8.1   0.0.0.0          UG    0      0      0 eth1
        default       10.1.0.1      0.0.0.0          UG    0      0      0 eth0

    Sometimes I can't ping a host on the Internet, sometimes I can. What I want is traffic to the Internet (0.0.0.0) routed through a specific NIC. Can I just add a route for 0.0.0.0 and a default gw on one of the eth0-3 interfaces? Will it break my connection? I'm using Debian; here is my /etc/network/interfaces:

        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        allow-hotplug eth0
        iface eth0 inet static
          address 10.1.0.4
          netmask 255.255.255.224
          network 10.1.0.0
          broadcast 10.1.0.31
          gateway 10.1.0.1

        allow-hotplug eth1
        iface eth1 inet static
          address 192.168.8.4
          netmask 255.255.255.192
          network 192.168.8.0
          broadcast 192.168.8.63
          gateway 192.168.8.1

        allow-hotplug eth3
        iface eth3 inet static
          address 10.5.0.4
          netmask 255.255.255.224
          network 10.5.0.0
          broadcast 10.5.0.31
          gateway 10.5.0.1
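    If the intent is for all Internet-bound traffic to leave via eth0, a sketch of the immediate fix plus the persistent one (which gateway is the real uplink is an assumption to adjust):

        # drop the two unwanted default routes; only 10.1.0.1 on eth0 remains
        ip route del default via 10.5.0.1 dev eth3
        ip route del default via 192.168.8.1 dev eth1
        ip route show | grep '^default'

        # to make it persistent, delete the "gateway" lines from the eth1 and eth3
        # stanzas in /etc/network/interfaces so only eth0 defines one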

    Read the article

  • Folder Sharing NTFS permissions with Share Permission

    - by Muhammad Adly
    I have a problem on my domain. The history starts from when I had a server with Windows 2008 R2 installed, with the following roles on it: AD, DNS, DHCP, File. A month ago I decided to install a new Windows 2008 R2 server to take AD, DNS and DHCP and leave the file server role on the old one. I did the following, exactly:

    1. robocopy all my data to an external HDD
    2. Install a new server with 2008 R2
    3. Transfer all 5 roles to move the domain to the new server (MainDC)
    4. Issue: NETLOGON and SYSVOL were not transferred, but I decided to reinitialize them and now they are operating (MainDC)
    5. Re-create and re-configure new GPOs and link them to my OUs
    6. Reinstall the old server's operating system with a fresh installation of Windows 2008 R2 (FileServer)
    7. Join my domain with my domain credentials

    The issue: when I share a folder on \\fileserver, the permissions that I set in the sharing permissions are applied to the main shared folder and subfolders, but the security (NTFS) settings are not applied. For example, say I'm sharing \\fileserver\MainFolder with a sharing permission that lets Authenticated Users read, so everyone can read this main shared folder. If I then set a security permission on \\fileserver\MainFolder\User1 so that User1 can Read/Write/Modify, User1 cannot do any of that when accessing it from the network share. I tried a lot of steps from topics online: taking ownership of the folder, removing inheritance from the parent folder, applying changes to child objects. I also tried building a new folder structure, but got the same issue; I tried another host PC and also got the same issue.

    Read the article

  • Please take a look at my server's RAM usage

    - by user66779
    Hi, I am a noob with servers. I have a CentOS 5.5 VPS with 512 MB of RAM. My goal is to have it host just one Magento store. I've installed Magento on the server without any control panel, by just installing LAMP myself and whatever PHP extensions were necessary to get Magento to install. As soon as I visit my Magento store, suddenly the RAM on the VPS is almost completely used, with only about 100 MB left. Please see this screenshot of htop, taken after just my own visit to the website: http://img714.imageshack.us/img714/1944/screenouv.png As you can see, there's only around 100 MB left. Is that normal? I'm wondering if I might have done something stupid with the server that makes it very resource hungry. I installed Apache from the CentOS base repo, PHP 5.3 from the IUS repository and MySQL 5.1 also from the IUS repo. I haven't changed any of the default config files for any of these, except to make memory_minimum 256 in php.ini. Is there anything I can do to free more RAM? I'm clueless, but I see each Apache daemon is using 8% of available RAM, and AFAIK each visitor needs one Apache daemon, so I would run out of RAM with just a handful of visitors. Thanks for your advice.
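    A quick way to see where the memory goes, as a sketch (httpd is the Apache process name on CentOS): measure the resident size of the Apache children, since the number of children times the per-child RSS has to leave room for MySQL and PHP inside 512 MB.

        ps -o rss=,comm= -C httpd | awk '{sum+=$1; n++} END {printf "%d httpd procs, %.1f MB avg, %.1f MB total\n", n, sum/n/1024, sum/1024}'
        free -m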

    Read the article

  • Friendly Intranet Addresses

    - by Jmyster
    Relatively new to IIS. I'm attempting to set up multiple sites on my intranet on one server. The server already has SharePoint installed on it with a binding of *:80, so when I type //ServerName I get the home page of SharePoint. I get how that works. I set up a new site in IIS and set the binding to *:30015. On a remote machine, if I type //ServerName:30015 in a web browser, I get the new site. Awesome, working as intended.

    My questions: Can I set it up (and how) so that I can type //DivisionAppName or //Division.AppName and have it resolve to //ServerName:30015? Is this something I have to register with my company's DNS server? I hope not; getting my corporate IT to assist is a nightmare.

    What I tried: I added bindings with the Host Name filled in with both DivisionAppName and Division.AppName and port 30015, but that doesn't seem to work.

    Read the article

  • Where can I legally obtain the 64bit version of Windows 8?

    - by Harsha K
    No, I am not looking to pirate. I bought a key through the Upgrade Assistant (for just $15 due to the upgrade offer), but it downloaded an ISO file that was between 2.3 and 2.5 GB. That doesn't make sense to me, because the evaluation version of Windows 8 x64 is closer to 3.4 GB in size. I assumed the Upgrade Assistant would be intelligent enough to realize that it is being run on a Windows 7 x64 machine and, by extension, download the x64 image. Previously, I was able to legally download the ISOs (sans the keys, of course) from the Digital River host; I do not see an option to do that now. I'm not interested in risking downloading a tampered ISO; I want to do it through Microsoft channels, but I just don't see how. As you may imagine, search terms such as "Windows 8 official download link" result in a plethora of obviously spyware-infested piracy sites. If there's any non-exposing way for me to prove that I have legally purchased Windows and that I'm genuinely looking for this answer, please let me know. For reference, what I am looking for is similar to the answer given in this question for Windows 7: Where do I download Windows 7 (legally from Microsoft)?

    Read the article

  • SMTP Server setting on Windows 2008 R2

    - by user223298
    I am very, very new to this and just trying to configure an SMTP virtual server. I have followed a few threads to get it all running, but the mails are not being delivered. What I have done so far:

    1. Installed the SMTP server.
    2. SMTP server properties:
       General tab - IP address is set to 'All Unassigned'.
       Access tab - Authentication is anonymous access. Everything else is left at the default settings.
       Delivery tab - Outbound security is anonymous access. In the Advanced section, I entered the domain name in the FQDN field and localhost in the Smart host field.
    3. Created an inbound firewall rule for the SMTP service to allow connections to port 25.

    When I try to telnet, everything works up until the point the mail has to be sent. Now, the sender's domain is different to the receiver's domain; I'm not sure if settings have to be changed to allow that. I had set the relay restrictions on the SMTP server, but because I couldn't send the mails, I thought I might as well make it work without the relay first. The error I see while sending the mail is "451 Timeout waiting for client input". I used to get some other error before, when I had relay restrictions on. Can anyone please point me in the right direction? Please let me know if you need more information. Thanks.
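    A manual session against the virtual server often shows exactly where it stops, as a sketch (addresses are placeholders; the lines after the telnet command are typed by hand):

        telnet localhost 25
        # EHLO test.local
        # MAIL FROM:<someone@yourdomain.tld>
        # RCPT TO:<recipient@otherdomain.tld>   <- a "550 unable to relay" reply here
        #                                          points at relay restrictions
        # DATA
        # Subject: test
        #
        # body text
        # .
        # QUIT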

    Read the article

  • Virtual machine shows no network adapter

    - by Logman
    I had an old Lotus Domino (R5) server that I just virtualized. It ran Windows 2000 Server. I had to use VMware Converter v3.x to create the VM because it was the only one I could find that could actually convert a Win2k machine that had no service packs. The server was put out to pasture a couple of months ago, so it isn't being used except to store the old email for archiving. It took a bit of work to get it onto the Windows 2008 R2 server's Hyper-V, but I got it there. The problem now is that the network adapter doesn't show up. I could not install the guest additions because they needed SP4+ on Win2k, so I installed SP4 onto the VM guest. Everything seems fine except the network adapter still isn't showing up in Device Manager. Nothing. Now, this server had an external IP, and I did not want it to be put onto the internal virtual network. I am going to use a dedicated adapter on the host (the Hyper-V server) if that matters... but this shouldn't matter if the guest's network adapter doesn't show at all. Thoughts?

    Read the article

  • Some websites hosted on my server can't be reached from some places.

    - by valter
    Hello. I have a problem that is causing me headaches to solve. I have a webserver at 100tb.com, running CentOS. I also have these nameservers set up:

        67.213.220.170 ns1.maisturismo.net
        67.213.220.171 ns2.maisturismo.net

    My domain is at GoDaddy. I added two host summaries pointing to the nameserver IPs (NS1 to the first IP, and NS2 to the second), then I changed the nameservers of maisturismo.net to ns1.maisturismo.net and ns2.maisturismo.net: http://img20.imageshack.us/i/dnswm.jpg/ Below is the image showing my DNS records for maisturismo.net: http://img137.imageshack.us/i/nameservers.jpg/ It's strange... everything looks fine, but the website is not reachable from the zend2.com proxy, and from some other places, like a friend's house that doesn't use the same web provider that I use. I have another nameserver set up on my server that has the same problem: all websites that use it can't be reached from zend2.com and from my friend's house, except a ".com.br" (Brazilian domain). Do you have some idea of what is causing this? I really can't imagine what the problem is... Thanks.
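    A few checks from the outside, as a sketch, since "works from here, not from there" usually comes down to which resolvers can actually reach ns1/ns2:

        dig NS maisturismo.net @8.8.8.8 +short            # what the delegation looks like publicly
        dig maisturismo.net @ns1.maisturismo.net +short   # does your own nameserver answer directly?
        dig +trace maisturismo.net                        # follow the delegation from the root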

    Read the article

  • Mail server DNS fails to resolve for Mac clients

    - by Concordus Applications
    We have two internal DNS servers. One is located on a Linux server box and the other is the router's DNS management. We set the Linux box as primary DNS via DHCP and the router as secondary. We have a few Mac clients that access our internal mail server (hostname "mail" internally). When using IMAP or SMTP against the mail server internally, the Mac boxes will sometimes fail to locate the server. If I use nslookup I can see that "mail" points to the correct IP address and is resolved via the correct DNS server, but if I ping "mail" it fails.

        ~ (bash)$ nslookup mail
        Server:   254.254.254.206
        Address:  254.254.254.206#53

        Name:     mail.example.com
        Address:  254.254.254.205

    Note: I replaced our actual internal IP addresses with 254.254.254.*. If I wait a few minutes (3-5 minutes), somehow it resolves itself and sends successfully. This happens multiple times a day. The /etc/hosts file on the Mac boxes is the default config:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    Is there something about Mac clients I should know to prevent this failed DNS resolution? The client boxes are OS X 10.7.4, 8 GB RAM, i5 MacBooks. The server is Ubuntu 12.04 Server.
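    On one of the affected Macs, a sketch of how to compare the system resolver (which ping and the mail client use) with nslookup (which does its own lookups); a missing search domain for the short name "mail" is a common culprit, though that is an assumption here:

        scutil --dns | head -n 20          # active resolvers and search domains
        dscacheutil -q host -a name mail   # query through the system resolver
        sudo killall -HUP mDNSResponder    # flush the resolver cache on 10.7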

    Read the article

  • How do I tunnel an HTTPS proxy through a virtual machine (VMWare)

    - by Kyle
    I have a personal setup at home using VMWare Workstation. I also have a set of Virtual Private Machines that run Squid, and therefore provide me HTTPS proxy tunnels. Using Proxifier, I can tunnel all traffic for given applications through these tunnels. However, I also have a few virtual machines for dev/staging/experimentation/etc. I generally just use NAT to provide Internet access to the machines, and if I need to use these proxies, I can just setup Proxifier (or a Linux equivalent) to pipe the traffic through them. No problem. But... I got to thinking: Wouldn't it be great if I could assign these proxy tunnels to a virtual machine, so that when I start up the VM, it has instant-on access through the tunnel and not my local connection? (EDIT: Of course, it would USE my local connection, but it would tunnel traffic through the proxy.) To be more clear: I want a solution that binds the proxy to a VM, so that when I start the VM, I don't have to use a proxy client to connect to the tunnel - I am already piping all traffic from that VM through that proxy. I did a bit of searching, and the closest thing I could find was this: How to route public static IP to a virtual machine on a vmware ESXi host? Which wasn't all that applicable. The proxies are protected by user/pass but do not filter by IP. Again, they are HTTPS proxies setup through Squid. Any ideas on how to make this happen? Thanks a ton.

    Read the article

  • CentOS and OpenSSH [on hold]

    - by Stephen
    I've recently installed CentOS 6 on an old Dell PC. I'm trying to set up OpenSSH at the moment. I've been following some tutorials on YouTube (http://www.youtube.com/watch?v=QKafb0koJEg); while they have been very helpful, I'm at the point where I need to ask some questions. My goal here is to be able to access the server from my work computer and from my personal laptop (which will be on the same home network as the server). I've installed OpenSSH with no issues. The first thing I was advised to do was port forwarding, so in the sshd_config file I've changed Port 22 to Port xxxx (where xxxx is obviously a four-digit value) and restarted the sshd service. I've also configured my router to forward port 22 to xxxx. Is there anything else I need to do? I've generated the keys on my laptop, and I'm trying to copy them to the server as follows:

        scp id_rsa.pub xxxxxxxx@localhost:.ssh/authorized_keys

    but this command fails with the following error message:

        ssh: connect to host localhost port 22: Connection refused
        lost connection

    Any help appreciated. Regards...
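    Since sshd now listens on the custom port, every client-side command needs to be told about it; a sketch with 2222 standing in for the real four-digit port and placeholder names throughout:

        ssh -p 2222 username@server-ip                  # basic connectivity test
        scp -P 2222 ~/.ssh/id_rsa.pub username@server-ip:
        ssh -p 2222 username@server-ip \
            'mkdir -p ~/.ssh && cat ~/id_rsa.pub >> ~/.ssh/authorized_keys && rm ~/id_rsa.pub'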

    Read the article

  • allow public access to subfolder of protected folder on apache

    - by UnnamedMook
    I have password-protected the root folder of my website while I do maintenance, but I want to display a custom 401 error page to let people know the site is under construction. Unfortunately, my web host doesn't allow me write access to anything outside the root folder of my website, so this custom error page must be stored in the root folder or one of its subfolders. Instead of my custom error page I get the Apache default error page, and it also says "Additionally, a 401 Authorization Required error was encountered while trying to use an ErrorDocument to handle the request." I searched for ways to make a subfolder of a protected directory public, and all I could find was to use the "Satisfy any" directive, but this doesn't work for me. It doesn't work on a file-only basis either, as with the .htaccess file below.

        #Authorization Restriction
        AuthType Basic
        AuthName "Access to root"
        AuthUserFile *********************************
        Require user ***********
        Order Allow,Deny
        Satisfy any

        #Error Documents
        ErrorDocument 401 Error-401.html

        #Allow access to error documents
        <Files Error-*,html>
          Order Deny,Allow
          Allow from all
          Satisfy any
        </Files>

    I can only use .htaccess files; I don't have access to httpd.conf.

    Read the article

  • My Ubuntu 10.04 server kills all WAN bandwidth when it's attached to my LAN. Where do you begin troubleshooting?

    - by rrc7cz
    First I should say that my Linux knowledge is minimal; just enough to set up some servers (Apache, Tomcat, Couch, etc). I built a Mini-ITX server to host some simple sites, act as an SSH tunnel while I'm away, and act as a torrent server. It was not properly secured for a long time (iptables was empty, all ports open, no firewall), though my router did not have much port forwarding set up beyond HTTP, FTP, and SSH. A week or two ago my bandwidth at home dropped from around 27 Mbps to 2 Mbps and my upload went from 7 Mbps to 0.06 Mbps. When I unplug the server from the LAN, my bandwidth shoots back up. I put up a restrictive iptables ruleset, removed most of the port forwarding, and checked my router logs to see if there were any open connections from the server (malware?), but there were none. What would you do? What are the first things you'd check? I can of course reinstall everything from scratch, but I'd like to find the root cause.
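    First things to look at while the slowdown is actually happening, as a sketch (assumes Ubuntu's default repositories and that eth0 is the LAN-facing interface):

        sudo apt-get install -y iftop nethogs
        sudo iftop -i eth0                      # live per-connection bandwidth
        sudo nethogs eth0                       # the same, grouped by process
        sudo netstat -tunp | grep ESTABLISHED   # which remote hosts the box is talking to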

    Read the article

  • Set document root for external subdomain (A Record) via htaccess

    - by 1nsane
    I have a managed server (unable to control Apache settings) with the default document root of /var/www. I have a web app running in /var/www/subdomains/app/webroot. I have a dedicated domain managed by the host with the aforementioned webroot, which works perfectly. I would like to allow externally provisioned domains to point to the server/web app via an A record. If I access the site via IP, it takes me to the index located in /var/www. I would like to configure the .htaccess in my /var/www directory to rewrite requests from the external subdomain to the /var/www/subdomains/app/webroot directory. I've done so using the following rules:

        RewriteCond %{HTTP_HOST} external\.domain\.com$ [NC]
        RewriteRule ^(.*)$ /var/www/subdomains/app/webroot/index.php?url=$1 [L,QSA]

    When accessing external.domain.com, the app loads properly, but the paths to things like CSS files, images, etc. are prefixed with "/subdomains/app/", causing broken links. I've tried changing the RewriteBase (both in /var/www and /var/www/subdomains/app/webroot), as I believe that's what it's designed for, but no luck. Any ideas? FYI, the app is built on CakePHP. Thanks

    Read the article
