Search Results

Search found 6715 results on 269 pages for 'preg match'.

Page 234/269

  • HTTP Content-type header for cached files

    - by Brian
    Hello, Using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-type is only set correctly the first time I load it - subsequent refreshes are missing Content-type altogether and it's creating some problems for me. Specifically, gzip is not compressing these files. I can get around this by appending a random query string value to the end of each filename, eg. http://www.site.com/script.js?12345 However, I don't want to have to do that, since caching is good and all I want is for the Content-type to be present. I've tried using a RewriteRule to force the type but still didn't solve the problem. Any ideas? Thanks, Brian More Details: HTTP headers WITHOUT random query string value: http://localhost/script.js GET /script.js HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://localhost/ Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5; If-Modified-Since: Thu, 29 Apr 2010 15:49:56 GMT If-None-Match: "3440e9-119ed-485621404f100" Cache-Control: max-age=0 HTTP/1.1 304 Not Modified Date: Thu, 29 Apr 2010 20:19:44 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Connection: Keep-Alive Keep-Alive: timeout=5, max=100 Etag: "3440e9-119ed-485621404f100" Vary: Accept-Encoding X-Pad: avoid browser bug HTTP headers WITH random query string value: http://localhost/script.js?c947344de8278053f6edbb4365550b25 GET /script.js?c947344de8278053f6edbb4365550b25 HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://localhost/ Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5; HTTP/1.1 200 OK Date: Thu, 29 Apr 2010 20:14:40 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Last-Modified: Thu, 29 Apr 2010 15:49:56 GMT Etag: "3440e9-119ed-485621404f100" Accept-Ranges: bytes Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 24605 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: application/javascript
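
    A note on the two traces above: the refresh without the query string is answered with a 304 Not Modified, and a 304 carries no body, so Apache leaving out Content-Type and Content-Encoding there may simply be expected behaviour rather than a mod_deflate problem; gzip only ever applies to full 200 responses, and the browser keeps using its cached, already-decompressed copy. A hedged way to confirm that the 200 path still behaves (host, path and ETag below are the ones from the traces):

      # unconditional request: expect 200 with Content-Type and Content-Encoding: gzip
      curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip,deflate' http://localhost/script.js
      # conditional request (what the browser sends on refresh): expect 304 with neither header
      curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip,deflate' \
           -H 'If-None-Match: "3440e9-119ed-485621404f100"' http://localhost/script.js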

    Read the article

  • Passwordless ssh fails when logging in using username@hostname

    - by Aczire
    I was trying to setup Hadoop and was stumbled on passwordless ssh to localhost. I am getting a password prompt when trying to connect using ssh username@hostname format. But there is no problem connecting to the machine like ssh localhost or ssh hostname.com. Tried ssh-copy-id user@hostname but it did not work. Using CentOS 6.3 as normal user, I neither have root access or am a sudoer so editing any files like sshd_config is not possible (not even cat the sshd_config file contents). I hope the user login is possible since I can do login without password to localhost, right? Please advise, Here is the ssh debug output. [[email protected] ~]$ ssh -v [email protected] OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to hostname.com [::1] port 22. debug1: Connection established. debug1: identity file /home/user/.ssh/identity type -1 debug1: identity file /home/user/.ssh/id_rsa type -1 debug1: identity file /home/user/.ssh/id_dsa type 2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'hostname.com' is known and matches the RSA host key. debug1: Found key in /home/user/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Next authentication method: gssapi-keyex debug1: No valid Key exchange context debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information debug1: Unspecified GSS failure. Minor code may provide more information debug1: Next authentication method: publickey debug1: Offering public key: /home/user/.ssh/id_dsa debug1: Server accepts key: pkalg ssh-dss blen 434 Agent admitted failure to sign using the key. debug1: Trying private key: /home/user/.ssh/identity debug1: Trying private key: /home/user/.ssh/id_rsa debug1: Next authentication method: password [email protected]'s password:
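
    One detail in the debug output above that is easy to miss: the server does accept the offered key ("Server accepts key: pkalg ssh-dss"), but the local agent then reports "Agent admitted failure to sign using the key". That points at the ssh-agent on the client rather than at the server's sshd_config, and it can usually be fixed without any root or sudo rights. A hedged check, assuming the id_dsa key shown in the log (user and host below are placeholders):

      ssh-add -l                 # list the keys the agent currently holds
      ssh-add ~/.ssh/id_dsa      # load the key that was offered in the debug output
      ssh username@hostname.com  # then retry the user@host form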

    Read the article

  • Handling site not found and page not found with dynamic mass virtual hosting

    - by Rick Moynihan
    I have recently setup mass virtual hosting in Apache so that all we need to do is create a directory to create a new vhost. We're then also using wildcard DNS to map all subdomains to the server running our Apache instance. This works excellently, however I'm now having trouble configuring it to fail-over to an appropriate default/error-page when the vhost directory does not exist. The problem appears to be conflated between by my desire to handle the two error conditions: vhost not found i.e. there was no directory found matching the host supplied in the HTTP host header. I'd like this to display an appropriate site not found error page. The 404 page not found condition of the vhost. Additionally I have a specialised "api" vhost in its own vhost block. I've tried a number of variations and none seem to exhibit the behaviour I want. Here's what I'm working with right now: NameVirtualHost *:80 <VirtualHost *:80> DocumentRoot /var/www/site-not-found ServerName sitenotfound.mydomain.org ErrorDocument 500 /500.html ErrorDocument 404 /500.html </VirtualHost> <VirtualHost *:80> ServerName api.mydomain.org DocumentRoot /var/www/vhosts/api.mydomain.org/current # other directives, e.g. setting up passenger/rails etc... </VirtualHost> <VirtualHost *:80> # get the server name from the Host: header UseCanonicalName Off VirtualDocumentRoot /var/www/vhosts/%0/current # other directives ... e.g proxy passing to api etc... ErrorDocument 404 /404.html </VirtualHost> My understanding is that the first vhost block is used as the default, so I have this here as my catch all site. Next I have my API vhost, and then finally my mass vhost block. So for a domain that doesn't match the first two ServerName's and has no corresponding directory in /var/www/vhosts/ I'd expect it to fall-over to the first vhost, however with this setup, all domains resolve to my default site-not-found. Why is this? By putting the mass-vhost block first, I can get the mass-vhosts to resolve properly, but not my site-not-found vhost... and in this case I can't seem to find a way to distinguish between a page-level 404 in the vhost, and the case where the VirtualDocumentRoot fails to find a vhost directory (this appears to use the 404 also). Any help out of this bind is much appreciated!

    Read the article

  • ssh refuses to authenticate keys

    - by MixturaDementiae
    So I am setting up a connection between my machine [fedora 17] and a virtual machine running in Virtual Box in which is running CentOS 5. I have installed openssh from the repositories on CentOS, and I have configured everything as it follows: Protocol 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key SyslogFacility AUTHPRIV PermitRootLogin yes RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile /home/pigreco/.ssh/authorized_keys PasswordAuthentication no ChallengeResponseAuthentication yes GSSAPIAuthentication yes GSSAPICleanupCredentials yes UsePAM yes AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE AcceptEnv XMODIFIERS X11Forwarding yes Subsystem sftp /usr/libexec/openssh/sftp-server this is the configuration file sshd_config on the server i.e. on the CentOS. Moreover I have created a public/private key pair as usual on the .ssh/ folder in my home directory in my OS, i.e. Fedora, and then I've copied with scp the id_rsa.pub to the server and then I have appended its content to the file .ssh/authorized_keys on the server machine. The error that I get is the following: OpenSSH_5.9p1, OpenSSL 1.0.0j-fips 10 May 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 50: Applying options for * debug1: Connecting to 192.168.100.13 [192.168.100.13] port 22. debug1: Connection established. debug1: identity file /home/mayhem/.ssh/identity type -1 debug1: identity file /home/mayhem/.ssh/identity-cert type -1 debug1: identity file /home/mayhem/.ssh/id_rsa type 1 debug1: identity file /home/mayhem/.ssh/id_rsa-cert type -1 debug1: identity file /home/mayhem/.ssh/id_dsa type -1 debug1: identity file /home/mayhem/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 16:e5:72:d1:37:94:1b:5e:3d:3a:e5:da:6f:df:0c:08 debug1: Host '192.168.100.13' is known and matches the RSA host key. debug1: Found key in /home/mayhem/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,keyboard-interactive debug1: Next authentication method: gssapi-keyex debug1: No valid Key exchange context debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information Cannot determine realm for numeric host address debug1: Unspecified GSS failure. Minor code may provide more information Cannot determine realm for numeric host address debug1: Unspecified GSS failure. Minor code may provide more information debug1: Unspecified GSS failure. 
Minor code may provide more information Cannot determine realm for numeric host address debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/mayhem/.ssh/id_rsa debug1: Server accepts key: pkalg ssh-rsa blen 279 Agent admitted failure to sign using the key. debug1: Trying private key: /home/mayhem/.ssh/identity debug1: Trying private key: /home/mayhem/.ssh/id_dsa debug1: Next authentication method: keyboard-interactive Do you have some good suggestion of what I can do? thank you
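
    The trace ends with the same ssh-agent message seen in other reports ("Agent admitted failure to sign using the key"), so loading the key into the client-side agent with ssh-add is worth trying first. If that does not help, the other usual culprits on a CentOS guest are permissions and SELinux labels on the target account's ~/.ssh, since sshd silently ignores an authorized_keys file it considers unsafe. A hedged checklist (paths assume the pigreco account named in the sshd_config above):

      # on the Fedora client: make sure the agent actually holds the offered key
      ssh-add ~/.ssh/id_rsa
      # on the CentOS guest: tighten permissions and restore the SELinux context
      chmod go-w /home/pigreco
      chmod 700 /home/pigreco/.ssh
      chmod 600 /home/pigreco/.ssh/authorized_keys
      restorecon -R -v /home/pigreco/.ssh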

    Read the article

  • what can cause a folder to become indestructible?

    - by JustJeff
    I have a directory that I want to delete, but windows (xp sp3) is giving me the run-around and the folder is now effectively indestructible. Attempts to open the folder, either via explorer or cmd.exe are met with 'd:/temp/foo Is Not Accessible. Access is denied'. Attempts to delete the folder result in 'Cannot delete foo: The directory is not empty' So I can't delete it because supposedly it's not empty, but windows won't let me in it for some reason, so I can't clean it out first. There's nothing in it of consequence, and basically I just want to delete it at this point. Thinking that some other process must have a lock on it, I used the SysInternals 'handles' and Process Explorer to look for open handles with the directory name. These turned up no matches. (The directory name is not actually 'foo', it is something more unique but 'foo' is easier to type here). I put the machine through a restart, and the problem persists. I did a search for the folder name with regedit, to see what other apps might be aware of it. No match. The properties dialog was mildly interesting. The Read-Only attribute is 'semi-checked', i.e., the grayish check mark you get when some parts are and some parts aren't. Naturally I immediately unchecked this, and tried to delete the folder. No go. Opening properties again reveals the gray check mark next to Read-Only has returned. All the stats, size, size on disk, files, folders, all these are zero. There do not appear to be any shares on the folder, so that's not it either. Finally, I tried opening the partition's properties, and running the Tools/Error Checking utility. This didn't turn up any problems either. Fwiw, this directory was created by [a popular gui zip tool] when I tried to unpack a tar-and-zipped archive created on another system with command line utils. The archive was definitely corrupt, but I've never seen such a file do anything worse than crash the zip app, and certainly never leave permanent glitches in the file system. So what else can possibly be going on to make this folder behave this way?

    Read the article

  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on stackoverflow, but it may be better suited for serverfault... I'm having a problem running Apache + Subversion with SSL behind an Nginx proxy and I'm hoping someone might have the answer. I've scoured google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and apache config files in question: Nginx upstream subversion_hosts { server 127.0.0.1:80; } server { listen x.x.x.x:80; server_name hostname; access_log /srv/log/nginx/http.access_log main; error_log /srv/log/nginx/http.error_log info; # redirect all requests to https rewrite ^/(.*)$ https://hostname/$1 redirect; } # HTTPS server server { listen x.x.x.x:443; server_name hostname; passenger_enabled on; root /path/to/rails/root; access_log /srv/log/nginx/ssl.access_log main; error_log /srv/log/nginx/ssl.error_log info; ssl on; ssl_certificate server.crt; ssl_certificate_key server.key; add_header Front-End-Https on; location /svn { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; set $fixed_destination $http_destination; if ( $http_destination ~* ^https(.*)$ ) { set $fixed_destination http$1; } proxy_set_header Destination $fixed_destination; proxy_pass http://subversion_hosts; } } Apache Listen 127.0.0.1:80 <VirtualHost *:80> # in order to support COPY and MOVE, etc - over https (443), # ServerName _must_ be the same as the nginx servername # http://trac.edgewall.org/wiki/TracNginxRecipe ServerName hostname UseCanonicalName on <Location /svn> DAV svn SVNParentPath "/srv/svn" Order deny,allow Deny from all Satisfy any # Some config omitted ... </Location> ErrorLog /var/log/apache2/subversion_error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/subversion_access.log combined </VirtualHost> From what I could tell while researching this problem, the server name has to match on both the apache server as well as the nginx server, which I've done. Additionally, this problem seems to stick around even if I change the configuration to use http only.
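
    To narrow down which hop is producing the 502, it can help to reproduce a COPY outside of the svn client and watch the nginx and Apache error logs while it runs. A hedged example (the repository path and target are made up; -k skips certificate checks):

      curl -vk -X COPY \
           -H 'Destination: https://hostname/svn/repo/tags/release-1.0' \
           https://hostname/svn/repo/trunk
      # then compare /srv/log/nginx/ssl.error_log with /var/log/apache2/subversion_error.log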

    Read the article

  • dd-wrt router firmware QoS troubleshooting

    - by Jeff Atwood
    I've been using the dd-wrt firmware on my router and I like it a lot! But -- I'm not sure the quality of service (QoS) is working on it. I have it set up as follows: http, port 80 -- Premium bittorrent, port 6969 -- Bulk https, port 443 -- Premium dns, port 53 -- Premium Per the QoS documentation, these levels are: bandwidth is allocated based on the following percentages of uplink and downlink values for each class: Exempt: 100mbps - ignores global limits. Premium: 75% - 100% Express: 15% - 100% Standard: 10% - 100% Bulk: 1.5% - 100% This doesn't entirely seem to work, though -- with busy torrents going I get major pauses in my web browsing which sucks! The QoS documentation gives some steps to check the QoS ... What you'll be interested to look at will be the first set of source and destination IP, including the port numbers. Next the presence of l7proto and the "mark" field. The entries indicate the current live connection QoS priority applied on them based on the "mark" field. The "mark" values correspond to the following Exempt: 100 Premium: 10 Express: 20 Standard: 30 Bulk: 40 (no QoS matched): 0 You may see "mark=0" for some l7proto service even though they are in configured in the list of QoS rules. This may mean that the layer 7 pattern matching system didn't match a new or changed header for that protocol. Custom service on port matches will usually take care of these. On port 6969 (bittorrent) I see a weird mixture of stuff with mark=0 and mark=40 like so cat /proc/net/ip_conntrack udp 17 105 src=98.162.182.42 dst=1.2.3.4 sport=64512 dport=6969 packets=3 bytes=290 src=10.0.0.2 dst=98.162.182.42 sport=6969 dport=64512 packets=4 bytes=202 [ASSURED] mark=0 secmark=0 use=1 tcp 6 117 TIME_WAIT src=98.248.173.174 dst=1.2.3.4 sport=51114 dport=6969 packets=12 bytes=704 src=10.0.0.2 dst=98.248.173.174 sport=6969 dport=51114 packets=10 bytes=440 [ASSURED] mark=40 secmark=0 use=1 tcp 6 598 ESTABLISHED src=165.132.128.201 dst=1.2.3.4 sport=57218 dport=6969 packets=8024 bytes=9919881 src=10.0.0.2 dst=165.132.128.201 sport=6969 dport=57218 packets=4211 bytes=239607 [ASSURED] mark=0 secmark=0 use=1 tcp 6 586 ESTABLISHED src=68.46.9.24 dst=1.2.3.4 sport=64688 dport=6969 packets=6 bytes=490 src=10.0.0.2 dst=68.46.9.24 sport=6969 dport=64688 packets=8 bytes=944 [ASSURED] mark=40 secmark=0 use=1 udp 17 45 src=222.254.228.38 dst=1.2.3.4 sport=25438 dport=6969 packets=5 bytes=454 src=10.0.0.2 dst=222.254.228.38 sport=6969 dport=25438 packets=3 bytes=154 [ASSURED] mark=0 secmark=0 use=1 ( full file visible at http://pastebin.com/AZE6EtWm ) I've been playing around with this log for a little while and I can't see any patterns! Why is some port 6969 bittorrent traffic tagged mark=0 (not matched) by dd-wrt's QoS while others are tagged mark=40 (Bulk) .. any ideas?
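
    To see whether there is any pattern at all, it may be easier to tally the marks across every port-6969 connection than to eyeball the conntrack dump. A rough one-liner for the router's shell (BusyBox awk should handle it):

      awk '/port=6969/ { for (i = 1; i <= NF; i++) if ($i ~ /^mark=/) seen[$i]++ }
           END { for (m in seen) print m, seen[m] }' /proc/net/ip_conntrack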

    Read the article

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtp daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page. My question: Does anyone have access to a resource that describes these hooks, the components/delivery stages that the hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support? For example: Given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice". My software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find that which best fits (ex: a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible). Real-world example: The apolicy policy server (performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it will create a TCP connection to the policy server address:port for the purpose of determining if the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action will be taken based on the server's response. Notes: 1)The Postfix architecture page describes some of this information in ascii art; what I am hoping for is distilled, condensed, reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;) Thanks!
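
    For what it is worth, the apolicy example at the end corresponds to configuration along the lines of the sketch below; the same check_policy_service hook can sit in smtpd_client_restrictions (as described) or in smtpd_recipient_restrictions if the decision needs the recipient address. The values are illustrative, not a recommendation:

      postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, check_policy_service inet:127.0.0.1:10001'
      postfix reload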

    Read the article

  • Can Vagrant point to a directory of Puppet manifests for execution?

    - by SeligkeitIstInGott
    I am using Vagrant to jump start some initial Puppet config and am confused on how to include/run multiple manifests (other than just site.pp) in the puppet execution workflow without making the extra manifests into modules and including them that way. In the puppet manifests directory that I point Vagrant to (see below) I have two manifests that I want executed: site.pp and hierasetup.pp. config.vm.provision "puppet" do |puppet| puppet.manifests_path = "puppet_files/manifests" puppet.module_path = "puppet_files/modules" puppet.manifest_file = "site.pp" puppet.options = "--verbose --debug" end Currently I am having site.pp be the manifest that calls hierasetup.pp. My site.pp looks like this: File { owner => 'root', group => 'root', mode => '0644', } import "hierasetup.pp" include jboss But I get this error about the deprecation of "import": Warning: The use of 'import' is deprecated at /tmp/vagrant-puppet-1/manifests/site.pp:33. See http://links.puppetlabs.com/puppet-import-deprecation (at grammar.ra:610:in `_reduce_190') According to the referenced URL under "Things to try instead" it says "To keep your node definitions in separate files, specify a directory as your main manifest". Further this puppet doc on main manifests says: "Recommended: If you’re using the main manifest heavily instead of relying on an ENC, consider changing the manifest setting to $confdir/manifests. This lets you split up your top-level code into multiple files while avoiding the import keyword. It will also match the behavior of simple environments." It appears that Puppet can reference an entire directory instead of just a specific manifest file, such that I would expect that Vagrant would make a provision for this and allow me to drop the "puppet.manifest_file = "site.pp" line and point to the parent directory instead in which all the *.pp files there will be executed. However removing that line in Vagrant merely generates a complaint about an expected "default.pp" in its stead: puppet provisioner: * The configured Puppet manifest is missing. Please specify a path to an existing manifest: /some/path/puppet_files/manifests/default.pp So: Firstly, do I understand the "new" (non-import) way of calling multiple manifests correctly, in that a directory is to be pointed to in which all the *.pp files inside it will be executed? And secondly, has Vagrant "caught up" with this new change to accommodate the referencing of directories in conjunction with Puppet's deprecation of "import"? Update: Thanks to Shane the issue with #2 (Vagrant's code not being caught up to allow pointing to puppet manifest directories) was reported on Vagrant's GitHub issue tracker site and has since been patched: https://github.com/mitchellh/vagrant/issues/4169

    Read the article

  • HTTPS request to a specific load-balanced virtual host (using Shibboleth for SSO)?

    - by Gary S. Weaver
    In one environment, we have three servers load balanced that have a single Tomcat instance on each, fronted by two different Apache virtual hosts. Each of those two virtual hosts (served by all three servers) has its own different load balancer. Internally, the first host (we'll call it barfoo) is served by port 443 (HTTPS) with its cert and the second host (we'll call it foobar) is served by port 1443 (HTTPS). When you hit foobar, it goes to the load balancer which is using IP affinity for that host, so you can easily test login/HTTPS on one of the servers serving foobar, but not the others (because you keep getting that server for the lifetime of the LB session, iirc). In addition, each of the servers are using Shibboleth v2 for authN/SSO, using mod_shib (iirc). So, a normal request to foobar hits the LB, is directed to the 3rd server (and will do that from then on for as long as the LB session lasts), then Apache, then to the Shibboleth SP which looks at the request, makes you login via negotiation with the Shibboleth IdP, then you hit Apache again which in turn hits Tomcat, renders, and returns the response. (I'm leaving out some steps there.) We'd like to hit one of the individual servers (foobar-03.acme.org which we'll say has IP 1.2.3.4) via HTTPS (skipping the load balancer), so we at first try putting this in /etc/hosts: 1.2.3.4 foobar.acme.org But since foobar.acme.org is a secondary virtual host running on 1443, it attempts to get barfoo.acme.org rather than foobar.acme.org at port 1443 and see that the cert for barfoo.acme.org is invalid for this case since it doesn't match the request's host, foobar.acme.org. I thought an ssh tunnel might be easy enough, so I tried: ssh -L 7777:foobar-03.acme.org:1443 [email protected] I tried just hitting https://localhost:7777/webappname in a browser, but when the Shibboleth login is over, it again tries to redirect to barfoo.acme.org, which is the default host for 443, and we get into an infinite redirect loop. I then tried setting up an SSH tunnel with privileged port 443 locally going to 443 of foobar-03.acme.org as the hostname for that virtual host: sudo ssh -L 443:foobar-03.acme.org:1443 [email protected] I also edited /etc/hosts to add: 127.0.0.1 foobar.acme.org This finally worked and I was able to get the browser to hit the individual HTTPS host at https://foobar.acme.org/webappname, bypassing the load balancer. This was a bit of a pain and wouldn't work for everyone, due to the requirement to use the local 443 port and ssh to the server. Is there an easier way to browse to and log into an individual host in this case?
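
    For quickly checking which backend answers, curl can pin a hostname to one server without touching /etc/hosts or setting up a tunnel, assuming a curl new enough to have --resolve (7.21.3 or later). It will not carry you through the interactive Shibboleth login the way the ssh tunnel does, but it can be handy for verifying the certificate and vhost selection on the individual machine:

      curl -vk --resolve foobar.acme.org:1443:1.2.3.4 https://foobar.acme.org:1443/webappname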

    Read the article

  • Deployment/provisioning tool for commercial applications (not developed in-house)

    - by mfinni
    I help manage a few hosted commercial applications, and we have a lot of manual processes involved when doing new customer-instance deployments into the shared (multitenant) environment. Allow me to describe the most relevant features, and then we can talk about the tools. We have an application on AIX, that requires dozens of changes to config files (some plain text, some XML) as well as a good number of commands to be run on multiple servers - some to start the new instance, some to restart our shared authentication and reporting engines, etc. The config changes follow templates, of course. The servers in question will also depend on the initial conditions specified by the implementer/deployer - we may choose to deploy a given customer to our servers in Europe, or one set of servers may be active-active whereas a different set of servers is active-passive - in short, there's a lot of complications. We have another application that run on IIS 6 and SQL. The DBAs don't want any automation of the SQL components and that's fine with me, but automating the IIS bit would be great. For a new customer instance, we make a filesystem copy of a template Virtual Directory target named after the new customer, make a new AppPool to match, edit a VirDir template .xml file to replace the filepaths and AppPool names with the new ones, and then make a new VirDir from the modified template XML to point to the new filesystem folder and app pool. For the first case, something like ControlTier or Chef might be good. For the second, the new(ish) Web Deploy from MS would probably do a good job. Has anyone used these tools or others to do something similar for applications? More of a nice-to-have, not a fixed requirement - Has anyone used anything that works on both platforms? I'm looking for something free, because the official word is that within a year, we will have whatever HP has renamed the OpsWare suite, which should be able to do stuff like this. Edit - based on someone's suggestion, looking at CFengine for the AIX application, it doesn't seem to address my pain. The problem isn't keeping a given config synced across dozens of servers, we have rsync for that. The problem is that onboarding a new customer instance touches dozens of files, putting pieces of the same or similar information into them - some are new stanzas in existing files, some are new files, and some are new directories. This is a several-hours-long process that is also error-prone because it's mostly done by hand. I guess I'm looking for config-file generation and management. I have built a small Perl script to do something similar for a much smaller case - it binds a CSV file into variables, and then does a copy-and-search-and-replace from a set of template config files. I could probably do the same here.

    Read the article

  • Out of nowhere, ssh_exchange_identification: Connection closed by remote host (me too)

    - by dgerman
    See similar: Out of nowhere, ssh_exchange_identification: Connection closed by remote host Today, 6/19/12 attempting to ssh to the same host as usual ssh replied ssh_exchange_identification: Connection closed by remote host two additional attempts failed ssh -v $RWS OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug1: Applying options for * debug1: Connecting to Real-World-Systems.com [174.127.119.33] port 22. debug1: Connection established. debug1: identity file /Users/dgerman/.ssh/id_rsa type 1 debug1: identity file /Users/dgerman/.ssh/id_rsa-cert type -1 debug1: identity file /Users/dgerman/.ssh/id_dsa type -1 debug1: identity file /Users/dgerman/.ssh/id_dsa-cert type -1 ssh_exchange_identification: Connection closed by remote host ping host was successful, ftp host was successful, ssh now successful, ssh -v $RWS OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug1: Applying options for * debug1: Connecting to Real-World-Systems.com [174.127.119.33] port 22. debug1: Connection established. debug1: identity file /Users/dgerman/.ssh/id_rsa type 1 debug1: identity file /Users/dgerman/.ssh/id_rsa-cert type -1 debug1: identity file /Users/dgerman/.ssh/id_dsa type -1 debug1: identity file /Users/dgerman/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3 debug1: match: OpenSSH_4.3 pat OpenSSH_4* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'real-world-systems.com' is known and matches the RSA host key. debug1: Found key in /Users/dgerman/.ssh/known_hosts:5 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/dgerman/.ssh/id_rsa debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug1: Trying private key: /Users/dgerman/.ssh/id_dsa debug1: Next authentication method: password ++++ What gives?? +++++++++++ Mac OS X 10.4.7 , OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011, /Users/dgerman/.ssh > ls -la total 24 drwx------ 7 dgerman staff 238 Jun 19 15:46 . drwxr-xr-x 389 dgerman staff 13226 Jun 19 15:46 .. -rw------- 1 dgerman staff 1766 Feb 26 18:25 id_rsa -rw-r--r-- 1 dgerman staff 400 Feb 26 18:25 id_rsa.pub -rw-r--r-- 1 dgerman staff 67 Feb 26 18:27 keyfingerprint -rw-r--r-- 1 dgerman staff 6215 May 1 08:11 known_hosts -rw-r--r-- 1 dgerman staff 220 Feb 26 18:26 randomart
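
    Since the connection came back on its own, something on the server side was most likely rejecting connections before the banner exchange, rather than anything in the local ~/.ssh listing above. If you can get a shell on the host (or ask the provider), the usual suspects are TCP wrappers, fail2ban/denyhosts and sshd's MaxStartups limit; a hedged place to start looking (log paths vary by distro):

      grep -i ssh /etc/hosts.deny /etc/hosts.allow
      sudo grep -i 'refused\|ssh' /var/log/secure /var/log/auth.log 2>/dev/null | tail -n 50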

    Read the article

  • open source knowledge base CMS system

    - by Thomi
    I'm looking for an open source knowledge base system that uses tags, rather than free-text search to identify articles (a lot like serverfault does). I've looked at twiki, which many people suggested, but haven't found what I'm looking for. Basically I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags. Edit: OK, here's some more detail regarding what I want. Basically, all the knowledge base systems I have seen so far are a collection of articles, each article with a title. Most of them allow you to categorise articles into groups and sub-groups. Users of the system can search for information using a title search, for example "How do I print from AwesomeProduct?" - which then shows a list of any articles that match that search text. This is fine and dandy when your KB is for one version of the software product (the mythical AwesomeProduct ver 1.0). However, the development team then go ahead and create a new version (ver 2.0) that adds many new features and changes some existing features. Now, how do we support both products in the same KB? The Naive method is to copy all articles from 1.0, and update them for 2.0, adding and removing articles in 2.0 as required. We can then add text at the top of every 1.0 article that says: "this articles applies to 1.0 only, to see the 2.0 version, click here" (or something similar) The problem with articles being indexed in the system by title is that it's very hard to filter based on meta-data like version. What happens when we create version 3.0 or 4.0? The end-situation here is that you have a mess of articles. They're hard to search, hard to filter, and even harder to manage. The solution (it seems to me) is to use tags, rather than text as the article index mechanism. So articles can be tagged with a tag representing the software version, topic area etc. etc. Users can then filter based on tag - an example search might be "version_1 printing" - which straight away gives a list of articles with all these tags. So that's what I'm looking for - a KB system that uses tags, rather than text to index many articles. I'm sure I could build something with drupal, but I was hoping for something that worked out-of-the-box.

    Read the article

  • Unable to telnet out on port 25 on windows server 2008

    - by NickGPS
    Hi All, I just setup a Windows 2008 R2 server and am trying to get a basic mail server up and running so that I can send emails from my applications. I setup a virtual SMTP server in IIS6 and tried doing a local telnet to port 25, which seemed to work fine. There were no errors during this stage and I can see the mail message appear in the Queue folder. The problem is that mail never leaves the Queue folder. I then tried to telnet to a remote mail server on port 25 but couldn't connect:- telnet 209.85.227.27 25 Could not open connection to the host, on port 25: Connection failed) I checked my firewall and there is a default setting to allow all outgoing TCP traffic with no restriction. I even setup a specific rule for outgoing port 25 traffic but to no avail. I then ran a SmtpDiag.exe command .\SmtpDiag.exe [email protected] [email protected] and received the following output Searching for Exchange external DNS settings. Computer name is WIN-SERVERNAME. Failed to connect to the domain controller. Error: 8007054b Checking SOA for gmail.com. Checking external DNS servers. Checking internal DNS servers. SOA serial number match: Passed. Checking local domain records. Checking MX records using TCP: gmail.com. Checking MX records using UDP: gmail.com. Both TCP and UDP queries succeeded. Local DNS test passed. Checking remote domain records. Checking MX records using TCP: gmail.com. Checking MX records using UDP: gmail.com. Both TCP and UDP queries succeeded. Remote DNS test passed. Checking MX servers listed for [email protected]. Connecting to gmail-smtp-in.l.google.com [209.85.227.27] on port 25. Connecting to the server failed. Error: 10060 Failed to submit mail to gmail-smtp-in.l.google.com. Is there any other diagnostics I can do to figure out if it's my firewall or something else? I have removed antivirus to make sure that it wasn't causing the problem. Any ideas would be much appreciated.

    Read the article

  • NIC is receiving, but not transmitting at all?

    - by Shtééf
    I'm trying to fix a very strange problem remotely on a machine at a customer site. The machine is a Dell PowerEdge, I believe a 1950 (haven't verified, but the lspci output matches specs I found.) The machine has two similar NICs, identified as Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12) by lspci, and using the bnx2 driver. (I suspect these are on-board and on the same controller, which is what I'm accustomed to for this type of machine.) The primary interface eth0 works perfectly, and is in fact how I am ssh'd in. However, the secondary interface eth1 is not transmitting. I can see this in ifconfig output, for example, where the TX field is always 0. However, it is receiving, and tcpdump shows ARP requests coming from the ISP's gateway on the other side. The interface is physically connected to a Siemens BSTU4 modem, configured by the ISP. The link is properly set to 10MBps and full duplex, without negotation, as the ISP requested. A small /30 subnet is configured. For the sake of anonimity, let's say the machine is 3.3.3.2/30, and the ISP's gateway .1. The machine has no firewall settings whatsoever. Even running something like arping -I eth1 3.3.3.1, and running tcpdump alongside, shows no traffic whatsoever being transmitted on the interface. (But the other side keeps steadily sending ARP requests, and that is all that can be seen.) What could be causing this? Here's some output, anonymized, which may hopefully help: $ ethtool eth1 Settings for eth1: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supports auto-negotiation: Yes Advertised link modes: Not reported Advertised auto-negotiation: No Speed: 10Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: off Supports Wake-on: d Wake-on: d Link detected: yes $ ip link show eth1 3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000 link/ether 00:15:c5:xx:xx:xx brd ff:ff:ff:ff:ff:ff $ ip -4 addr show eth1 3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000 inet 3.3.3.2/30 brd 3.3.3.3 scope global eth1 $ ip -4 route show match 3.3.3.0/30 3.3.3.0/30 dev eth1 proto kernel scope link src 3.3.3.2 default via 10.0.0.5 dev eth0
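
    A couple of low-level checks that might narrow this down before blaming the modem: the driver's own TX counters and the kernel log usually show whether the bnx2 transmit path is wedged, and the neighbour table shows whether ARP for the gateway ever completes. A hedged starting point:

      ethtool -S eth1 | grep -i -e tx -e err -e drop   # per-driver TX and error counters
      dmesg | grep -i -e bnx2 -e eth1                  # transmit timeouts, firmware resets
      ip neigh show dev eth1                           # does 3.3.3.1 ever get a resolved MAC?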

    Read the article

  • mixing different technologies using ARR reverse proxy

    - by Jaepetto
    I'm currently trying to put together a proof of concept on mixing various technologies onto one web site in order to ease migrations and add flexibility. The idea is to create one 'mashup' site behind an IIS 7.5 ARR reverse proxy. For the time being the ARR reverse proxy forwards all request to our main site. The request are as follow: client -> ARR: Get / ARR -> Server 1: Get / Server 1 -> ARR: 200: /index.htm ARR -> client: 200: /index.htm ...so far so good. Let's say, I want to add a new site (root of another server) as a subsite of my main website. a simple inbound rule does the trick: <rule name="sub1" stopProcessing="true"> <match url="^mySubsite(.*)" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false" /> <action type="Rewrite" url="http://server2/{R:1}" /> </rule> The requests now are: client -> ARR: Get /mySubsite ARR -> Server 2: Get / Server 2 -> ARR: 200: /index.htm ARR -> client: 200: /index.htm ... still ok. The issue comes when the site on server2 sends a redirection (e.g. to a login page). In the case of SharePoint, it will redirect the user to: /_layouts/Authenticate.aspx?Source=%2F ...which does not exists: client -> ARR: Get /mySubsite ARR -> Server 2: Get / Server 2 -> ARR: 301: /_layouts/Authenticate.aspx?Source=%2F ARR -> client: 301: /_layouts/Authenticate.aspx?Source=%2F client -> ARR: Get /_layouts/Authenticate.aspx?Source=%2F ARR -> client: 404: Not Found Does anyone know a way write the outbound rule to rewrite the response from server 2 "301: /_layouts/Authenticate.aspx?Source=%2F" to "301: /mySubsite/_layouts/Authenticate.aspx?Source=%2FmySubsite%2F"?

    Read the article

  • Lighttpd with FastCGI configuration running ViewVC - rewrite problems

    - by 0xC0000022L
    At the moment I am struggling with the configuration of lighttpd together with ViewVC. The configuration was ported from Apache 2.2.x, which is still running on the machine, serving the WebDAV/SVN stuff, being proxied through. Now, the problem I am having appears to be with the rewrite rules and I'm not really sure what I am missing here. Here's my configuration (slightly condensed to keep it concise): var.hgwebfcgi = "/var/www/vcs/bin/hgweb.fcgi" var.viewvcfcgi = "/var/www/vcs/bin/wsgi/viewvc.fcgi" var.viewvcstatic = "/var/www/vcs/templates/docroot" var.vcs_errorlog = "/var/log/lighttpd/error.log" var.vcs_accesslog = "/var/log/lighttpd/access.log" $HTTP["host"] =~ "domain.tld" { $SERVER["socket"] == ":443" { protocol = "https://" ssl.engine = "enable" ssl.pemfile = "/etc/lighttpd/ssl/..." ssl.ca-file = "/etc/lighttpd/ssl/..." ssl.use-sslv2 = "disable" setenv.add-environment = ( "HTTPS" => "on" ) url.rewrite-once += ("^/mercurial$" => "/mercurial/" ) url.rewrite-once += ("^/$" => "/viewvc.fcgi" ) alias.url += ( "/viewvc-static" => var.viewvcstatic ) alias.url += ( "/robots.txt" => var.robots ) alias.url += ( "/favicon.ico" => var.favicon ) alias.url += ( "/mercurial" => var.hgwebfcgi ) alias.url += ( "/viewvc.fcgi" => var.viewvcfcgi ) $HTTP["url"] =~ "^/mercurial" { fastcgi.server += ( ".fcgi" => ( ( "bin-path" => var.hgwebfcgi, "socket" => "/tmp/hgwebdir.sock", "min-procs" => 1, "max-procs" => 5 ) ) ) } else $HTTP["url"] =~ "^/viewvc\.fcgi" { fastcgi.server += ( ".fcgi" => ( ( "bin-path" => var.viewvcfcgi, "socket" => "/tmp/viewvc.sock", "min-procs" => 1, "max-procs" => 5 ) ) ) } expire.url = ( "/viewvc-static" => "access plus 60 days" ) server.errorlog = var.vcs_errorlog accesslog.filename = var.vcs_accesslog } } Now, when I access the domain.tld, I correctly see the index of the repositories. However, when I look at the links for each respective repository (or click them, for that matter), it's of the form https://domain.tld/viewvc.fcgi/reponame instead of the intended https://domain.tld/reponame. What do I have to change/add to achieve this? Do I have to "abuse" the index file mechanism somehow? Goal is to keep the /mercurial alias functional. So far I've tried sifting through the lighttpd book from Packt again, also through the lighttpd documentation, but found nothing that seemed to match the problem.

    Read the article

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed debian 7 on a physical machine. This is the configuration of the machine:
      3 hard drives using RAID 5
      Strip element size: 1M
      Read policy: Adaptive read ahead
      Write policy: Write Through
      /boot 200 MB ext2
      / 15 GB ext3
      SWAP 10GB
      LVM rest (~500GB)
    I installed postgresql, created a big database (over 1GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds to run. Then, I installed XEN, created a domU, with another debian distro. On this OS, I also installed postgresql, with the same database. The same SQL request takes only 2.5 seconds to run. I checked the kernel on both dom0 and domU. uname -a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same. For those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time). After this, I checked the file systems. I used ext3 on domU. Still no changes. I installed hdparm, and ran hdparm -Tt on both systems, on all my partitions on both systems, and I get similar results. Now, I am stuck, I don't know what is different, and what could be the cause of such a big difference. Additional Info:
      Debian runs on a Dell PowerEdge 2950 server
      postgresql: 9.1.9 (both dom0 and domU)
      xen-linux-system: 3.2.0
      xen-hypervisor: 4.1
    Thanks
    EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU:
      root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
      ^C2020+0 records in
      2020+0 records out
      2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s
      root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
      2020+0 records in
      2020+0 records out
      2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s
    And here is dom0:
      root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
      ^C1693+0 records in
      1693+0 records out
      1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s
      root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
      1693+0 records in
      1693+0 records out
      1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s
    What can be the cause of this caching system? And how can we "fix" it? Can we apply it to dom0?
    EDIT 2: I switched my virtual disk type. To do so I followed this article. I did a dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M. Then in /etc/xen/test1.cfg, I changed the disk parameter to use file: instead of phy:. It should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).
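
    The 3.4 GB/s read on dom0 is a strong hint that this dd was served from the Linux page cache rather than from the RAID array, so the dom0 and domU numbers above are not really comparable. A hedged way to repeat the test with caching taken out of the picture, on both systems:

      sync && echo 3 > /proc/sys/vm/drop_caches          # drop the page cache first
      dd if=/dev/zero of=/root/dd bs=1M count=2000 oflag=direct
      dd if=/root/dd of=/dev/null bs=1M iflag=direct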

    Read the article

  • joining text files with 600M+ lines

    - by dnkb
    I have two files, huge.txt and small.txt. Huge has around 600M rows and it's 14 gigs; each line has four space-separated words (tokens) and finally another space-separated column with a number. Small has 150K rows with a size of ~3M, a space-separated word and a number. Both files are sorted using the sort command, with no extra options. The words in both files may include apostrophes (') and dashes (-). The desired output would contain all columns from huge.txt and the second column (the number) from small.txt where the first word of huge.txt and the first word of small.txt match. My attempts below failed miserably with the following error:
      cat huge.txt | join -o 1.1 1.2 1.3 1.4 2.2 - small.txt > output.txt
      join: memory exhausted
    What I suspect is that the sorting order isn't right somehow, even though the files are pre-sorted using:
      sort -k1 huge.unsorted.txt > huge.txt
      sort -k1 small.unsorted.txt > small.txt
    Problems seem to appear around words that have apostrophes (') or dashes (-). I also tried dictionary sorting using the -d option, bumping into the same error at the end. I see two ways out of this but don't know how to implement either of them. 1) Any tips on how to sort the files in a way that the join command considers them to be sorted properly? 2) I was thinking of calculating MD5 or some other hashes of the strings to get rid of the apostrophes and dashes, doing the sorting and joining with the hashes instead of the strings themselves, and at the end "translating" the hashes back to strings, but leaving the numbers intact at the end of the lines. Since there would be only 150K hashes it's not that bad. What would be a good way to calculate individual hashes for each of the strings? Some AWK magic? See the file samples below. Thank you!
      sample of huge.txt:
        had stirred me to 46
        had stirred my corruption 57
        had stirred old emotions 55
        had stirred something in 69
        had stirred something within 40
      sample of small.txt:
        caley 114881
        calf 2757974
        calfed 137861
        calfee 71143
        calflora 154624
        calfskin 148347
        calgary 9416465
        calgon's 94846
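
    On option 1: join expects its input to be ordered with the same collation rules it uses itself, and locale-aware sorting treats apostrophes and dashes specially, which is a classic cause of "memory exhausted" and "input is not sorted" surprises. Forcing the C locale for both sort and join usually makes them agree. A hedged sketch, assuming GNU coreutils (1.5 is the count column of huge.txt):

      export LC_ALL=C
      sort -k1,1 huge.unsorted.txt  > huge.txt
      sort -k1,1 small.unsorted.txt > small.txt
      join -1 1 -2 1 -o 1.1,1.2,1.3,1.4,1.5,2.2 huge.txt small.txt > output.txt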

    Read the article

  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and am close, I feel, and I'm pulling my hair out ... please help! I have an OpenVPN cliend whose server sets local routes and also changes the default gw (I know I can prevent that with --route-nopull). I'd like to have all outgoing http and ssh traffic via the local gw, and everything else via the vpn. Local IP is 192.168.1.6/24, gw 192.168.1.1. OpenVPN local IP is 10.102.1.6/32, gw 192.168.1.5 OpenVPN server is at {OPENVPN_SERVER_IP} Here's the route table after openvpn connection: # ip route show table main 0.0.0.0/1 via 10.102.1.5 dev tun0 default via 192.168.1.1 dev eth0 proto static 10.102.1.1 via 10.102.1.5 dev tun0 10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6 {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0 128.0.0.0/1 via 10.102.1.5 dev tun0 169.254.0.0/16 dev eth0 scope link metric 1000 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1 This makes all packets go via to the VPN tunnel except those destined for 192.168.1.0/24. Doing wget -qO- http://echoip.org shows the vpn server's address, as expected, the packets have 10.102.1.6 as source address (the vpn local ip), and are routed via tun0 ... as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic). What I tried was: create a 2nd routing table holding the 192.168.1.6/24 routing info (copied from the main table above) add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80 add an ip rule to match on the mangled packet and point it to the 2nd routing table add an ip rule for to 192.168.1.6 and from 192.168.1.6 to point to the 2nd routing table (though this is superfluous) changed the ipv4 filter validation to none in net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0 I also tried an iptables mangle output rule, iptables nat prerouting rule. It still fails and I'm not sure what I'm missing: iptables mangle prerouting: packet still goes via vpn iptables mangle output: packet times out Is it not the case that to achieve what I want, then when doing wget http://echoip.org I should change the packet's source address to 192.168.1.6 before routing it off? But if I do that, the response from the http server would be routed back to 192.168.1.6 and wget would not see it as it is still bound to tun0 (the vpn interface)? Can a kind soul please help? What commands would you execute after the openvpn connects to achieve what I want? Looking forward to hair regrowth ...
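
    A hedged sketch of the usual fwmark approach for this (table number and mark value are arbitrary): mark outgoing port 80/22 traffic, route marked packets through a table whose default route is the LAN gateway, and SNAT them on eth0 so the replies come back to the LAN address instead of the tun0 one, which is exactly the source-address problem described above:

      ip route add default via 192.168.1.1 dev eth0 table 100
      ip rule add fwmark 0x1 table 100
      iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,22 -j MARK --set-mark 0x1
      iptables -t nat -A POSTROUTING -o eth0 -p tcp -m multiport --dports 80,22 -j SNAT --to-source 192.168.1.6
      sysctl -w net.ipv4.conf.eth0.rp_filter=2   # loose reverse-path filtering for the rerouted replies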

    Read the article

  • Two-Hop SSH connection with two separate public keys

    - by yigit
    We have the following ssh hop setup: localhost -> hub -> server hubuser@hub accepts the public key for localuser@localhost. serveruser@server accepts the public key for hubuser@hub. So we are issuing ssh -t hubuser@hub ssh serveruser@server for connecting to server. The problem with this setup is we can not scp directly to the server. I tried creating .ssh/config file like this: Host server user serveruser port 22 hostname server ProxyCommand ssh -q hubuser@hub 'nc %h %p' But I am not able to connect (yigit is localuser): $ ssh serveruser@server -v OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012 debug1: Reading configuration data /home/yigit/.ssh/config debug1: /home/yigit/.ssh/config line 19: Applying options for server debug1: Reading configuration data /etc/ssh/ssh_config debug1: Executing proxy command: exec ssh -q hubuser@hub 'nc server 22' debug1: permanently_drop_suid: 1000 debug1: identity file /home/yigit/.ssh/id_rsa type 1000 debug1: identity file /home/yigit/.ssh/id_rsa-cert type -1 debug1: identity file /home/yigit/.ssh/id_dsa type -1 debug1: identity file /home/yigit/.ssh/id_dsa-cert type -1 debug1: identity file /home/yigit/.ssh/id_ecdsa type -1 debug1: identity file /home/yigit/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH_5* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA cb:ee:1f:78:82:1e:b4:39:c6:67:6f:4d:b4:01:f2:9f debug1: Host 'server' is known and matches the ECDSA host key. debug1: Found key in /home/yigit/.ssh/known_hosts:33 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/yigit/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /home/yigit/.ssh/id_dsa debug1: Trying private key: /home/yigit/.ssh/id_ecdsa debug1: No more authentication methods to try. Permission denied (publickey). Notice that it is trying to use the public key localuser@localhost for authenticating on server and fails since it is not the right one. Is it possible to modify the ProxyCommand so that the key for hubuser@hub is used for authenticating on server?
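
    One side effect of the ProxyCommand setup is that authentication to 'server' now happens from localhost, so 'server' has to trust a key that lives on localhost; the hub's key pair never comes into play for that hop. A hedged way to install the local public key through the existing hop, after which scp to 'server' should work directly (the file name is just an example):

      cat ~/.ssh/id_rsa.pub | ssh hubuser@hub "ssh serveruser@server 'cat >> ~/.ssh/authorized_keys'"
      scp somefile.txt server:

    If both ends run OpenSSH 5.4 or newer, 'ProxyCommand ssh -W %h:%p hubuser@hub' also avoids the need for nc on the hub.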

    Read the article

  • New-ManagedContentSettings - not working properly under Exchange 2010

    - by mfinni
    I have a client that is divesting a business unit into a new AD forest, Exchange org, etc. We're using Quest tools to migrate users and mailboxes. However, I have to build the new infrastructure to match the old one. In the old one, we're using Managed Folder Mailbox Policies to limit (or allow) retention. They started with Exchange 2007 and never upgraded to Retention Policies; oh well. So, in the old environment, when you use a 2007 server to define a new Managed Content Setting, you can pick "Email" from the dropdown for MessageClass. This is a display name; the actual MessageClass values are thus: MessageClass : IPM.Note;IPM.Note.AS/400 Move Notification Form v1.0;IPM.Note.Delayed;IPM.Note.Exchange.ActiveSync.Report;IPM.Note.JournalReport.Msg;IPM.Note.JournalReport.Tnef;IPM.Note.Microsoft.Missed.Voice;IPM.Note.Rules.OofTemplate.Microsoft;IPM.Note.Rules.ReplyTemplate.Microsoft;IPM.Note.Secure.Sign;IPM.Note.SMIME;IPM.Note.SMIME.MultipartSigned;IPM.Note.StorageQuotaWarning;IPM.Note.StorageQuotaWarning.Warning;IPM.Notification.Meeting.Forward;IPM.Outlook.Recall;IPM.Recall.Report.Success;IPM.Schedule.Meeting.*;REPORT.IPM.Note.NDR If I take that and try to mangle it into a new cmdlet for Ex2010 in my new environment here's what I get New-ManagedContentSettings -Name "Delete Messages older then 90 days" -FolderName "Entire Mailbox" -RetentionEnabled $True -AgeLimitForRetention 90 -TriggerForRetention WhenDelivered -RetentionAction DeleteAndAllowRecovery -MessageClass "IPM.Note","IPM.Note.AS/400MoveNotificationFormv1.0","IPM.Note.Delayed","IPM.Note.Exchange.ActiveSync.Report","IPM.Note.JournalReport.Msg","IPM.Note.JournalReport.Tnef","IPM.Note.Microsoft.Missed.Voice","IPM.Note.Rules.OofTemplate.Microsoft","IPM.Note.Rules.ReplyTemplate.Microsoft","IPM.Note.Secure.Sign","IPM.Note.SMIME","IPM.Note.SMIME.MultipartSigned","IPM.Note.StorageQuotaWarning","IPM.Note.StorageQuotaWarning.Warning","IPM.Notification.Meeting.Forward","IPM.Outlook.Recall","IPM.Recall.Report.Success","IPM.Schedule.Meeting.*","REPORT.IPM.Note.NDR" -whatif Invoke-Command : Cannot bind parameter 'MessageClass' to the target. Exception setting "MessageClass": "The length of t he property is too long. The maximum length is 255 and the length of the value provided is 518." At C:\Users\MFinnigan.sa\AppData\Roaming\Microsoft\Exchange\RemotePowerShell\pfexcas02.fve.ad.5ssl.com\pfexcas02.fve.ad .5ssl.com.psm1:28204 char:29 + $scriptCmd = { & <<<< $script:InvokeCommand ` + CategoryInfo : WriteError: (:) [New-ManagedContentSettings], ParameterBindingException + FullyQualifiedErrorId : ParameterBindingFailed,Microsoft.Exchange.Management.SystemConfigurationTasks.NewManaged ContentSettings So, the config object can store all that mess, but I can't fit it in through the cmdlet to create the object. Lovely. Any ideas?

    Read the article

  • Server 2012 intermittently fails to respond to pings from single host, even with firewall disabled, but responds to non-ICMP requests fine

    - by James Westbury
    This one is kind of weird. I've got the following machines involved:
    DC01 - 10.1.2.42, Server 2012, domain controller & DNS server, physical machine
    nagiosv - 10.1.2.35, CentOS 6.4, Nagios, virtual machine
    CB01 - 10.1.3.81, Ubuntu 12.04 LTS, couchbase server, virtual machine
    So, I noticed something was wrong while configuring this new Nagios VM. I started seeing DC01's state flapping. I logged into nagiosv when I saw this happening and attempted to ping DC01, both by FQDN and by its IP address. Neither worked. I tried pinging the machine from CB01, which is another VM on the same virtual switch/physical NIC as nagiosv, and that worked fine. Pings were still failing from nagiosv at this time. DC01 is also an internal DNS server, so I ran dig google.com from nagiosv and was able to run a query against DC01 just fine:
    ;; Query time: 1 msec
    ;; SERVER: 10.1.2.42#53(10.1.2.42)
    ;; WHEN: Fri Nov 1 07:53:51 2013
    ;; MSG SIZE rcvd: 204
    Pings were still failing from nagiosv, though. I can ping from DC01 to nagiosv, and that works, and I can still ping into DC01 from other VMs on the same physical NIC, and that works. I should mention at this point that I've disabled the firewall on DC01 for testing purposes, and it doesn't make a damned bit of difference. (Even with the firewall enabled, I have a blanket exception for ICMP from the local subnet, so it shouldn't make a difference, but I figured I should test it anyway.) I loaded up Wireshark on DC01 and pinged it from nagiosv again. What I see is a bunch of echo requests coming in and not a single reply going back out. Filtered results here, showing all ICMP traffic during a 15-second period. A few more bits of info:
    There are no IP conflicts on the network.
    MAC addresses on the incoming pings match the MAC on the VM.
    There are no duplicate MACs on the network, as far as I can see.
    I have absolutely no idea why DC01 is failing to respond here. Any ideas?
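    A couple of diagnostics that might narrow this down (a sketch, not a confirmed fix): since only one source host is affected and the echo requests arrive but get no replies, it is worth comparing the ARP entry DC01 holds for 10.1.2.35 against the MAC nagiosv actually uses, and clearing the cache if they differ (the interface name eth0 below is assumed).
    On DC01, in an elevated command prompt:
    arp -a | findstr 10.1.2.35
    netsh interface ip delete neighbors
    On nagiosv:
    ip link show eth0
    If the cached MAC on DC01 does not match, a stale or duplicate neighbor entry would explain replies being sent to the wrong place for that one host; after flushing, retest the ping and re-capture in Wireshark.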

    Read the article

  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:
    $ sudo ip rule show
    0: from all lookup local
    32766: from all lookup main
    32767: from all lookup default
    I can attach an elastic network interface to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and to the local routing table:
    $ sudo ip rule show
    0: from all lookup local
    32765: from 10.1.1.190 lookup 10003
    32766: from all lookup main
    32767: from all lookup default
    $ sudo ip route show table local
    broadcast 10.1.1.0 dev eth3 proto kernel scope link src 10.1.1.190
    local 10.1.1.190 dev eth3 proto kernel scope host src 10.1.1.190
    broadcast 10.1.1.255 dev eth3 proto kernel scope link src 10.1.1.190
    broadcast 10.1.3.0 dev eth0 proto kernel scope link src 10.1.3.12
    local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12
    broadcast 10.1.3.255 dev eth0 proto kernel scope link src 10.1.3.12
    broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
    local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
    local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
    broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but I didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI they'd have destination 10.1.3.12, i.e. the same as the local machine's IP, and such packets will now be routed as local packets matching the local routing policy:
    local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12
    I'd like all the packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:
    $ sudo ip route show table 10003
    default via 10.1.1.1 dev eth3
    I think I can do one of the following:
    1. I don't know if it's possible, but probably decrease the priority of the local table so the packets match table 10003.
    2. Use iptables to mangle these packets and update the local table route to include the mark information.
    But I'm not sure if these are the right approaches.
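    A rough sketch of the first option (untested; worth trying on a throwaway instance first, since reordering these rules carelessly can cut off local connectivity): re-add the catch-all local-table rule at a lower priority and place the per-source rule above it, so replies sourced from 10.1.1.190 consult table 10003 before the local table claims the 10.1.3.12 destination.
    # add the replacement rule before deleting the pref-0 original
    sudo ip rule add from all lookup local pref 100
    sudo ip rule del pref 0
    # make sure the ENI-source rule sits above the relocated local rule
    sudo ip rule add from 10.1.1.190 lookup 10003 pref 50
    sudo ip rule show
    The second option (iptables -t mangle with a fwmark plus an ip rule fwmark ... lookup 10003 entry) should also work, but the rule-reordering approach avoids marking packets at all.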

    Read the article

  • git private server error: "Permission denied (publickey)."

    - by goddfree
    I followed the instructions here in order to set up a private git server on my Amazon EC2 instance. However, I am having problems when trying to SSH into the git account. Specifically, I get the error "Permission denied (publickey)." Here are the permissions of my files/folders on the EC2 server:
    drwx------ 4 git git 4096 Aug 13 19:52 /home/git/
    drwx------ 2 git git 4096 Aug 13 19:52 /home/git/.ssh
    -rw------- 1 git git 400 Aug 13 19:51 /home/git/.ssh/authorized_keys
    Here are the permissions of my files/folders on my own computer:
    drwx------ 5 CYT staff 170 Aug 13 14:51 .ssh
    -rw------- 1 CYT staff 1679 Aug 13 13:53 .ssh/id_rsa
    -rw-r--r-- 1 CYT staff 400 Aug 13 13:53 .ssh/id_rsa.pub
    -rw-r--r-- 1 CYT staff 1585 Aug 13 13:53 .ssh/known_hosts
    When checking my logs in /var/log/secure, I used to get the following error message every time I tried to SSH:
    Authentication refused: bad ownership or modes for file /home/git/.ssh/authorized_keys
    However, after making a few permission changes, I no longer get this error message. Despite this, I am still getting the "Permission denied (publickey)." message every time I try to SSH. The command I am using to SSH is ssh -T git@my-ip. Here is the full log I get when I run ssh -vT [email protected]:
    OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
    debug1: Reading configuration data /etc/ssh_config
    debug1: /etc/ssh_config line 20: Applying options for *
    debug1: Connecting to my-ip [my-ip] port 22.
    debug1: Connection established.
    debug1: identity file /Users/CYT/.ssh/id_rsa type -1
    debug1: identity file /Users/CYT/.ssh/id_rsa-cert type -1
    debug1: identity file /Users/CYT/.ssh/id_dsa type -1
    debug1: identity file /Users/CYT/.ssh/id_dsa-cert type -1
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_6.2
    debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2
    debug1: match: OpenSSH_6.2 pat OpenSSH*
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-ctr [email protected] none
    debug1: kex: client->server aes128-ctr [email protected] none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Server host key: RSA 08:ad:8a:bc:ab:4d:5f:73:24:b2:78:69:46:1a:a5:5a
    debug1: Host 'my-ip' is known and matches the RSA host key.
    debug1: Found key in /Users/CYT/.ssh/known_hosts:1
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: Roaming not allowed by server
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey
    debug1: Next authentication method: publickey
    debug1: Trying private key: /Users/CYT/.ssh/id_rsa
    debug1: Trying private key: /Users/CYT/.ssh/id_dsa
    debug1: No more authentication methods to try.
    Permission denied (publickey).
    I have spent a few hours going through threads on various sites, including SO and SF, looking for a solution. It seems that the permissions for my files are all okay, but I just can't figure out the problem. Any help would be greatly appreciated.
    Edit: EEAA: Here are the outputs you requested:
    $ getent passwd git
    git:x:503:504::/home/git:/bin/bash
    $ grep ssh ~git/.ssh/authorized_keys | wc -l
    grep: /home/git/.ssh/authorized_keys: Permission denied
    0
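    A couple of things that may be worth checking, offered as a sketch rather than a confirmed diagnosis: the "type -1" and bare "Trying private key" lines suggest the client never successfully offers id_rsa, and on a CentOS/Amazon Linux host a hand-created ~/.ssh can also be blocked by SELinux labels even when the Unix permissions look right. The commands below assume the paths from the question.
    # On the client: force the key and compare its fingerprint with the entry
    # that was pasted into /home/git/.ssh/authorized_keys
    ssh -i ~/.ssh/id_rsa -vT git@my-ip
    ssh-keygen -lf ~/.ssh/id_rsa.pub
    # On the server, as root: restore SELinux contexts (if enforcing) and watch
    # the auth log while retrying the connection
    restorecon -R -v /home/git/.ssh
    tail -f /var/log/secure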

    Read the article

< Previous Page | 230 231 232 233 234 235 236 237 238 239 240 241  | Next Page >