Search Results

Search found 9696 results on 388 pages for 'proxy authentication'.

Page 230 of 388

  • OpenSwan + xl2tpd VPN: How can I share Internet connection

    - by Michael
    I have an OpenSwan IPSec + L2TP VPN set up on my Linux server so I can connect to it from my laptop (road-warrior setup). I am able to connect to the VPN remotely just fine; however, the internet connection is not shared. I'm assuming there is some sort of masquerading I am supposed to be doing, but I have no idea how to go about it (iptables?). Any help getting this working so I can essentially use my VPN connection as a proxy would be great. Thanks
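
    One common approach (a sketch, not from the original post; the client subnet 10.1.2.0/24 and public interface eth0 are assumptions to adjust to the xl2tpd ip range) is to enable forwarding and masquerade the clients' traffic with iptables:

        # let the kernel route between the VPN and the internet
        sysctl -w net.ipv4.ip_forward=1

        # NAT traffic from the VPN client pool out of the public interface
        iptables -t nat -A POSTROUTING -s 10.1.2.0/24 -o eth0 -j MASQUERADE

        # permit forwarded traffic to and from the pool
        iptables -A FORWARD -s 10.1.2.0/24 -j ACCEPT
        iptables -A FORWARD -d 10.1.2.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT

    Setting net.ipv4.ip_forward = 1 in /etc/sysctl.conf as well makes the change survive reboots.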

    Read the article

  • VirtualBox no network access

    - by Frantumn
    I'm on a work machine, setting up a virtual Ubuntu image using VirtualBox. After I installed the image, I can't seem to connect to the internet. If I look at Network and Sharing Center on my host OS (Windows 7), I see that the VirtualBox Host-Only Network reads "no network access". How can I set it up so that it uses the same network as the host OS? UPDATE: Alternatively, is there a way I can tell the VirtualBox host-only network to use a proxy script?
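
    A host-only adapter has no path to the outside by design, so one likely fix (a sketch; the VM name "ubuntu-vm" and adapter name are placeholders) is to switch the VM's adapter to NAT or bridged mode:

        # NAT: the guest shares the host's connection
        VBoxManage modifyvm "ubuntu-vm" --nic1 nat

        # or bridged: the guest appears as its own machine on the office LAN
        VBoxManage modifyvm "ubuntu-vm" --nic1 bridged --bridgeadapter1 "Local Area Connection"

    For the proxy-script question: NAT mode plus configuring the proxy inside the Ubuntu guest is the usual route; the host-only network itself has no proxy setting.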

    Read the article

  • SSH DNS issue giving break-in error

    - by psion
    Address ..*.* maps to ec2---*-*.compute-1.amazonaws.com, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT! I keep getting this when I try to log in to my remote server. I have it set for key authentication, and when this error comes through, I still have to push through the password. I want to use this for automated Git pulls, and I can't have this kind of error message. Does anybody know what is going on here and how to fix it?
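
    The warning itself comes from sshd's forward/reverse DNS cross-check, which EC2's generic PTR records routinely fail. A hedged sketch of the usual workaround (requires root on the server):

        # /etc/ssh/sshd_config -- skip the reverse-DNS consistency check
        UseDNS no

        # then reload sshd (the exact service command varies by distro)
        /etc/init.d/sshd reload

    Note that being prompted for a password at all suggests the key is not being accepted, which is a separate problem; permissions on ~/.ssh and authorized_keys on the server are the usual suspects.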

    Read the article

  • How to remote desktop into a sleeping workstation?

    - by Jake
    If an office workstation is turned on, whether awake, asleep, logged off, or logged in, is it possible to remote desktop into that workstation, given that I have full admin privilege on the Windows Server and the Active Directory governing the authentication? What settings are required to check the status of the workstation and remote desktop into it successfully? Thanks. NB: There is no one in the office; I connect via VPN to the Windows Server.
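
    RDP cannot reach a machine that is actually asleep; the usual trick is to wake it first with Wake-on-LAN, which must be enabled in the workstation's BIOS and NIC driver. A sketch with a placeholder MAC address, sent from any machine on the workstation's subnet (these particular tools are Linux; Windows equivalents exist):

        # either common tool works; both send the WoL "magic packet"
        wakeonlan 00:11:22:33:44:55
        etherwake -i eth0 00:11:22:33:44:55

    A workstation that is merely locked or logged off (not asleep) accepts RDP normally, as long as Remote Desktop is enabled and your account is in the Remote Desktop Users or Administrators group.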

    Read the article

  • nginx rewrite for /blah/(.*) /$1

    - by skrewler
    I'm migrating from mod_php to nginx. I got everything working except for this rewrite; I'm just not familiar enough with nginx configuration to know the correct way to do it. I came up with this by looking at a sample on the nginx site:

        server {
            server_name test01.www.myhost.com;
            root /home/vhosts/my_home/blah;
            access_log /var/log/nginx/blah.access.log;
            error_log /var/log/nginx/blah.error.log;
            index index.php;

            location / {
                try_files $uri $uri/ @rewrites;
            }

            location @rewrites {
                rewrite ^ /index.php last;
                rewrite ^/ht/userGreeting.php /js/iFrame/index.php last;
                rewrite ^/ht/(.*)$ /$1 last;
                rewrite ^/userGreeting.php$ /js/iFrame/index.php last;
                rewrite ^/a$ /adminLogin.php last;
                rewrite ^/boom\/(.*)$ /boom/index.php?q=$1 last;
                rewrite ^favicon.ico$ favico_ry.ico last;
            }

            # This block will catch static file requests, such as images, css, js.
            # The ?: prefix is a 'non-capturing' mark, meaning we do not require
            # the pattern to be captured into $1, which should help improve performance.
            location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
                # Some basic cache-control for static files to be sent to the browser
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            }

            include php.conf;
        }

    The issue I'm having is with this rewrite:

        rewrite ^/ht/(.*)$ /$1 last;

    99% of requests that will hit this rewrite are static files. So I think maybe it's getting sent to the static-files section and that's where things are being messed up? I tried adding this, but it didn't work:

        location ~* ^ht\/.*\.(?:ico|css|js|gif|jpe?g|png)$ {
            # Some basic cache-control for static files to be sent to the browser
            expires max;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    Any help would be appreciated. I know the best thing to do would be to just change the references of /ht/whatever.jpg to /whatever.jpg in the code, but that's not an option for now.
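
    One way this is often handled (a sketch, untested against this exact site) is to give /ht/ its own prefix location, so the static-file regex block never intercepts those URIs; rewrite ... last then restarts location matching with the stripped URI, which can land in the static block as a normal request. Note also that the URI nginx matches always begins with a slash, so a regex anchored at ^ht (as in the attempted location above) can never match:

        # ^~ makes this prefix match take priority over regex locations,
        # including the static-file block, so /ht/foo.css is rewritten first
        location ^~ /ht/ {
            rewrite ^/ht/(.*)$ /$1 last;
        }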

    Read the article

  • Two-Hop SSH connection with two separate public keys

    - by yigit
    We have the following ssh hop setup:

        localhost -> hub -> server

    hubuser@hub accepts the public key for localuser@localhost, and serveruser@server accepts the public key for hubuser@hub. So we are issuing

        ssh -t hubuser@hub ssh serveruser@server

    to connect to server. The problem with this setup is that we cannot scp directly to the server. I tried creating a .ssh/config file like this:

        Host server
            user serveruser
            port 22
            hostname server
            ProxyCommand ssh -q hubuser@hub 'nc %h %p'

    But I am not able to connect (yigit is localuser):

        $ ssh serveruser@server -v
        OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012
        debug1: Reading configuration data /home/yigit/.ssh/config
        debug1: /home/yigit/.ssh/config line 19: Applying options for server
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Executing proxy command: exec ssh -q hubuser@hub 'nc server 22'
        debug1: permanently_drop_suid: 1000
        debug1: identity file /home/yigit/.ssh/id_rsa type 1000
        debug1: identity file /home/yigit/.ssh/id_rsa-cert type -1
        debug1: identity file /home/yigit/.ssh/id_dsa type -1
        debug1: identity file /home/yigit/.ssh/id_dsa-cert type -1
        debug1: identity file /home/yigit/.ssh/id_ecdsa type -1
        debug1: identity file /home/yigit/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH_5*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: ECDSA cb:ee:1f:78:82:1e:b4:39:c6:67:6f:4d:b4:01:f2:9f
        debug1: Host 'server' is known and matches the ECDSA host key.
        debug1: Found key in /home/yigit/.ssh/known_hosts:33
        debug1: ssh_ecdsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/yigit/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /home/yigit/.ssh/id_dsa
        debug1: Trying private key: /home/yigit/.ssh/id_ecdsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Notice that it is trying to use the public key of localuser@localhost to authenticate on server, and failing, since it is not the right one. Is it possible to modify the ProxyCommand so that the key for hubuser@hub is used for authenticating on server?
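
    For what it's worth, with ProxyCommand the TCP stream is tunnelled through hub, but the SSH handshake with server is still performed by the local client, so it offers localhost's keys, never hub's. Two common ways out (a sketch; the copied key filename is hypothetical):

        # Option 1: copy hub's private key to localhost and point the config at it
        Host server
            user serveruser
            hostname server
            ProxyCommand ssh -q hubuser@hub 'nc %h %p'
            IdentityFile ~/.ssh/id_rsa_hub    # hypothetical local copy of hub's key

        # Option 2: keep the keys where they are and instead install
        # localuser's public key on server, so the local key works end-to-end:
        #     ssh hubuser@hub "ssh serveruser@server \
        #         'cat >> ~/.ssh/authorized_keys'" < ~/.ssh/id_rsa.pub

    After either change, scp to serveruser@server works directly through the ProxyCommand.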

    Read the article

  • How would I recognize the "spoon-feeding problem" on a dynamic webapp server?

    - by Don Spaulding
    The "spoon-feeding problem", as it was recently explained to me, happens when connections to your application server are tied up feeding data across slow network connections to your clients. This makes sense to me and now I understand the importance of putting a highly-concurrent proxy in front of my app servers. My question is, how did the first person to recognize this problem figure it out? What *nix tools and troubleshooting techniques would help me to recognize this problem if I hadn't had it explained to me?
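
    A hedged sketch of what spotting it can look like on Linux (port 8080 stands in for the app server): the signature is many long-lived ESTABLISHED connections whose socket send queues stay non-empty while application workers sit blocked on writes:

        # Send-Q persistently > 0 across many client connections suggests the
        # server is waiting on slow readers rather than doing real work
        ss -tno state established '( sport = :8080 )'

        # roughly the same view with older tools (column 3 is Send-Q)
        netstat -tn | awk '$4 ~ /:8080$/ && $3 > 0'

    Correlating that with a maxed-out worker/thread count from the app server's own status page is what usually clinches the diagnosis.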

    Read the article

  • Is forwarding spam dangerous for my domain's reputation?

    - by Memiux
    I have Postfix with SpamAssassin and forward all email (including spam) to gmail.com. My problem is that when I send "legitimate" emails to gmail.com they are marked as spam. I've done everything the guidelines say, like signing with DKIM, setting up SPF for my domains, requiring authentication for outbound mail, etc. Now I wonder: what am I doing wrong?
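
    Forwarded spam does count against the forwarding domain's reputation, so one commonly suggested mitigation (a sketch, assuming SpamAssassin adds the standard X-Spam-Flag header before Postfix forwards the message) is to stop forwarding flagged mail at all:

        # main.cf
        header_checks = regexp:/etc/postfix/header_checks

        # /etc/postfix/header_checks -- drop flagged mail instead of forwarding it
        /^X-Spam-Flag:\s+YES/   DISCARD spam-not-forwarded

    Using HOLD instead of DISCARD quarantines the mail locally, which is gentler if false positives are a concern.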

    Read the article

  • Does Tomcat or Jetty cache dynamic content?

    - by Continuation
    I'm working on a Servlet app with contents that are updated periodically. Hence, between updates any dynamic pages generated by the Servlet can be cached. Does Tomcat or Jetty (or any Servlet container) offer a way to cache dynamically generated pages? Or would I need to use a caching reverse proxy like Squid to accomplish that?
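
    As far as I know, neither container ships a full-page cache for dynamic output; the usual answers are a caching servlet filter (e.g. ehcache-web) or exactly the reverse proxy mentioned. A minimal sketch with nginx standing in for Squid, assuming the container listens on 8080 and pages may be reused for 5 minutes:

        proxy_cache_path /var/cache/nginx/app keys_zone=appcache:10m;

        server {
            listen 80;
            location / {
                proxy_pass        http://127.0.0.1:8080;
                proxy_cache       appcache;
                proxy_cache_valid 200 5m;   # matches the update period
            }
        }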

    Read the article

  • How to change the zone of an extended web application in SharePoint?

    - by Ryan
    I have extended a web application and set its zone to Intranet, but now we have decided to make that site anonymous. I made all the settings in the authentication provider and the site collection settings, but it's still showing its zone as Intranet. I read that it's best practice to keep Internet as the zone for anonymous access. How can I change it now, and is there any harm in leaving it as Intranet?

    Read the article

  • Share the same subnet between Internal network and VPN Clients

    - by Pascal
    I would like to set up a configuration where VPN clients connecting to my Forefront TMG can access all the resources of my Internal network without having to use the option "Use default gateway on remote network" in the VPN's TCP/IP IPv4 Advanced Settings. This is important to me, since they can then use their own internet connection while accessing my network through the VPN (the security implications of this are acceptable in my scenario). My Internal network runs on 10.50.75.x, and I set up Forefront TMG to relay the DHCP of my Internal network to the VPN clients, so they get IPs from the same range as the Internal network. This setup initially works: the VPN clients use their own internet, and can access anything that is on the internal network. However, after a while, HTTP proxy traffic from the Internal network starts getting routed to the IP of the RRAS Dial-In interface, instead of the IP of the Internal network's gateway. When this happens, the HTTP proxy starts getting denied, for obvious reasons.

    My first question is: does this happen because Forefront TMG wasn't designed to handle the scenario I described above, and it "loses itself"? My second question is: is there any way to solve this problem, either through configuration or firewall policies? My third question is: if there's no way this can work in the scenario above, is there another setup that will solve my problem and do what I'd like it to do properly?

    Below are my network routes:

        1 => Local Host Access => Route => Local Host => All Networks
        2 => VPN Clients to Internal Network => Route => VPN Clients => Internal
        3 => Internet Access => NAT => Internal, Perimeter, VPN Clients => External
        4 => Internal to Perimeter => Route => Internal, VPN Clients => Perimeter

    Thanks!

    Read the article

  • What are the best practices and tools for managing Windows desktops from a Linux server?

    - by JJ
    I know this is a loaded question! What are the best ways to manage Windows (2000, XP, Vista, Win7) workstations from a centralized Linux server? I would like to replace the functionality of MS SBS Server with a Linux box. The following issues would need to be addressed:

        - File Sharing
        - Authentication, Authorization, and Access Control
        - Software Installation
        - Centralized Login Script
        - Centralized Backup
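
    The classic core for most of these points is Samba acting as an NT4-style domain controller; a minimal smb.conf sketch (domain name and paths are placeholders, Samba 3 assumed):

        [global]
            workgroup = EXAMPLE
            security = user
            domain logons = yes
            domain master = yes
            logon script = logon.bat          ; centralized login script
            logon path = \\%N\profiles\%U     ; roaming profiles

        [netlogon]
            path = /srv/samba/netlogon
            read only = yes

    Software installation and backup are usually layered on separately, e.g. WPKG or OPSI for deployment and BackupPC or rsync for backups.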

    Read the article

  • Setting http auth type in phpMyAdmin on Debian

    - by Daniel Hollands
    I'm trying to set up the fresh phpMyAdmin install on my Debian 6 server to use HTTP authentication rather than the cookie-based auth that is the default when it is installed. To do this, I edited the $cfg['Servers'][$i]['auth_type'] line in /etc/phpmyadmin/config.inc.php to use 'http' as its setting, and restarted the server, but the setting seems to be ignored: when I go to phpMyAdmin, it still offers the regular login box. I've done this twice before (once on Debian and once on Ubuntu), so I'm not sure why it isn't working this time. Thank you.
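
    For reference, a sketch of the relevant line and the usual gotchas (the layout is Debian's stock config.inc.php):

        // /etc/phpmyadmin/config.inc.php -- the line must sit after the
        // $i++ for the server block actually in use, or it silently
        // applies to a different (unused) server entry
        $cfg['Servers'][$i]['auth_type'] = 'http';

    PHP re-reads its config on every request, so no restart is needed; a stale phpMyAdmin session cookie can also keep the cookie login box appearing, so clearing the site's cookies after the edit is worth a try.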

    Read the article

  • TrueCrypt System Favorite Volume doesn't mount automatically on boot

    - by Anders Hovgaard
    I've encrypted my system partition using TrueCrypt, and I've read that I can mount my encrypted data partition (a TrueCrypt volume) on boot by making it a "System Favorite" and giving it the same password as the system partition. However, it doesn't work, and I have to mount it manually every time. See this example. I've tried enabling "Cache pre-boot authentication password in driver memory (for mounting of non-system volumes)" in System > Settings, but that didn't change anything either. Any ideas?

    Read the article

  • What are your tricks for optimizing your Subversion configuration?

    - by Scott Markwell
    For a Linux or Windows system, what tricks do you use to optimize your Subversion server? The following are my current tricks for a Linux system serving over Apache with HTTPS, backed by Active Directory using LDAP authentication:

        - Enabling KeepAlive on Apache
        - Disabling SVNPathAuthz
        - Increasing the LDAP cache
        - Using the FSFS storage method instead of BDB

    Feel free to call this into question. I don't have hard proof that FSFS outperforms BDB, only lots of tribal knowledge and hearsay.
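
    For context, a sketch of where those knobs live in the Apache configuration (paths, LDAP URL, and cache sizes are illustrative only):

        KeepAlive On

        <Location /svn>
            DAV svn
            SVNParentPath /srv/svn
            SVNPathAuthz off        # skip per-path authorization checks
            AuthType Basic
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dc.example.com/dc=example,dc=com?sAMAccountName"
            Require valid-user
        </Location>

        # mod_ldap cache tuning (server-wide)
        LDAPSharedCacheSize 500000
        LDAPCacheEntries 1024
        LDAPCacheTTL 600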

    Read the article

  • Passwords longer than 8 letters in Red Hat 4

    - by Oz123
    I have some machines with RHEL4 Nahant Update 6. Oddly, I found that anything beyond the first 8 characters of a password is not stored. So if I had the password 1ABCDEa!, and I changed it to 1ABCDEa!1ABCDEa!, I could still log in to the machine with the old password. These machines use NIS authentication, but other machines with Red Hat 5, which use the same NIS server, allow login ONLY with the NEW password (16 characters long...)!
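
    A plausible explanation (hedged, but it fits the symptoms): the affected accounts are hashed with legacy DES crypt(), which silently ignores everything after the first 8 characters, while MD5 hashes have no such limit. A quick check and the usual fix:

        # 13-character hashes mean DES (8-char limit); "$1$..." means MD5
        ypcat passwd | cut -d: -f2 | head

        # on the affected machines / NIS master, switch hashing to MD5
        authconfig --enablemd5 --update

    Existing entries keep their old hash until the password is next changed, so each user needs a password reset for the change to take effect.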

    Read the article

  • Is there an application to open links on another computer?

    - by kbyrd
    I'm connecting to another computer via RDP. I would like to click on links inside my RDP session and have the links open in a browser on my client computer. It feels like I could install some application on both ends and have them communicate over TCP and proxy the URL opening. Does something like this exist?

    Read the article

  • Apache certificates for some URLs not working

    - by Vegaasen
    We are having a rather strange problem with an Apache installation. Here is a short summary: currently I'm setting up Apache with HTTPS and server certificates. This is fairly easy and works straight out of the box, as expected. This is the configuration for this setup:

        Listen 443
        SSLEngine on
        SSLCertificateFile "/progs/apache/ssl/example-site.no.pem"
        SSLCertificateKeyFile "/progs/apache/ssl/example-site.no.key"
        SSLCACertificateFile "/progs/apache/ssl/ca/example_root.pem"
        SSLCADNRequestFile "/progs/apache/ssl/ca/example_intermediate.pem"
        SSLVerifyClient none
        SSLVerifyDepth 3
        SSLOptions +StdEnvVars +ExportCertData
        RequestHeader set ssl-ClientCert-Subject-CN "%{SSL_CLIENT_S_DN}s"
        RewriteEngine On
        ProxyPreserveHost On
        ProxyRequests On
        SSLProxyEngine On
        ...
        <LocationMatch /secureStuff/$>
            SSLVerifyClient require
            Order deny,allow
            Allow from All
        </LocationMatch>
        ...
        <Proxy balancer://exBalancer>
            Header add Set-Cookie "EX_ROUTE=EB.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
            BalancerMember http://10.0.0.1:7200 route=ee1 retry=300 flushpackets=off keepalive=on
            BalancerMember http://10.0.0.2:7200 route=ee2 retry=300 flushpackets=off keepalive=on status=+H
            ProxySet stickysession=EX_ROUTE scolonpathdelim=Off timeout=10 nofailover=off failonstatus=505 maxattempts=1 lbmethod=bybusyness
            Order deny,allow
            Allow from all
        </Proxy>
        RewriteCond %{REQUEST_URI} !^/index.html [NC]
        RewriteRule ^/(.*)$ balancer://exBalancer/$1 [P,NC]
        ProxyPassReverse / balancer://exBalancer/
        Header edit Set-Cookie "(.*)" "$1;HttpsOnly"
        ...

    So, everything works fine and as expected for all of the pages that are not part of the LocationMatch directive. When requesting something that matches the LocationMatch directive, I'm asked for a certificate (hence the SSLVerifyClient require directive), and my browser offers all the correct certificates based on the root/intermediate chain. After choosing a certificate and clicking "OK", this is what pops up in the Apache logs:

        [ssl:info] [pid 9530:tid 25] [client :43357] AH01998: Connection closed to child 86 with abortive shutdown (
        [Thu Oct 11 09:27:36.221876 2012] [ssl:debug] [pid 9530:tid 25] ssl_engine_io.c(1171): (70014)End of file found: [client 10.235.128.55:45846] AH02007: SSL handshake interrupted by system [Hint: Stop button pressed in browser?!]

    And this just spams the logs. What is happening here? I can see this configuration working on my local machine, but not on one of our servers. There are no configuration differences between the servers, only minor application-wise changes. I've tried the following:

        1) Removing CA-certificate checking (works)
        2) Adding the required CA certificate for the whole site (works)
        3) Adding "SSLVerifyClient optional" (does not work)
        4) ++

    Server/application information:

        Local:
            - OpenSSL v1.0.1x
            - Apache 2.4.3
            - Ubuntu
            - mpm: event
            - every configuration should be turned on

        (Failing) server:
            - OpenSSL 0.9.8e
            - Apache 2.4.2
            - SunOS
            - mpm: worker
            - every configuration should be turned on

    Please let me know if more information is needed; I'll provide it instantly. Brief sum-up:

        - Running Apache 2.4
        - Server certificates work just fine
        - Client certificates for some /Locations do not work, failing with the errors above

    PS: Could it be related to the OpenSSL version and the "Renegotiation" stuff related to TLS/SSLv3?
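
    Regarding the PS: per-location SSLVerifyClient does force a TLS renegotiation mid-connection, and a hedged guess is that the failing box's OpenSSL 0.9.8e, which predates the RFC 5746 secure-renegotiation fix unless patched, is the real difference between the two machines. One way to check from any client (the hostname is a placeholder):

        openssl version -a
        echo | openssl s_client -connect failing-server.example.com:443 2>/dev/null \
            | grep -i renegotiation
        # "Secure Renegotiation IS NOT supported" in the output would
        # support the renegotiation theory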

    Read the article

  • Proper SSH keys location for a system user?

    - by Thibaut Barrère
    I have a system account with which I run a database (namely MongoDB). By default it has no home directory. Now I'd like to trigger scp commands from that account, with SSH key authentication to a remote server, to export backups. Should I just create /home/mongodb and /home/mongodb/.ssh folders manually to store the SSH keys, like the defaults for regular users? Is it still considered a system account after that? Thanks!
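
    That manual approach generally works, and the account stays a "system account" either way, since that status comes from its UID range rather than from having a home. A minimal sketch (run as root; the account name mongodb is taken from the question):

        mkdir -p /home/mongodb/.ssh
        usermod -d /home/mongodb mongodb        # point the account at its new home
        ssh-keygen -t rsa -N "" -f /home/mongodb/.ssh/id_rsa
        chown -R mongodb:mongodb /home/mongodb
        chmod 700 /home/mongodb/.ssh
        chmod 600 /home/mongodb/.ssh/id_rsa

    Backups can then run as, e.g., sudo -u mongodb scp /backups/dump.tar.gz backup@remote:/srv/backups/ once the public key is installed on the remote side (those names are placeholders).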

    Read the article

  • NGINX Cache Viewstate and Cookies

    - by user42833
    We are running nginx 0.7.65 on an Ubuntu 10 server. nginx is set up as a reverse proxy to an IIS website where the viewstate is passed through the headers. We want to set up the cache feature in nginx, but need to make sure it does not mess up the viewstate and cookies associated with each individual customer. Will adding this to the nginx.conf file fix this:

        proxy_pass_header Header

    or is there more that would need to be done? Thanks,
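
    Probably more: proxy_pass_header only controls which individual response headers are forwarded, and by itself does not make a cache cookie-safe. A conservative sketch of the usual safeguard (whether the 0.7.65 build already ships these cache directives should be verified, as the proxy cache was new around that branch):

        # inside the proxied location, alongside proxy_cache/proxy_cache_valid
        proxy_cache_bypass $http_cookie;   # don't answer from cache when cookies are present
        proxy_no_cache     $http_cookie;   # don't store responses to such requests

    This keeps per-customer (cookied) traffic out of the cache entirely while still caching anonymous requests.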

    Read the article

  • Linux with winbind, disable local users while AD is available?

    - by Salkin
    Routers and switches with RADIUS authentication can be configured such that login is disabled for locally configured users as long as the RADIUS server is available. If the RADIUS server becomes unavailable, they fall back to allowing login as a locally configured user. Is it possible to achieve the same effect with Linux machines using winbind to authenticate Active Directory users? I have a feeling it could be done with the right PAM configuration, but I'm not very far along on the PAM learning curve...
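
    An untested PAM sketch of that idea, e.g. for /etc/pam.d/system-auth (the control tokens follow pam.conf(5); pam_winbind reports an unreachable domain controller as authinfo_unavail, which is what the fallback hinges on):

        # winbind success          -> authentication complete
        # DC unreachable           -> ignore, fall through to local files
        # DC reachable but rejects -> die (local accounts locked out while AD is up)
        auth    [success=done authinfo_unavail=ignore default=die]    pam_winbind.so
        auth    required    pam_unix.so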

    Read the article

  • Server-to-server replication, CPU, and 32K/corrupt docs

    - by nick wall
    Summary: if a database contains a doc with the 32K issue, or a corrupt doc, then on server-to-server replication it causes a marked increase in CPU in the nserver.exe task, which effectively causes our server(s) to slow right down.

    We have a 5-server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on the network; they don't have dedicated network ports for cluster or replication traffic. I realise the IBM recommendation is a dedicated port for the cluster. Cluster queues are in tolerance, and under heavy application user load, i.e. when the maximum number of documents are being created, edited, and deleted, the replication times between servers are negligible. Normally, all is well. Of the servers in the cluster, 1 is considered the "hub", and it initiates a PUSH-PULL replication with its cluster mates every 60 minutes, so that the replication load is taken by the hub and not the cluster mates.

    The problem we have: every now and then we get a slow replication time from the hub to a cluster mate, sometimes up to 30 minutes. This maxes out the nserver.exe task on the cluster mate, which causes it to respond to HTTP requests very slowly. In the past, we have found that a corrupt document in the DB can have this effect, but on those occasions the server log will show the corrupt doc's noteId; we run fixup, and all is well. But we are not now seeing any record of corrupt docs. What we have noticed is that if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run:

        fixup mydb.nsf -V

    which shows it is purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem!

    Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events. I have set the replication time-out limit to 5 minutes (the max we usually see under full user load is 0.1 min), to prevent it replicating for 30 minutes as before. This is a temporary workaround. Does anyone know of a DDM event to trap the 32K issue? We could at least then send an alert.

    Regarding the 32K issue: this probably needs another thread, but we are finding it relatively hard to locate the source of the issue, as the 32K event is fairly rare. Our app is fairly complex, interacting with various other external web services, with 2-way data transfer. But if we do encounter a 32K doc, we can't look at field properties, so we can't work out which field has the issue, which would give us a clue as to which process is the culprit. As above, we run a fixup -V. Any help/comments on this would be gratefully received.

    Read the article
