Search Results

Search found 62215 results on 2489 pages for 'http basic authentication'.


  • Lemonldap::ng + OpenID: error when trying to generate

    - by spy86
    I am trying to configure authentication by OpenID in LemonLDAP::NG. When I try http://auth.example.com/openidserver/username, I see the following error:

        Unable to load Net::OpenID::Server Base class package "Net::OpenID::Server" is empty.
        (Perhaps you need to 'use' the module which defines that package first, or make that module available in @INC
        (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . /etc/httpd).
        at /usr/share/perl5/vendor_perl/Lemonldap/NG/Portal/OpenID/Server.pm line 9
        BEGIN failed--compilation aborted at /usr/share/perl5/vendor_perl/Lemonldap/NG/Portal/OpenID/Server.pm line 9, line 522.
        Compilation failed in require at /usr/share/perl5/vendor_perl/Lemonldap/NG/Portal/IssuerDBOpenID.pm line 40, line 522.

    Otherwise LemonLDAP::NG works. The server runs CentOS 6.4 and has all updates applied.
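    My guess is that the Net::OpenID::Server Perl module simply isn't installed on this box. I was planning to check and, if needed, install it along these lines (the yum package name is a guess on my part; cpan works regardless):

        # check whether the module can be loaded at all
        perl -MNet::OpenID::Server -e 'print $Net::OpenID::Server::VERSION, "\n"'

        # install from the distro repositories if packaged (name may differ), otherwise from CPAN
        yum install perl-Net-OpenID-Server
        cpan Net::OpenID::Server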

    Read the article

  • Forcing logon to Air Watch server upon joining wifi

    - by DKNUCKLES
    I'm setting up a wireless controller that I would like to leave unsecured. When a user connects to this network they need to be forwarded to a specific page where they can authenticate with the AirWatch system they have in place. Once authentication takes place, a profile will be downloaded to their device and we can administer the devices accordingly. I'm mulling over how I can force that page onto the user when they connect. The methodology I'm thinking of is creating a NAT rule for that VSC that forwards all port 80 and 443 traffic to the AirWatch server. Once they authenticate, a profile will be downloaded which will connect the device to a Virtual Access Point whose SSID isn't broadcast. Is this methodology correct, or can someone think of an easier / more efficient way of accomplishing this? The controller is an HP MSM720, for what it's worth.

    Read the article

  • What is Causing this IIS 7 Web Service Sporadic Connectivity Error?

    - by dpalau
    On sporadic occasions we receive the following error when attempting to call an .asmx web service from a .NET client application: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host." By sporadic I mean that it might occur zero times, once every few days, or a half-dozen times a day for some users. It never occurs on a user's first web service call, and the subsequent (usually identical) call always works immediately after the failure. The failures happen across a variety of methods in the service and usually occur 15-20 seconds (according to the log) after the request. Looking in the IIS site log for the particular call will show one or the other of the following Windows error codes:

        121: The semaphore timeout period has elapsed.
        1236: The network connection was aborted by the local system.

    Some additional environment details:

        - Running on an internal network web farm consisting of two servers running IIS7 on Windows Server 2008. These problems did not occur when running in an older IIS6 web farm of three servers on Windows Server 2003 (and we use a single IIS6/2003 instance for our development and staging environments with no issues). EDIT: Also, all of these server instances are VMware virtual machines, not sure if that is a surprise anymore or not.
        - The web service is a .NET 2.0/3.5 compiled .asmx web service that has its own application pool (.NET 2.0, integrated pipeline). Only has Windows Authentication enabled.
        - We have another web service on the farm that uses the same physical path as the primary service, the only difference being that Basic Authentication is enabled. This is used for a portion of our ERP system. Have tried using the same and different application pool - no effect on the error. This site isn't hit as often as the primary site and has never had an error.
        - As mentioned, the error only happens when called from the .NET client - not from other applications. The client application always creates a new web service object for each request and sets the service credentials to System.Net.CredentialCache.DefaultCredentials. The application is either deployed locally to a client or run in a Citrix server session. Those users running in Citrix don't seem to experience the issue, only locally deployed clients. The Citrix servers and the web farm are located in the same physical location and in the same IP range (10.67.xx.xx). Locally deployed clients experiencing the error are located elsewhere (10.105.xx.xx, 10.31.xx.xx).

    I've checked the OS logs to see if I can see any problems but nothing really sticks out. EDIT: Actually, I myself just ran into the error a little bit ago. I decided to check out the logs again and saw that there was a Security log entry of "Audit Failure" at the 'same' time (IIS log entry at 1:39:59, event log entry at 1:39:50). Not sure if this is a coincidence or not; I'll have to check out the logs of previous errors. I'm probably grasping at straws, but the details:

        Log Name: Security
        Source: Microsoft-Windows-Security-Auditing
        Date: 7/8/2009 1:39:50 PM
        Event ID: 5159
        Task Category: Filtering Platform Connection
        Level: Information
        Keywords: Audit Failure
        User: N/A
        Computer: is071019.<**.net
        Description: The Windows Filtering Platform has blocked a bind to a local port.
        Application Information:
            Process ID: 1260
            Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
        Network Information:
            Source Address: 0.0.0.0
            Source Port: 54802
            Protocol: 17
        Filter Information:
            Filter Run-Time ID: 0
            Layer Name: Resource Assignment
            Layer Run-Time ID: 36

    I've also tried to use Failed Request Tracing in IIS7, but the service call never actually gets to where FRT can capture it (even though the failure is logged in the web service log). The network infrastructure group said they checked the DNS and any NIC settings are correct, so there is no 'flapping'. Everything pans out. I'm not sure that they checked any domain controller servers, though, to see if that could be an issue. Any ideas? Or any other debugging strategies to get to the bottom of this? I'm just the developer in charge of the software and don't really have the knowledge on what to investigate from the networking side of things - although it does sound like a networking issue to me based on what is happening. Thanks in advance for any help.
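    One workaround I've seen mentioned for this exact message is to disable HTTP keep-alive on the client proxy so a stale pooled connection is never reused; roughly this, by overriding GetWebRequest in a partial class of the generated .asmx proxy (a sketch only - the class name is illustrative and I haven't tested it in our environment yet):

        // partial class extending the generated SoapHttpClientProtocol proxy (name is illustrative)
        public partial class MyWebService
        {
            protected override System.Net.WebRequest GetWebRequest(System.Uri uri)
            {
                // force a fresh connection per call instead of reusing a pooled one
                System.Net.HttpWebRequest request = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
                request.KeepAlive = false;
                return request;
            }
        }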

    Read the article

  • Just installed Sql Server 2008 and cannot access it

    - by Mendy
    I just installed SQL Server 2008 on a new server and I cannot connect to it. I'm trying to connect using Windows Authentication to localhost but I get the following error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=2&LinkId=20476
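    If it helps, this is roughly how I intend to test from a command prompt on the server itself (assuming a default instance; a named instance would need localhost\INSTANCENAME and a service name of MSSQL$INSTANCENAME instead):

        rem quick local connectivity test using Windows Authentication
        sqlcmd -S localhost -E -Q "SELECT @@SERVERNAME, @@VERSION"

        rem confirm the database engine service is actually running
        sc query MSSQLSERVER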

    Read the article

  • Implications of allowing Windows clients to use NTLMv1?

    - by Boden
    I have a web application that I'd like to authenticate to using pass-through NTLM for SSO. There is a problem, however, in that NTLMv2 apparently will not work in this scenario (without the application storing an identical password hash). I enabled NTLMv1 on one client machine (Vista) using its local group policy: Computer - Windows Settings - Security Settings - Network Security: LAN Manager authentication level. I changed it to "Send LM & NTLM - use NTLMv2 session security if negotiated". This worked, and I'm able to log in to the web application using NTLM. Now this application would be used by all of my client machines, so I'm wondering what the security risks are if I were to push this policy out to all of them (not to the domain controller itself, though)?

    Read the article

  • apache2: Require valid-user for everything except "special_page"

    - by matt wilkie
    With Apache2, how may I require a valid user for every page except these special pages, which can be seen by anybody? Thanks in advance for your thoughts. Update in response to comments; here is a working apache2 config:

        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            allow from all
        </Directory>

        # require authentication for everything not specifically excepted
        <Location / >
            AuthType Basic
            AuthName "whatever"
            AuthUserFile /etc/apache2/htpasswd
            Require valid-user
            AllowOverride all
        </Location>

        # allow standard apache icons to be used without auth (e.g. MultiViews)
        <Location /icons>
            allow from all
            Satisfy Any
        </Location>

        # anyone can see pages in this tree
        <Location /special_public_pages>
            allow from all
            Satisfy Any
        </Location>

    Read the article

  • Windows 7 / Windows Vista won't connect to 802.1x RADIUS Server

    - by Calvin Froedge
    I've deployed RADIUS and have no problems connecting with TTLS, PEAP, or MD5 using Linux, Mac, and Windows XP clients. For Windows 7 and Vista, I'm never prompted with the dialog box to enter username & password after configuring 802.1x support on the client. Steps taken:

        - Enabled Wired AutoConfig in services.msc
        - Set to use PEAP
        - Set to require user authentication

    When I enable the network connection it says "Trying to authenticate", then fails with no error log / message given. The RADIUS server gives no indication that there was ever a request (no Access-Reject - the client simply never tries to authenticate). On the Windows 7 client, I can see that the DHCP server does not assign an IP to the client when 802.1x is enabled on the client (though it does when it isn't). How can I debug this further? Has anyone else run into a similar situation? My RADIUS server is FreeRADIUS on Ubuntu 11.10.

    Read the article

  • Rip authentication from LDAP to local

    - by oxinabox
    We are taking a small portion of our network offline and running it as a separate network. (By small portion I mean 2 servers, which will be connected to 30-odd boxes that aren't usually part of our network and don't need to authenticate.) I intend to create a VM on one of the servers to provide general user services: an IRC server, remote shell, etc. And I would like the users to be able to use their usual server login details. Problem is, the LDAP server that normally checks those details is not one of those servers. So I need to be able to somehow take their details off LDAP and put them on the server that is coming with us. One suggestion I had was to set up an LDAP server on the VM locally and clone the LDAP database onto it (using something called slapcat) - is this the best way? Or can I change the LDAP data into local authentication data?
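    The slapcat route I'm considering would look roughly like this (run as root; the output file name is a placeholder, and the database directory ownership user differs by distro):

        # on the existing LDAP server: dump the directory to LDIF
        slapcat -l users.ldif

        # on the new VM, with slapd installed and configured but stopped:
        service slapd stop
        slapadd -l users.ldif
        chown -R openldap:openldap /var/lib/ldap    # e.g. ldap:ldap on CentOS
        service slapd start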

    Read the article

  • Developer Laptop with SQL Server 2008 can't login to SSIS when offsite

    - by wizlb
    When I bring my Windows XP (SP3) laptop home I can still log in with my domain account because Windows caches the info necessary to authenticate me when the domain controller isn't around. However, when I try to connect to Integration Services from within SQL Server Management Studio, it generates SSPI context errors. The only way it works is if I connect to the office with VPN or if I'm at the office where the domain controller is. I have both SQL Server Agent and SQL Server Integration Services 10 running under local computer accounts. It seems that the only option for connecting to Integration Services from within Management Studio is to use Windows authentication. Is there any way to do this when I'm not connected to the office? Why don't these services use the cached info just like the Windows login? Thanks.

    Read the article

  • CentOS PAM+LDAP login and host attribute

    - by pianisteg
    My system is CentOS 6.3, OpenLDAP is configured well, and PAM authentication works fine. But after turning pam_check_host_attr to yes, all LDAP logins fail with the message "Access denied for this host". hostname on the server returns the correct value, and the same value is listed in the user's profile. "pam_check_host_attr no" works fine and allows everyone with a correct uid/password. A piece of /var/log/secure:

        Sep 26 05:33:01 ldap sshd[1588]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=my-host user=my-username
        Sep 26 05:33:01 ldap sshd[1588]: Failed password for my-username from 77.AA.BB.CC port 58528 ssh2
        Sep 26 05:33:01 ldap sshd[1589]: fatal: Access denied for user my-username by PAM account configuration

    Another two servers (CentOS 5.7, Debian) authenticate against this LDAP server correctly - even with pam_check_host_attr yes! I didn't edit /etc/security/access.conf; it is empty, only the default comments. I don't know what to do! How do I fix this?
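    To double-check what the host attribute actually contains for my user, and what value the server will compare it against, I was going to run something like this (the base DN is a placeholder):

        # what hostname will be compared against the host attribute?
        hostname
        hostname -f

        # what host values does the LDAP entry actually carry?
        ldapsearch -x -b "dc=example,dc=com" "(uid=my-username)" host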

    Read the article

  • Tell git which private key to use

    - by jrdioko
    ssh has the -i option to tell it which private key file to use when authenticating:

        -i identity_file
            Selects a file from which the identity (private key) for RSA or DSA authentication is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_rsa and ~/.ssh/id_dsa for protocol version 2. Identity files may also be specified on a per-host basis in the configuration file. It is possible to have multiple -i options (and multiple identities specified in configuration files).

    Is there a similar way to tell git which private key file to use when on a system with multiple private keys in the .ssh directory?
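    Something along these lines in ~/.ssh/config is what I'm imagining, with git pointed at the host alias instead of the real hostname (the alias, hostname, and key filename below are made up for the example):

        # ~/.ssh/config
        Host github-work
            HostName github.com
            User git
            IdentityFile ~/.ssh/id_rsa_work
            IdentitiesOnly yes    # don't offer the other keys first

        # then reference the alias in the remote URL:
        #   git clone git@github-work:someorg/somerepo.git

    Since git just shells out to ssh, the per-host IdentityFile should apply to clone, fetch, and push alike - but I'd like to know if there is a more git-native way.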

    Read the article

  • Good Shibboleth tutorials out there?

    - by fgysin
    I am looking into using Shibboleth for authentication of web applications at my organisation. I am very new to this subject and would like to read through some good tutorials, hands-on lessons, or whatever is out there to help newbies get to know Shibboleth. But so far I have not been able to find any tutorials that contain specific examples for each step. I would like to get a running setup somehow so I will be able to play around with it... What I have found up to now: Official Documentation for Shibboleth 2 -- https://spaces.internet2.edu/display/SHIB2/Installation I would appreciate any hints you can give me about additional information on Shibboleth.

    Read the article

  • Implementing a form of port knocking + Phone Factor = 2 Factor auth for RDP?

    - by jshin47
    I have been looking into how to secure a publicly-available RDP endpoint and want to implement our two-factor authentication RADIUS server, PhoneFactor. I would like to implement the following process:

        - User opens up web app in browser
        - In web app, user enters username + password, initiates RADIUS auth
        - PhoneFactor calls user to complete auth
        - Once user is authenticated, port 3389 is opened on user's IP on pfSense firewall
        - After some amount of time, firewall rule is removed for that IP

    I would like to know the following: Is this a typical setup? If it is a bad idea, please explain why. If it is possible, are there any packages that assist with this? Specifically, the third step, where the appropriate firewall rule would need to be added... Edit: I am aware of TS Web Gateway, but I want the users to be able to use the traditional RDP client...

    Read the article

  • How to send connection type (SSH|Telnet) info in Radius Access Requests on Cisco router?

    - by Gianni Costanzi
    I've configured the following on a Cisco router:

        aaa authentication login default group radius local
        !
        radius-server host x.x.x.x auth-port 1012 acct-port 1013
        radius-server host y.y.y.y auth-port 1012 acct-port 1013
        radius-server retransmit 1
        radius-server timeout 3
        radius-server key 7 xxxxxxxxx

    I'd like to be able to specify some RADIUS options in order to add information about the type of connection for which a user is being authenticated, i.e. I'd like the RADIUS server to receive, in the Cisco router's RADIUS Access-Request, information about the connection being SSH or Telnet. I'd like to find something that automatically adds this info to the Access-Request, without specific configurations on VTY lines dedicated to SSH and to Telnet. Any idea about that?

    Read the article

  • Apache ProxyPass/ProxyPassReverse to IIS

    - by Dana
    We have an ASP.NET web application which is mapped to a folder on an Apache-hosted PHP site using ProxyPass/ProxyPassReverse. A couple of problems are being encountered: cookies are being lost, which breaks the site navigation; this can be overcome by setting the ASP app as cookieless. Forms authentication is used on the ASP site; this is also broken with the ProxyPass in place, and I suspect this is cookie-related as well. The ASP site works OK when run from a domain/IP address. Use of a separate domain / sub-domain is not an option due to client requirements.
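    I'm wondering whether the cookie problems could be addressed by also rewriting the cookie domain and path on the way back through the proxy; roughly what I have in mind (hostnames and paths below are placeholders):

        # inside the VirtualHost that fronts the PHP site
        ProxyPass        /aspapp/ http://iis-server.internal/aspapp/
        ProxyPassReverse /aspapp/ http://iis-server.internal/aspapp/

        # rewrite the Domain= and Path= attributes of Set-Cookie headers coming back from IIS,
        # so Forms authentication / session cookies stay valid under the Apache hostname
        ProxyPassReverseCookieDomain iis-server.internal www.example.com
        ProxyPassReverseCookiePath   /aspapp /aspapp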

    Read the article

  • Picking up a lot of failed authentications for various accounts

    - by Josh K
    My server is getting a lot of failed authentication attempts for various accounts. The most common one (that I've seen) is for the root account. I have since enabled Fail2Ban and run several rootkit / malware checks to ensure I wasn't compromised. Is there anything else I should do? I only have three accounts enabled, and SSH access for only two. I have a full 48hr ban on anyone making more than six failed SSH login attempts. I do not have FTP enabled.
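    For reference, the sshd_config tightening I'm considering on top of Fail2Ban looks roughly like this (a sketch only; the usernames are placeholders, and I'd keep an open session while testing the reload):

        # /etc/ssh/sshd_config
        PermitRootLogin no            # root can't be brute-forced directly
        PasswordAuthentication no     # key-based logins only
        AllowUsers alice bob          # only these two accounts over SSH

        # then reload sshd (command varies by distro):
        #   service ssh reload    or    service sshd reload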

    Read the article

  • Unable to set NTFS permissions for ApplicationPoolIdentity on Windows 2008 SP2

    - by Kev
    On Windows 2008 R2 I am able to set NTFS permissions for an application pool's synthesised ApplicationPoolIdentity account thus: ICACLS d:\websites\site1\www /grant "IIS AppPool\site1":(CI)(OI)(M) The website's application pool is named site1 and is configured to run as ApplicationPoolIdentity. The site's authentication is also configured to authenticate as ApplicationPoolIdentity. I've done this a thousand times on Windows 2008 Standard Edition R2 with never a hitch. However if I try to do the same in Windows 2008 Standard Edition SP2 I get the error: IIS AppPool\site1: No mapping between account names and security IDs was done. Successfully processed 0 files; Failed processing 1 files I also notice that this fails if I try to set permissions for the application pool identity via the security GUI as well. I've seen this before and a reboot has cleared this issue but I'd like to know why this happens periodically. Googling around suggests other folks have hit this problem but there's never a satisfactory explanation. Why would this be?

    Read the article

  • Chrome - Why am I automatically authenticated to a web app even after clearing browser cookies?

    - by Howiecamp
    I am accessing a web application using Chrome. If I sign out of the app and clear all Chrome history/cookies/etc. (even Flash cookies, which are now handled by Chrome in the same Clear History area) and then re-access the site, I am automatically logged in without being prompted for credentials. I then launched Chrome in Incognito mode and was able to reproduce the same behavior; however, I was prompted on the first logon while in Incognito mode. The web application behaves as expected in Internet Explorer 10. Some info about the application: It's a SharePoint site using NTLM authentication. The credentials are Active Directory-based, as the username is domain\username. My connection is over the Internet and there is no AD relationship between my local Windows account or my Windows PC and their domain. In other words, I (meaning my locally logged-on user and my PC) am not in any way part of their AD domain. The site is running SSL on port 443. Why might Chrome be automatically authenticating me?

    Read the article

  • duplicate cache pages: Varnish

    - by Sukhjinder Singh
    Recently we configured Varnish on our server. It was set up successfully, but we noticed that if we open any page in multiple browsers, Varnish sends the request to Apache no matter whether the page is cached or not. If we refresh twice in each browser it creates duplicate copies of the same page. What exactly should happen: if a page is cached by Varnish, subsequent requests should be served from Varnish itself, whether we open the same page in another browser or from a different IP address. Following is my default.vcl file:

        backend default {
            .host = "127.0.0.1";
            .port = "80";
        }

        sub vcl_recv {
            if (req.url ~ "^/search/.*$") {
            } else {
                set req.url = regsub(req.url, "\?.*", "");
            }
            if (req.restarts == 0) {
                if (req.http.x-forwarded-for) {
                    set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
                } else {
                    set req.http.X-Forwarded-For = client.ip;
                }
            }
            if (!req.backend.healthy) {
                unset req.http.Cookie;
            }
            set req.grace = 6h;
            if (req.url ~ "^/status\.php$" ||
                req.url ~ "^/update\.php$" ||
                req.url ~ "^/admin$" ||
                req.url ~ "^/admin/.*$" ||
                req.url ~ "^/flag/.*$" ||
                req.url ~ "^.*/ajax/.*$" ||
                req.url ~ "^.*/ahah/.*$") {
                return (pass);
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset req.http.Cookie;
            }
            if (req.http.Cookie) {
                set req.http.Cookie = ";" + req.http.Cookie;
                set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
                set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
                set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
                set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
                if (req.http.Cookie == "") {
                    unset req.http.Cookie;
                } else {
                    return (pass);
                }
            }
            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                return (pipe);
            }
            /* Non-RFC2616 or CONNECT which is weird. */
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);
            }
            if (req.http.Accept-Encoding) {
                if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
                    # No point in compressing these
                    remove req.http.Accept-Encoding;
                } else if (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } else if (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    # unknown algorithm
                    remove req.http.Accept-Encoding;
                }
            }
            return (lookup);
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Varnish-Cache = "HIT";
            } else {
                set resp.http.X-Varnish-Cache = "MISS";
            }
        }

        sub vcl_fetch {
            if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
                set beresp.ttl = 10m;
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset beresp.http.set-cookie;
            }
            set beresp.grace = 6h;
        }

        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            } else {
                hash_data(server.ip);
            }
            return (hash);
        }

        sub vcl_pipe {
            set req.http.connection = "close";
        }

        sub vcl_hit {
            if (req.request == "PURGE") { ban_url(req.url); error 200 "Purged"; }
            if (!obj.ttl > 0s) { return (pass); }
        }

        sub vcl_miss {
            if (req.request == "PURGE") { error 200 "Not in cache"; }
        }

    Read the article

  • How to set up custom 401 error page or redirect in WSS3 SP2

    - by Stacy Vicknair
    I've got a WSS3 SharePoint site that requires Windows authentication both in IIS and via the SharePoint site. What I would like to do is, in the case that a user does not provide valid AD credentials, redirect them to a custom error page. Currently, if the user immediately hits cancel when prompted, they will see a plain-text response of "401 UNAUTHORIZED". If they make an attempt and then hit cancel, they instead see a blank page. I have looked into several options such as customErrors, httpModule interception (I only saw examples for this after the user is authenticated), and IIS URL rewrites (didn't see how this could help). Is there a good way to do this?
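    The kind of thing I've been experimenting with is an IIS7 httpErrors override in the site's web.config (a sketch only; the error page path is a placeholder and I'm not yet sure how this interacts with SharePoint's own 401 handling):

        <system.webServer>
          <httpErrors errorMode="Custom" existingResponse="Replace">
            <!-- replace the plain-text 401 response with a friendlier page -->
            <remove statusCode="401" subStatusCode="-1" />
            <error statusCode="401" path="/_layouts/Custom401.aspx" responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>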

    Read the article

  • How to control remote access to Sonicwall VPN beyond passwords?

    - by pghcpa
    I have a SonicWall TZ-210. I want an extremely easy way to limit external remote access to the VPN beyond just username and password, but I do not wish to buy/deploy an OTP appliance because that is overkill for my situation. I also do not want to use IPSec because my remote users are roaming. I want the user to be in physical possession of something, whether that is a pre-configured client with an encrypted key or a certificate (.cer/.pfx) of some sort. SonicWall used to offer "Certificate Services" for authentication, but apparently discontinued that a long time ago. So, what is everyone using in its place? Beyond the "Fortune 500" expensive solution, how do I limit access to the VPN to only those users who have possession of a certificate file, or some other file, or something beyond passwords? Thanks.

    Read the article

  • Managing service passwords with Puppet

    - by Jeff Ferland
    I'm setting up my Bacula configuration in Puppet. One thing I want to do is ensure that each password field is different. My current thought is to hash the hostname with a secret value; that would ensure each file daemon has a unique password, and that password can be written to both the director configuration and the file server. I definitely don't want to use one universal password, as that would permit anybody who might compromise one machine to get access to any machine through Bacula. Is there another way to do this other than using a hash function to generate the passwords? Clarification: This is NOT about user accounts for services. This is about the authentication tokens (to use another term) in the client / server files. Example snippet:

        Director {                           # define myself
            Name = <%= hostname %>-dir
            QueryFile = "/etc/bacula/scripts/query.sql"
            WorkingDirectory = "/var/lib/bacula"
            PidDirectory = "/var/run/bacula"
            Maximum Concurrent Jobs = 3
            Password = "<%= somePasswordFunction %>"     # Console password
            Messages = Daemon
        }
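    Roughly what I have in mind inside the ERB template - a sketch only, where bacula_secret is a Puppet variable I'd define myself and keep out of version control:

        <%# sketch: derive a per-host password from a secret defined elsewhere plus the hostname fact %>
        Password = "<%= require 'digest/sha2';
                        Digest::SHA256.hexdigest(scope.lookupvar('bacula_secret') + scope.lookupvar('hostname')) %>"

    Using the same expression in the director's Client resource template and in the file daemon's own template should yield matching passwords for each host without storing any of them anywhere, which is really the whole point of the hash approach.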

    Read the article

  • Cannot access folder locally, but can remotely

    - by Cylindric
    I'm having a peculiar problem on one of my servers at the moment, which seems to be related to authentication in some way, but I have no idea how to find the root of the problem. I have a folder on the server, D:\Somefolder\Logs. If I am connected to the server via an RDP session, so essentially "local", I cannot access that folder - I get an access denied. If I am connecting from my machine to it using the share \\server\d$\Somefolder\Logs, I can access it. I'm logging in to both machines as the same user. Permissions on the folder seem quite simple: CREATOR OWNER, SYSTEM and Administrators. I am a Domain Admin, and Domain Admins are in the local "Administrators" group. It is also affecting things like access to SQL Server, so I don't think it's a simple folder-permissions thing. For example, I cannot connect SQL Management Studio to any of the local SQL instances using a domain account, but I can if I connect remotely to it.

    Read the article

  • iptables, allow access from certain MAC addresses

    - by user788171
    Presently, I limit which clients can access my server by using IP addresses via iptables; only approved IP addresses can connect. However, the problem with this is that if a client is on a laptop and goes to a different location, they can no longer connect because the IP has changed. For a variety of reasons, iptables authentication is the only option I have. Is there a way to restrict access by device instead of IP address? For instance, only allow certain MAC addresses to connect to port 5000. Is it possible to do this via iptables? Note, the computers are not on the same network; they could be connecting from anywhere in the world.
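    Something like this is what I had in mind (the MAC address below is a placeholder), though from what I've read the mac match only sees the source MAC on the local Ethernet segment - once traffic crosses a router the client's MAC is replaced by the router's - so I'm not sure it can work for clients connecting from anywhere in the world:

        # allow a specific device to reach port 5000, drop everyone else
        iptables -A INPUT -p tcp --dport 5000 -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT
        iptables -A INPUT -p tcp --dport 5000 -j DROP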

    Read the article
