Search Results

Search found 9696 results on 388 pages for 'proxy authentication'.


  • configure a Cisco ASA to use MS-CHAP v2 for RADIUS authentication

    - by DrStalker
    Cisco ASA 5505 8.2(2), Windows 2003 AD server. We want to configure our ASA (10.1.1.1) to authenticate remote VPN users through RADIUS on the Windows AD controller (10.1.1.200). We have the following entry on the ASA:

        aaa-server SYSCON-RADIUS protocol radius
        aaa-server SYSCON-RADIUS (inside) host 10.1.1.200
          key *****
          radius-common-pw *****

    When I test a login using the account COMPANY\username I see the user's credentials are correct in the security log, but I get the following in the Windows system logs:

        User COMPANY\myusername was denied access.
        Fully-Qualified-User-Name = company.com/CorpUsers/AU/My Name
        NAS-IP-Address = 10.1.1.1
        NAS-Identifier = <not present>
        Called-Station-Identifier = <not present>
        Calling-Station-Identifier = <not present>
        Client-Friendly-Name = ASA5510
        Client-IP-Address = 10.1.1.1
        NAS-Port-Type = Virtual
        NAS-Port = 7
        Proxy-Policy-Name = Use Windows authentication for all users
        Authentication-Provider = Windows
        Authentication-Server = <undetermined>
        Policy-Name = VPN Authentication
        Authentication-Type = PAP
        EAP-Type = <undetermined>
        Reason-Code = 66
        Reason = The user attempted to use an authentication method that is not enabled on the matching remote access policy.

    My assumption is that the ASA is using PAP authentication instead of MS-CHAP v2; the credentials are confirmed and the proper Remote Access Policy is being used, but that policy is set to only allow MS-CHAP v2. What do we need to do on the ASA to make it use MS-CHAP v2? In the ASDM GUI the "Microsoft CHAP v2 compatible" tickbox is enabled, but I don't know what this corresponds to in the config.
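
    A sketch of the CLI pieces that are usually involved on an 8.2 ASA; the tunnel-group name below is hypothetical, so verify the commands against your own configuration:

        ! The ASDM "Microsoft CHAP v2 compatible" checkbox maps to this host option
        ! (it is on by default, so make sure "no mschapv2-capable" is not present)
        aaa-server SYSCON-RADIUS (inside) host 10.1.1.200
          mschapv2-capable
        ! Enabling password-management on the tunnel group is what makes the ASA
        ! send MS-CHAPv2 instead of PAP in its RADIUS requests
        tunnel-group REMOTE-VPN general-attributes
          authentication-server-group SYSCON-RADIUS
          password-management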

    Read the article

  • Postfix SASL Authentication using PAM_Python

    - by Christian Joudrey
    Cross-post from: http://stackoverflow.com/questions/4337995/postfix-sasl-authentication-using-pam-python Hey guys, I just set up a Postfix server in Ubuntu and I want to add SASL authentication using PAM_Python. I've compiled pam_python.so and made sure that it is in /lib/security. I've also created the /etc/pam.d/smtp file and added:

        auth required pam_python.so test.py

    The test.py file has been placed in /lib/security and contains:

        # Duplicates pam_permit.c
        DEFAULT_USER = "nobody"

        def pam_sm_authenticate(pamh, flags, argv):
            try:
                user = pamh.get_user(None)
            except pamh.exception, e:
                return e.pam_result
            if user == None:
                pamh.user = DEFAULT_USER
            return pamh.PAM_SUCCESS

        def pam_sm_setcred(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_acct_mgmt(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_open_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_close_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_chauthtok(pamh, flags, argv):
            return pamh.PAM_SUCCESS

    When I test the authentication using auth plain amltbXkAamltbXkAcmVhbC1zZWNyZXQ= I get the following response: 535 5.7.8 Error: authentication failed: no mechanism available. In the Postfix logs I have this:

        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication problem: unknown password verifier
        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication failure: Password verification failed
        Dec 2 00:37:19 duo postfix/smtpd[16487]: warning: localhost.localdomain[127.0.0.1]: SASL plain authentication failed: no mechanism available

    Any ideas? tl;dr: Does anyone have step-by-step instructions on how to set up PAM_Python with Postfix? Christian
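
    The "unknown password verifier" / "no mechanism available" errors usually point at the Cyrus SASL layer rather than at PAM itself, i.e. smtpd is never getting as far as /etc/pam.d/smtp. A sketch of the pieces that typically have to be in place first (paths assume Debian/Ubuntu packaging):

        # /etc/postfix/sasl/smtpd.conf  -- Cyrus SASL config for the smtpd service
        pwcheck_method: saslauthd
        mech_list: PLAIN LOGIN

        # /etc/default/saslauthd  -- make saslauthd authenticate via PAM
        START=yes
        MECHANISMS="pam"

        # /etc/postfix/main.cf
        smtpd_sasl_type = cyrus
        smtpd_sasl_auth_enable = yes

    One further Ubuntu-specific gotcha: Postfix runs chrooted, so saslauthd's socket normally has to live under /var/spool/postfix/var/run/saslauthd (set via the OPTIONS line in /etc/default/saslauthd) for smtpd to reach it.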

    Read the article

  • SSH Public Key - No supported authentication methods available (server sent public key)

    - by F21
    I have a 12.10 server set up in a virtual machine with its network set to bridged (essentially it will be seen as a computer connected to my switch). I installed openssh via apt-get and was able to connect to the server using PuTTY with my username and password. I then set about trying to get it to use public/private key authentication. I did the following: generated the keys using PuTTYgen; moved the public key to /etc/ssh/myusername/authorized_keys (I am using encrypted home directories); set up sshd_config like so:

        PubkeyAuthentication yes
        AuthorizedKeysFile /etc/ssh/%u/authorized_keys
        StrictModes no
        PasswordAuthentication no
        UsePAM yes

    When I connect using PuTTY or WinSCP, I get an error saying "No supported authentication methods available (server sent public key)". If I run sshd in debug mode, I see:

        PAM: initializing for "username"
        PAM: setting PAM_RHOST to "192.168.1.7"
        PAM: setting PAM_TTY to "ssh"
        userauth-request for user username service ssh-connection method publickey [preauth]
        attempt 1 failures 0 [preauth]
        test whether pkalg/pkblob are acceptable [preauth]
        Checking blacklist file /usr/share/ssh/blacklist.RSA-1023
        Checking blacklist file /etc/ssh/blacklist.RSA-1023
        temporarily_use_uid: 1000/1000 (e=0/0)
        trying public key file /etc/ssh/username/authorized_keys
        fd4 clearing O_NONBLOCK
        restore_uid: 0/0
        Failed publickey for username from 192.168.1.7 port 14343 ssh2
        Received disconnect from 192.168.1.7: 14: No supported authentication methods available [preauth]
        do_cleanup [preauth]
        monitor_read_log: child log fd closed
        do_cleanup
        PAM: cleanup

    Why is this happening and how can I fix this?
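
    One frequent cause with keys generated by PuTTYgen is the public key format: the file PuTTYgen saves with "Save public key" is in the SSH2/RFC 4716 format, which sshd does not accept in authorized_keys, and the debug output above shows sshd finding the file but still failing the key. A sketch of converting it and sanity-checking the file (file names are hypothetical):

        # convert the PuTTYgen-saved public key to the OpenSSH one-line format
        ssh-keygen -i -f putty_key.pub >> /etc/ssh/myusername/authorized_keys

        # ownership/permissions should still be sane even with StrictModes off
        chown myusername: /etc/ssh/myusername/authorized_keys
        chmod 644 /etc/ssh/myusername/authorized_keys

    Alternatively, copy the one-line key shown in PuTTYgen's "Public key for pasting into OpenSSH authorized_keys file" box directly into the file.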

    Read the article

  • mod_usertrack with X-Forwarded-For (proxy) IPs, apache 2.2

    - by ripper234
    I'm using Apache 2.2 with mod_usertrack, behind a reverse proxy (load balancer). Now, the proxy disguises the clients' real IP addresses (it keeps them in the X-Forwarded-For header) and forwards the request along. mod_usertrack uses the client's IP (along with some noise) to generate a GUID for each client. However, because of the proxy, it only sees a single IP, so the generated GUIDs for each client are very similar (even with some possible collisions). I would like to upgrade Apache to version 2.4, but it seems to be somewhat of a project. I did manage to compile it using this post and a few others, only to discover the folder structure does not resemble the one I had before (default Ubuntu). I'm wary of tweaking it myself, and I will be making my life miserable if I want to upgrade the server later on. So, what are my options? Is there a good unofficial repository that packages Apache 2.4 for Oneiric? (Please provide a short how-to; I'm not great at installing packages.) Is there an alternative route to solve this? (Upgrading just the usertrack module? Another module that works with Apache 2.2?)
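
    One route that stays on Apache 2.2 is to restore the real client address before mod_usertrack runs, for example with mod_rpaf (mod_remoteip is the 2.4 equivalent). A sketch, assuming a hypothetical load-balancer address of 10.0.0.1:

        # /etc/apache2/mods-available/rpaf.conf
        <IfModule mod_rpaf.c>
            RPAFenable      On
            RPAFsethostname On
            RPAFproxy_ips   10.0.0.1          # the reverse proxy / load balancer
            RPAFheader      X-Forwarded-For   # take the client IP from this header
        </IfModule>

    With REMOTE_ADDR rewritten this way, mod_usertrack sees each client's real address again and generates distinct GUIDs.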

    Read the article

  • Why is IIS Anonymous authentication being used with administrative UNC drive access?

    - by Mark Lindell
    My account is a local administrator on my machine. If I try to browse to a non-existent drive letter on my own box using a UNC path name (\\mymachine\x$), my account would get locked out. I would also get the following warning (Event ID 100, Type “Warning”) 5 times under the “System” group in Event Viewer on my box: "The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: Logon failure: unknown user name or bad password." I would also get the following warning 3 times: "The server was unable to logon the Windows NT account 'ourdomain\myaccount' due to the following error: The referenced account is currently locked out and may not be logged on to."

    On the domain controller, Event ID 680 of type “Failure Audit” would appear 4 times under the “Security” group in Event Viewer:

        Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
        Logon account: myaccount

    Followed by Event ID 644:

        User Account Locked Out:
        Target Account Name: myaccount
        Target Account ID: OURDOMAIN\myaccount
        Caller Machine Name: MYMACHINE
        Caller User Name: STAN$
        Caller Domain: OURDOMAIN
        Caller Logon ID: (0x0,0x3E7)

    Followed by another 4 errors having Event ID 680. Strangely, every time I tried to browse to the UNC path I would be prompted for a user name and password, the above errors would be written to the log, and my account would be locked out. When I hit “Cancel” in response to the user name/password prompt, the following message box would display: "Windows cannot find \\mymachine\x$. Check the spelling and try again, or try searching for the item by clicking the Start button and then clicking Search." I checked with others in the group using XP and they only got the above message box when browsing to a “bad” drive letter on their box. No one else was prompted for a user name/password and then locked out. So, every time I tried to browse to the “bad” drive letter, behind the scenes XP was trying to log in 8 times using bad credentials (or at least a bad password, as the login was correct), causing my account to get locked out on the 4th try. Interestingly, if I tried browsing to a “good” drive such as “c$” it would work fine. As a test, I tried logging on to my box as a different login and browsing the “bad” UNC path. Strangely, my “ourdomain\myaccount” account was getting locked out – not the one I was logged in as! I was totally confused as to why the credentials for the other login were being passed.

    After much Googling, I found a link referring to some IIS settings I was vaguely familiar with from the past but could not see how they would affect this issue. It was related to the IIS directory security setting “Anonymous access and authentication control”, located under: Control Panel / Administrative Tools / Computer Management / Services and Applications / Internet Information Services / Web Sites / Default Web Site / Properties / Directory Security / Anonymous access and authentication control / Edit / Password. I found no indication while scouring the Internet that this property was related to my UNC problem. But I did notice that this property was set to my domain user name and password. And my password did age recently, but I had not reset the password accordingly for this property. Sure enough, keying in the new password corrected the problem. I was no longer prompted for a user name/password when browsing the UNC path and the account lock-outs ceased. Now, a couple of questions: Why would an IIS setting affect the browsing of a UNC path on a local box?
    Why had I not encountered this problem before? My password has aged several times and I’ve never encountered this problem. And I can’t remember the last time I updated the “Anonymous access” IIS password, it’s been so long. I’ve run the script after a password reset before and never had my account locked out due to the UNC problem (the script accesses UNC paths as a normal part of its processing). Windows Update did install “Cumulative Security Update for Internet Explorer 7 for Windows XP (KB972260)” on my box on 7/29/2009. I wonder if this is responsible.

    Read the article

  • SOCKS proxy on Mac's shared internet

    - by AliBZ
    Hi all, I use my Mac's Internet Sharing to create a wireless network for my iPod touch. I have a Linux server and I use a SOCKS proxy on it. I want to use this proxy on my iPod but I don't know how. I put my shared network connection behind the proxy with the localhost IP, but my iPod isn't behind the proxy. Any ideas?

    Read the article

  • How to add authentication to ssh dynamic port forwarding?

    - by Aalex Gabi
    I am using ssh as a SOCKS server by running this command on the server:

        ssh -f2qTnND *:1080 root@localhost

    There is one problem: anybody can connect to the server and use its internet connection. Options: (1) Use iptables to filter access to the server, but I connect to the server from various non-statically allocated IP addresses, so I would have to edit those filters very frequently, which can be annoying. (2) Install a SOCKS server on the remote host; ultimately this is the last option if there is no other simpler way to do it (I am very lazy). (3) Launch the same command on the client machines; the problem here is that some clients don't run Linux and it is awkward to set up the tunnel (Windows + PuTTY). Is there a way to add authentication to a SOCKS server made using ssh? Bonus question: how do I add encryption between the client and the server (made using ssh)?
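
    One way to get both authentication and encryption without extra software is to stop exposing the SOCKS port publicly and let each client build its own dynamic forward: the ssh login is the authentication and the ssh transport is the encryption. A sketch (the server host name is a placeholder):

        # on the server: bind the existing dynamic forward to loopback only
        ssh -f2qTnND 127.0.0.1:1080 root@localhost

        # on each Linux/Mac client: local SOCKS proxy on 127.0.0.1:1080,
        # authenticated by the ssh login and encrypted end to end
        ssh -N -D 1080 user@your-server.example.com

        # then point the browser at SOCKS5 127.0.0.1:1080

    On Windows, PuTTY can do the same thing under Connection > SSH > Tunnels by adding a forwarded port with the "Dynamic" option.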

    Read the article

  • Why does using nginx as a reverse proxy break local links?

    - by tsvallender
    I've just set up nginx as a reverse proxy, so some sites served from the box are served directly by it and others are forwarded to a Node.js server. The site being served by Node.js, however, is displayed with no CSS or images, so I assume the links are somehow being broken, but don't know why. The following is the only file in /etc/nginx/sites-enabled:

        server {
            listen 80;                          ## listen for ipv4
            listen [::]:80 default ipv6only=on; ## listen for ipv6

            server_name dev.my.site;
            access_log /var/log/nginx/localhost.access.log;

            location / {
                root /var/www;
                index index.html index.htm;
            }

            location /myNodeSite {
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_set_header Host $host;
            }
        }

    I had thought perhaps it was trying to find them in /var/www due to the first entry, but removing that doesn't seem to help.
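
    A likely reason (an assumption, since the Node app's HTML isn't shown): if the app emits root-relative asset URLs such as /stylesheets/style.css, those requests don't start with /myNodeSite, so they match location / and are looked up under /var/www instead of being proxied. A sketch of one workaround, with hypothetical asset prefixes:

        # proxy the asset prefixes the Node app actually uses
        location /stylesheets/ { proxy_pass http://127.0.0.1:8080/stylesheets/; }
        location /images/      { proxy_pass http://127.0.0.1:8080/images/; }

    The cleaner fix is usually to make the app generate its links under /myNodeSite/... (or give it its own server_name at the root) so a single location block covers everything.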

    Read the article

  • How do you cache web pages with a personalized header using a caching reverse proxy such as Squid, Varnish or Nginx?

    - by Continuation
    Pretty much every page of my website is dynamically generated. However, the pages don't change that frequently (kinda similar to a forum page), so I'd like to cache them using a caching reverse proxy such as Squid, Varnish or Nginx. The problem is that each of my logged-in users will see a personalized header saying "Welcome John Doe. Logout" on the upper right corner of the page (just like serverfault), while users who aren't logged in will see a header that says "Login" instead. So basically, even though every user sees the same page in general, they all see a slightly different version due to that personalized header. Is there any way I can cache the "main" part of the page and serve it from cache, while generating the personalized header dynamically for each individual user? This must be a very common problem. How is it solved in general?
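
    One common answer is Edge Side Includes: cache the page body, but mark the header as a fragment the proxy fetches uncached on every request. A sketch of what that looks like in Varnish (3.x VCL syntax; the /userbox fragment URL is made up):

        # in the page template, the personalised corner becomes an include:
        #   <esi:include src="/userbox"/>

        sub vcl_recv {
            if (req.url == "/userbox") {
                return (pass);            # never cache the personalised fragment
            }
        }

        sub vcl_fetch {
            set beresp.do_esi = true;     # let Varnish process <esi:include> tags
        }

    The other common route is to serve a fully anonymous page from cache and fill in the header client-side with a small AJAX call, which works just as well behind Squid or Nginx.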

    Read the article

  • How to use iptables to forward outbound web traffic to a proxy?

    - by jnman
    I've been hitting my head for a while as to how to do this. The scenario is as follows: I want to be able to forward all outbound web traffic from a browser to Tor so that it is properly anonymized. Normally one could just set the HTTP proxy in the browser and be done with it, but this is a browser without the ability to do so; specifically, a mobile browser. So ideally, what could be done is to intercept all web/DNS traffic requests from the browser and send them to Tor. Assume for this that Tor will be running on the device too.
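
    A sketch of the usual transparent-proxy setup for Tor; the port numbers are the conventional choices and the tor user name assumes a Debian-style package, so treat the details as assumptions to verify against your build:

        # torrc: have Tor accept redirected TCP and DNS
        TransPort 9040
        DNSPort   5353

        # redirect locally generated traffic into Tor, but leave Tor's own
        # traffic alone so it can still reach the network
        iptables -t nat -A OUTPUT -m owner --uid-owner debian-tor -j RETURN
        iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5353
        iptables -t nat -A OUTPUT -p tcp --syn      -j REDIRECT --to-ports 9040

    If the browser's traffic arrives over a network interface instead of being generated locally, the same REDIRECT rules go in the PREROUTING chain with -i <interface>.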

    Read the article

  • Is there a command line two-factor authentication verification code generator?

    - by dan
    I manage a server with two-factor authentication. I have to use the Google Authenticator iPhone app to get the 6-digit verification code to enter after entering the normal server password. The setup is described here: http://www.mnxsolutions.com/security/two-factor-ssh-with-google-authenticator.html I would like a way to get the verification code using just my laptop and not my iPhone. There must be a way to seed a command line app that generates these verification codes and gives you the code for the current 30-second window. Is there a program that can do this?
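
    oathtool (from the oath-toolkit package) can do exactly this, seeded with the same Base32 secret the phone app was given; on a server set up with libpam-google-authenticator that secret is the first line of ~/.google_authenticator. A sketch (the secret shown is a made-up placeholder):

        sudo apt-get install oathtool

        # print the TOTP code for the current 30-second window
        oathtool --totp -b "JBSWY3DPEHPK3PXP"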

    Read the article

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address that dispatches HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen to HTTP requests and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run on Ubuntu and are also running an nginx instance. I'm having trouble with caching on a particular website that is fully static: nginx keeps on serving me stale content after file updates, until I either clear /var/cache/nginx/ and restart nginx, or set proxy_cache off for this server and reload the config. Here's the detail of my configuration. On the server (Proxmox), /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 8;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
            use epoll;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            #tcp_nopush on;
            tcp_nodelay on;
            #keepalive_timeout 65;
            types_hash_max_size 2048;
            server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            client_body_buffer_size 1k;
            client_max_body_size 8m;
            large_client_header_buffers 1 1K;
            ignore_invalid_headers on;
            client_body_timeout 5;
            client_header_timeout 5;
            keepalive_timeout 5 5;
            send_timeout 5;
            server_name_in_redirect off;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_vary on;
            gzip_proxied any;
            gzip_comp_level 6;
            # gzip_buffers 16 8k;
            gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            limit_conn_zone $binary_remote_addr zone=gulag:1m;
            limit_conn gulag 50;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/conf.d/proxy.conf:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_hide_header X-Powered-By;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

    /etc/nginx/sites-available/my-domain.conf:

        server {
            listen 80;
            server_name .my-domain.com;
            access_log off;

            location / {
                proxy_pass http://my-domain.local:80/;
                proxy_cache cache;
                proxy_cache_valid 12h;
                expires 30d;
                proxy_cache_use_stale error timeout invalid_header updating;
            }
        }

    On the container (my-domain.local), nginx.conf (everything is inside the main config file -- it's been done quickly...):

        user www-data;
        worker_processes 1;
        error_log logs/error.log;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            gzip off;

            server {
                listen 80;
                server_name .my-domain.com;
                root /var/www;
                access_log logs/host.access.log;
            }
        }

    I've read many blog posts and answers before resolving to post my own question... most answers I can see suggest setting sendfile off; but that didn't work for me. I have tried many other things, double checked my settings and all seems fine. So I'm wondering whether I'm expecting nginx's cache to do something it's not meant to do...?
    Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it requests it... But I now have the feeling I misunderstood many things. Above all, I now wonder how nginx on the server can know that a file in the container has changed. I have seen a directive proxy_header_pass (or something alike); should I use this to let the nginx instance in the container somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
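
    For what it's worth, with this configuration nginx has no way of noticing backend changes: proxy_cache_valid 12h tells it to keep serving a stored response for 12 hours without asking the container again, and expires 30d tells browsers to cache it even longer. A sketch of one common workaround, a bypass header you send when you deploy (the X-Refresh header name is made up):

        location / {
            proxy_pass http://my-domain.local:80/;
            proxy_cache cache;
            proxy_cache_valid 12h;
            # a request carrying X-Refresh skips the cache lookup, and the fresh
            # upstream response replaces the stored entry
            proxy_cache_bypass $http_x_refresh;
            proxy_cache_use_stale error timeout invalid_header updating;
        }

        # after deploying new files:
        curl -s -H "X-Refresh: 1" http://www.my-domain.com/some/updated/file.css > /dev/null

    Dropping the expires 30d line, or lowering proxy_cache_valid, is the simpler fix if slightly stale content is acceptable.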

    Read the article

  • How do I configure NTLM authentication in Firefox on Linux?

    - by tolomea
    Our IT department have NTLM deployed through the intranet servers. I've set network.automatic-ntlm-auth.trusted-uris value in Firefox on some of the Windows machines and that works fine. However setting it in Firefox on the Linux machines is not working. This doesn't surprise me at all, I've no notion of where Firefox on Linux is supposed to get the authentication details from. So how is this process supposed to work? what bits of config / infrastructure am I missing?
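
    The pref itself is the same on Linux; the practical difference is that Firefox there cannot pull credentials from a Windows SSO session, so it falls back to its built-in NTLM code and prompts for DOMAIN\username and password once per session, then reuses them for the trusted URIs. A sketch of setting it via user.js instead of about:config (the intranet host is hypothetical):

        // ~/.mozilla/firefox/<profile>/user.js
        user_pref("network.automatic-ntlm-auth.trusted-uris", "http://intranet.example.com");
        // if the servers actually use Kerberos/Negotiate rather than raw NTLM:
        user_pref("network.negotiate-auth.trusted-uris", "http://intranet.example.com");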

    Read the article

  • How to rewrite the domain part of Set-Cookie in a nginx reverse proxy?

    - by Tobia
    I have a simple nginx reverse proxy:

        server {
            server_name external.domain.com;
            location / {
                proxy_pass http://backend.int/;
            }
        }

    The problem is that Set-Cookie response headers contain ;Domain=backend.int, because the backend does not know it is being reverse proxied. How can I make nginx rewrite the content of the Set-Cookie response headers, replacing ;Domain=backend.int with ;Domain=external.domain.com? Passing the Host header unchanged is not an option in this case. Apache httpd has had this feature for a while, see ProxyPassReverseCookieDomain, but I cannot seem to find a way to do the same in nginx.
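
    Newer nginx versions have a directive for exactly this (it appeared around 1.1.15, so check that your build has it). A sketch:

        location / {
            proxy_pass http://backend.int/;
            # rewrite Domain= in Set-Cookie headers coming back from the backend
            proxy_cookie_domain backend.int external.domain.com;
        }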

    Read the article

  • DNS Spoofing and Xampp as a proxy, how to configure it?

    - by Angelo
    I have a server running Apache with mod_proxy, a module to use my localhost as a proxy server. When somebody on the same LAN visits my server (my localhost through my lan ip), he/she can see only the .html page loaded into my server. Due to DNS Spoofing restrictions on the client, if he/she clicks on a link that refers to something not on my server, Apache says correctly "Object not found", because the client cannot request the page from the Internet (remember, the DNS is spoofed to my localhost). The question is: how to configure Apache to grab the page in place of the client?

    Read the article

  • Can I proxy my no-ip domain using a .htaccess file on my hosted domain?

    - by Dean
    I have a domain http://www.example.com which has a hosting package and website on it. I also have a http://example.no-ip.org domain which contains some content I would like to appear under the same domain. Can I set up a .htaccess file at http://www.example.com/proxy/ which proxies the files at http://www.example.no-ip.org/files/? Similarly, could I host an entire domain in the same way? E.g. http://www.example2.com/ proxying http://example.no-ip.org/files2/. Alternatively, if someone were to say "That's stupid, use this free (or super-cheap) dynamic DNS host:" I would probably accept that answer.
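
    If the host allows it, mod_rewrite's proxy flag can do this from a .htaccess file; it needs mod_proxy enabled on the server, which many shared hosts disable, so treat this as a sketch to test rather than a guarantee:

        # in the .htaccess under http://www.example.com/proxy/
        RewriteEngine On
        RewriteRule ^(.*)$ http://example.no-ip.org/files/$1 [P,L]

    The whole-domain variant is the same rule placed in the document root of example2.com. Without mod_proxy, the fallback is [R=302,L], which redirects instead of proxying and exposes the no-ip address to visitors.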

    Read the article

  • How to configure Apache to act as an SSL proxy to an application server?

    - by ripper234
    I have one physical server that runs an Apache (httpd) server and another web server (let's say Tomcat for the sake of argument) on port 1234. Can I configure the Apache server to act as a proxy for SSL traffic, while keeping the application server blissfully unaware of SSL? What I imagine is: traffic to http://myserver.com/app is redirected to https://myserver.com/app; traffic to https://myserver.com/app is proxied to the application server; my SSL certificate is only installed on the Apache server, not on the application server; other traffic to the Apache server (http://myserver.com/anotherapp) is served directly from the Apache server. What's the best setup to achieve this? (On Ubuntu, if that matters.)
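
    A sketch of that layout with Apache terminating SSL (Ubuntu-flavoured; the certificate file names are placeholders, and mod_ssl, mod_proxy and mod_proxy_http need to be enabled, e.g. with a2enmod ssl proxy proxy_http):

        <VirtualHost *:80>
            ServerName myserver.com
            # send /app to HTTPS, serve everything else (e.g. /anotherapp) directly
            Redirect permanent /app https://myserver.com/app
            DocumentRoot /var/www
        </VirtualHost>

        <VirtualHost *:443>
            ServerName myserver.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/myserver.crt
            SSLCertificateKeyFile /etc/ssl/private/myserver.key

            # plain HTTP to the application server; it never sees SSL
            ProxyPass        /app http://127.0.0.1:1234/app
            ProxyPassReverse /app http://127.0.0.1:1234/app
        </VirtualHost>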

    Read the article

  • How to redirect a specific url through a proxy for multiple services?

    - by CrystalFire
    I have a website hosted on 000webhost.com for free. I am unable to connect directly to the site because Comcast has blocked a portion of 000webhost's servers for free accounts due to other people hosting malicious content. In order to maintain my website, I cannot use my computer to directly connect to the server. I am wondering if there is a way by which I can specifically forward attempts to access the server through a proxy, transparently. The current system that I am on is Windows, but I also have systems running Mac OSX and Linux, so solutions for any system could be fine. I've found answers which work for http, but I'm looking for a solution which will let me use all the other functions as well, such as ftp and ssh.
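
    Since HTTP-only tricks won't cover FTP and SSH, one approach is to route just the hosting provider's address range through an unblocked relay you control (a VPS, a connection outside Comcast, etc.). sshuttle does this transparently for TCP on Linux and OS X; the relay name and address range below are placeholders to replace with the real ones:

        # route only the hosting provider's range through the relay,
        # everything else stays on the normal connection
        sshuttle -r user@relay.example.com 203.0.113.0/24

    On Windows the closest equivalent is usually a PuTTY dynamic (SOCKS) tunnel to the relay plus per-application SOCKS settings, since WinSCP and most FTP/SSH clients can speak SOCKS even when a browser-style proxy setting isn't available.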

    Read the article

  • Ubuntu One Windows Client 3.0.1 - Sync not connecting

    - by Tweezak
    I've got sync working at home on Natty and I need to access files at work. I've just installed client 3.0.1 at work on Windows 7. It's finding my account and sees what devices are already set up for sync but it just repeatedly tries to sync (File sync starting...) and after a while fails (File Sync is disconnected.). This cycle loops endlessly. I'm suspicious that it's a problem with my proxy setup. My employer doesn't use a manual proxy configuration but instead uses an automatic configuration script at a specific http address. Does U1 recognize that kind of setup or is it only looking for a proxy server in the form of an address/port?

    Read the article

  • Reverse proxying only a specific URL

    - by Bart Silverstrim
    I have a web server at www.ourcompany.com running Apache2. Using the proxy modules, I am able to (for example) get 172.16.0.5, an internal IP device, to be accessed on www.ourcompany.com/device. The trouble is that anyone can play with or explore the device using strings sent to www.ourcompany.com/device/change/settings/here.html. I'd like the reverse proxy to only work for a specific URL; www.ourcompany.com/device/you/must/use/this while anything else will be rejected if requested. Is there a setting that can be used to do this, or is it a simple rewrite condition placed in the virtualhost for the site under sites-enabled? What is the simplest, most maintainable way to sanitize requests to the internal device through the reverse proxy? Running Apache2 on Ubuntu.
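
    A sketch of one way to do it with plain ProxyPass rules, relying on the fact that the first matching rule wins, so the specific path is proxied and the rest of /device is explicitly excluded (paths are the ones from the question):

        # exact URL that should reach the device
        ProxyPass        /device/you/must/use/this http://172.16.0.5/you/must/use/this
        ProxyPassReverse /device/you/must/use/this http://172.16.0.5/you/must/use/this

        # everything else under /device is not proxied (and 404s locally)
        ProxyPass /device !

    mod_rewrite with the [P] flag is the alternative if the allowed URL needs to be matched by pattern rather than by prefix.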

    Read the article

  • After 12.10, no GUI can authenticate for administration, such as Software Center

    - by Mike Crowe
    I upgraded to 12.10, and now I can't use Software Center or Network Administration. It keeps giving me authentication errors. When I check /var/log/auth.log, I see:

        Oct 31 21:10:45 mike polkit-agent-helper-1[4665]: pam_unix(polkit-1:auth): authentication failure; logname= uid=1000 euid=0 tty= ruser=root rhost= user=root
        Oct 31 21:10:51 mike polkit-agent-helper-1[4667]: pam_unix(polkit-1:auth): authentication failure; logname= uid=1000 euid=0 tty= ruser=root rhost= user=root

    I found a post which said to change my password, which I've done (both through the Ubuntu settings GUI tool and through recovery mode), to no avail. Any suggestions? TIA M

    Read the article

  • Combining a content management system with ASP.NET

    - by Ek0nomik
    I am going to be creating a site that seems like it requires a blend of a content management system (CMS) and some custom web development (which is done in ASP.NET MVC). I have plenty of web development experience to understand the ASP.NET MVC side of the fence, but, I don't have a lot of CMS knowledge aside from getting one stood up. Right now my biggest question is around integrating security from ASP.NET with the CMS. I currently have an ASP.NET MVC site that handles the authentication for multiple production sites and creates an authentication cookie under our domain (*.example.com). The page acts like a single sign on page since the cookie is a wildcard and can be used in any other applications of the same domain. I'd really like to avoid having users put in their credentials twice. Is there a CMS that will play well with the ASP.NET Forms Authentication given how I have these existing applications structured? As an aside, right now I am leaning towards Drupal, but, that isn't finalized.
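
    On the mechanics: any application running in the same ASP.NET pipeline can honour that wildcard forms cookie, provided every application uses the same cookie name and domain and an identical machineKey so the ticket can be decrypted everywhere. A sketch of the web.config pieces involved (names and key values are made up and shortened):

        <authentication mode="Forms">
          <forms name=".EXAMPLEAUTH" loginUrl="https://login.example.com/" domain=".example.com" />
        </authentication>
        <machineKey validationKey="A1B2...same-on-every-app..."
                    decryptionKey="C3D4...same-on-every-app..."
                    validation="SHA1" decryption="AES" />

    With that in place, a .NET-based CMS deployed as another application under the same domain can typically recognise the ticket, whereas a PHP CMS such as Drupal cannot read it directly and would need some other SSO bridge.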

    Read the article

  • php user authentication libraries / frameworks ... what are the options?

    - by es11
    I am using PHP and the CodeIgniter framework for a project I am working on, and require a user login/authentication system. For now I'd rather not use SSL (it might be overkill, and the fact that I am using shared hosting discourages this). I have considered using OpenID but decided that since my target audience is generally not technical, it might scare users away (not to mention that it requires mirroring of login information etc.). I know that I could write a hash-based authentication (such as SHA-1) since there is no sensitive data being passed (I'd compare the level of sensitivity to that of Stack Overflow). That being said, before making a custom solution, it would be nice to know if there are any good libraries or packages out there that you have used to provide semi-secure authentication. I am new to CodeIgniter, but something that integrates well with it would be preferable. Any ideas? (I'm open to criticism on my approach and open to suggestions as to why I might be crazy not to just use SSL.) Thanks in advance. Update: I've looked into some of the suggestions. I am curious to try out Zend_Auth since it seems well supported and well built. Does anyone have experience using Zend_Auth in CodeIgniter (is it too bulky?), and do you have a good reference on integrating it with CI? I do not need any complex authentication schemes, just a simple login/logout/password-management authorization system. Also, dx_auth seems interesting, however I am worried that it is too buggy. Has anybody else had success with this? I realized that I would also like to manage guest users (i.e. users that do not login/register) in a similar way to Stack Overflow, so any suggestions that have this functionality would be great.

    Read the article

  • Forms authentication: how do you store username password in web.config?

    - by Nick G
    I'm used to using Forms Authentication with a database, but I'm writing a little internal utility and the app doesn't have a database, so I want to store the username and password in web.config. However, for some reason forms authentication is still trying to access SQL Server and I can't see how to stop it doing this and pick up the credentials from web.config. What am I doing wrong? I just get the error "Failed to generate a user instance of SQL Server due to a failure in impersonating the client. The connection will be closed." Here are the relevant sections of my web.config:

        <configuration>
          <system.web>
            <authentication mode="Forms">
              <forms loginUrl="~/Login.aspx" timeout="60" name=".LoginCookie" path="/">
                <credentials passwordFormat="Clear">
                  <user name="user1" password="[pass]" />
                  <user name="user2" password="[pass]" />
                </credentials>
              </forms>
            </authentication>
            <authorization>
              <deny users="?" />
            </authorization>
          </system.web>
        </configuration>
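
    For <credentials> stored in web.config, the login page has to call FormsAuthentication.Authenticate itself; the SQL Server access usually comes from the default Membership/RoleManager providers kicking in instead. A sketch of the Login.aspx code-behind (control names are hypothetical):

        using System.Web.Security;

        protected void btnLogin_Click(object sender, EventArgs e)
        {
            // checks the user name/password against <credentials> in web.config
            if (FormsAuthentication.Authenticate(txtUser.Text, txtPassword.Text))
            {
                FormsAuthentication.RedirectFromLoginPage(txtUser.Text, false);
            }
            else
            {
                lblError.Text = "Invalid user name or password.";
            }
        }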

    Read the article

  • Authentication and Security in my website - need advice please.

    - by Ichirichi
    Hi, I am using a database with a list of usernames/passwords, and a simple web form that allows users to enter their username/password. When they submit the page, I simply do a stored procedure check to authenticate. If they are authorised, then their user details (e.g. username, DOB, address, company address, other important info) are stored in a custom User object and then in a session. This custom User object that I created is used throughout the web application, and also in a sub-site (session sharing). My questions/problems are: (1) Is my method of authentication the correct way to do things? (2) I find users complaining that their sessions have expired although they "were not idle", possibly due to the app pool recycling? They type large amounts of text, find that their session has expired, and thus lose all the text typed in. I am uncertain whether the session really does reset sporadically, but will Forms Authentication using cookies/cookieless resolve the issue? (3) Alternatively, should I build and store the User object in a session, cookie or something else instead, in order to be more "correct" and avoid cases like in point #2? (4) If I go down the Forms Authentication route, I believe I cannot store my custom User object in a Forms Authentication cookie, so does that mean I would store the UserID and then recreate the user object on every page? Would this not be a huge increase on the server load? Advice and answers much appreciated. L

    Read the article
