Search Results

Search found 34094 results on 1364 pages for 'open authentication'.


  • Apache mod_proxy with SSL not redirecting

    - by simonszu
    I have a custom server running behind an Apache reverse proxy. Since the custom server can only handle HTTP traffic, I am trying to use Apache to wrap proper SSL around it, and for some kind of HTTP authentication. So I enabled mod_proxy and mod_ssl and modified sites-available/default-ssl. The config is as follows:

        <Location /server>
            Order deny,allow
            Allow from all
            AuthType Basic
            AuthName "Please log in"
            AuthUserFile /etc/apache2/htpasswd
            Require valid-user
            ProxyPass http://192.168.1.102:8181/server
            ProxyPassReverse http://192.168.1.102:8181/server
        </Location>

    The custom server is accessible from the internal network via the location specified in the ProxyPass directive. However, when the proxy is accessed from the outside, it presents the login prompt, and after I authenticate successfully, I get a blank page with the words "The resource can be found at http://192.168.1.102:8181/server". When I type the external URL again in an already authenticated browser instance, I am properly redirected to the server frontend. The access.log is full of entries stating that my browser makes successful GET requests, and the proxy is happily serving the /server resource. However, the resource doesn't contain the server's frontend, just this blank page with those words on it.
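    That "resource can be found at" page is the body of an HTTP redirect issued by the backend; the browser is rendering the body instead of following a rewritten Location header, which suggests ProxyPassReverse isn't matching the redirect the backend actually sends. A common fix is to make the trailing slashes consistent on both sides and catch the bare path explicitly. A sketch, assuming the backend redirects /server to /server/:

        # keep the trailing slashes consistent and catch the bare path
        Redirect permanent /server /server/
        <Location /server/>
            # ...auth directives as above...
            ProxyPass http://192.168.1.102:8181/server/
            ProxyPassReverse http://192.168.1.102:8181/server/
        </Location>

    If the backend redirects to a hostname or port that differs from what ProxyPassReverse names, the header passes through untouched, so comparing the backend's raw Location header (e.g. curl -sI http://192.168.1.102:8181/server) against the directive is a quick check.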


  • Bluehost Emails Getting Blocked

    - by colithium
    A site for my client has the run-of-the-mill "website with users" email pattern. Create an account, get an activation email. Get an email when a subscription is expiring, etc. The site is hosted on Bluehost and currently it uses PHP's mail() function. There isn't much configuration allowed (as far as I know). The trouble is, about a third of these emails disappear into the void. They aren't in spam or junk folders, there's no bounce message, they just cease to exist. I've read about Bluehost email troubles but I can't figure out what my options are for fixing it. These aren't marketing emails, i.e. they have user-specific information contained within them. I suppose if a solution offers a good templating system that would be fine. What are my options? Excerpt of headers when delivered to a Gmail address:

        Received-SPF: neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record
            for domain of domain@box###.bluehost.com) client-ip=00.000.000.000;
        DomainKey-Status: good
        Authentication-Results: mx.google.com; spf=neutral (google.com: 00.000.000.000 is neither permitted nor
            denied by best guess record for domain of domain@box###.bluehost.com)
            smtp.mail=domain@box###.bluehost.com; domainkeys=pass [email protected]
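    Those headers point at the likely culprit: the mail goes out as domain@box###.bluehost.com and the sending domain has no SPF record of its own ("neutral ... by best guess record"), which is exactly the profile that strict receivers silently drop. Two things usually help: publish an SPF record authorizing Bluehost's servers, and send through the account's authenticated SMTP rather than bare mail(). A sketch of the DNS side:

        ; TXT record on the sending domain
        ; (the include: target is an assumption; check Bluehost's docs for the exact value)
        example.com.  IN  TXT  "v=spf1 a mx include:bluehost.com ~all"

    On the code side, any SMTP-authenticated mailer (for example PHPMailer pointed at the account's mail host with the mailbox's credentials) makes the envelope sender match the domain instead of the box###.bluehost.com default.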


  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a freeradius setup that is being used to provide authentication for users on a wireless network. The access points are all Mikrotik hardware and the users are connected 24/7. We've been using Daloradius with MySQL and freeradius 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all the accounting information is present. So he started poking around at this link: http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA and was looking specifically at the following section ("Since our users may be connected for more than 24 hours at a time we keep this in here, it will reset some attributes daily so that the accounting packets work correctly"):

        always fail {
            rcode = fail
        }
        always reject {
            rcode = reject
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    However, that link references freeradius 1 and I can't find this in the radiusd.conf file for freeradius 2. What does it do, and could it be a reason I'm missing data? EDIT: I have found one issue. We have a backup freeradius server that is also receiving the accounting packets. Although they are replicating, it's only a master/slave configuration. If the slave receives accounting packets, it won't replicate them back to the master. Although I suspect fixing this might solve it, the boss is not convinced because of the always module. Is there anything special I need to configure in the Mikrotik APs or freeradius 2 for clients connected 24/7?
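    The always module just returns a fixed result code (it is how noop, reject, ok and friends are implemented); it doesn't touch accounting data itself, so it is unlikely to explain missing rows. In FreeRADIUS 2 those instances moved out of radiusd.conf into their own file. A sketch of where to look, assuming the Debian/Ubuntu layout:

        # /etc/freeradius/modules/always (FreeRADIUS 2.x)
        always fail {
            rcode = fail
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    The split accounting stream to the non-replicating slave is a much more plausible source of gaps: any Accounting-Request that only the slave acknowledges will never reach the master's database.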


  • How to give wife emergency access to logins, passwords, etc.?

    - by Torben Gundtofte-Bruun
    I'm the digital guru in my household. My wife is good with email and forum websites, but she trusts me with all our important digital stuff, such as online banking and other things that require passwords, but also family photos and the plethora of other digital things in a modern home. We discuss relevant actions, but it's always me who executes them. If I should get "hit by a bus", my wife would be thoroughly stranded: she would have no idea what digital stuff is where on our computer, how to access it, what online accounts we have, or what their login credentials are. It would also leave my many public presences (personal websites, email accounts, social networks, etc.) unresolved. To complicate things, I'm not one of those people who use "password" as their password everywhere; I use a mix of SuperGenPass and LastPass, and also two-factor authentication whenever possible. I don't have much hope that she would find her way through a written explanation of all that in a stressful situation. I could just tell her to ask my tech-savvy twin brother, and entrust him with my LastPass master passphrase. I feel that would have a high chance of success, but it's inelegant and leaves my wife without control of the information. How can I ensure that my wife has access to my digital remains?


  • SBS 2011 Essentials and too many new Mac users

    - by Harry Muscle
    We currently have about 15 users on a Windows SBS 2011 Essentials server. I've just been informed that we plan to bring aboard about 15 more users who will be using Macs. We'll be using a Mac server to manage the 15 new Macs; however, I'm looking for advice on how best to set this all up. Ideally I would just add the 15 new Mac users to Active Directory and set up the Mac server to authenticate against AD. Unfortunately, SBS 2011 Essentials has a limit of 25 users, so adding these new users to AD won't work unless we upgrade the Windows server (which I'd rather avoid, since it's a lot of work and a lot of money). That leaves the option of creating accounts for these 15 Mac users on the Mac server only. The problem this creates, though, is how to share files between Mac users and Windows users when they use different systems for network authentication. Any advice (short of upgrading to SBS Standard) is highly appreciated. Thanks, Harry. P.S. We don't run Exchange or anything else on our server; it's mainly used for file sharing and enforcing security via group policies.


  • OpenBSD logins via SSH seem to be ignoring my configured radius server

    - by Steve Kemp
    I've installed and configured a radius server on my localhost - it is delegating auth to a remote LDAP server. Initially things look good: I can test via the console:

        # export user=skemp
        # export pass=xxx
        # radtest $user $pass localhost 1812 $secret
        Sending Access-Request of id 185 to 127.0.0.1 port 1812
                User-Name = "skemp"
                User-Password = "xxx"
                NAS-IP-Address = 192.168.1.168
                NAS-Port = 1812
        rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=185,

    Similarly I can use the login tool to do the same thing:

        bash-4.0# /usr/libexec/auth/login_radius -d -s login $user radius
        Password: $pass
        authorize

    However, remote logins via SSH are failing, and so are invocations of "login" started by root. Looking at /var/log/radiusd.log I see no actual log of success/failure, which I do see when using either of the previous tools. Instead sshd is just logging:

        sshd[23938]: Failed publickey for skemp from 192.168.1.9
        sshd[23938]: Failed keyboard-interactive for skemp from 192.168.1.9 port 36259 ssh2
        sshd[23938]: Failed password for skemp from 192.168.1.9 port 36259 ssh2

    In /etc/login.conf I have this:

        # Default allowed authentication styles
        auth-defaults:auth=radius:
        ...
        radius:\
                :auth=radius:\
                :radius-server=localhost:\
                :radius-port=1812:\
                :radius-timeout=1:\
                :radius-retries=5:
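    One thing worth checking: OpenBSD's BSD auth lets login.conf specify per-service authentication lists, and sshd consults the ssh-specific one before the plain auth entry. If only auth names radius, password logins over SSH may not be routed through it. A sketch, assuming the users fall under the default class:

        # /etc/login.conf
        auth-defaults:auth=radius:auth-ssh=radius:

    After editing, rebuild the database if the system uses one (cap_mkdb /etc/login.conf) and retry while watching /var/log/radiusd.log; if requests now appear there, the auth style lookup was the issue.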


  • Strange Apache Webdav situation (OSX Will connect, Ubuntu will not)

    - by mewrei
    So basically my situation is that I have an Apache 2.2 webserver running on Linux on another box, and I have it configured to serve up WebDAV. Now here's the weird part: I can access the server just fine on my Mac using the "Connect to Server" dialog (I even moved about 5GB of files over the connection). On my Ubuntu desktop, cadaver will connect as well and allow me to browse. However, when I try to use Xmarks (BYOS Edition) or the GNOME "Connect to Server" dialog, it gives me a 403 Forbidden error. My server does digest authentication, if that makes any difference. Here's part of my apache2.conf file:

        <VirtualHost *:80>
            DocumentRoot "/path"
            <Directory "/path">
                Dav on
                AuthType Digest
                AuthName iTools
                AuthDigestDomain "/"
                AuthUserFile /path/to/WebDavUsers
                Options None
                AllowOverride None
                <LimitExcept GET HEAD OPTIONS>
                    require valid-user
                </LimitExcept>
                Order allow,deny
                Allow from All
            </Directory>
            <Directory "/path/*/Public">
                Options +Indexes
            </Directory>
            <Directory "/path/user">
                <LimitExcept GET HEAD OPTIONS>
                    require user user
                </LimitExcept>
            </Directory>
        </VirtualHost>
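    Since cadaver succeeds and gvfs fails against the same URL, the Apache logs should say which check rejects the gvfs requests (a digest realm mismatch, a method caught by the LimitExcept lists, etc.). A quick way to compare the two clients, with the log paths assumed to be the Debian defaults:

        sudo tail -f /var/log/apache2/error.log /var/log/apache2/access.log
        # in another terminal: connect once with cadaver, once from GNOME,
        # and compare the request methods and status codes side by side

    If the 403 shows up on a method such as PROPFIND or OPTIONS, the two <LimitExcept> blocks are the place to look.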


  • OS X Keeps prompting me for SSH private key passphrase (OS X 10.6.8)

    - by Danny Englander
    I have a private key to ssh into my server and the connection works. In my ~/.ssh/config I have:

        Host myhost
            HostName xxx.xxx.xxx.xx
            GlobalKnownHostsFile ~/.ssh/known_hosts
            Port 22
            User myuser
            IdentityFile ~/.ssh/mykey_dsa
            IdentitiesOnly yes

    ...and then I type ssh myhost. Every time I connect, I get the Mac OS X keychain prompt and I tell OS X to remember the passphrase, but when I disconnect from ssh and re-connect, I am prompted to add the passphrase to the keychain again. This is only a recent problem, so I suspect an issue with Keychain? To be clear, I can 're-add' it to the keychain every time and connect, but this defeats the purpose. The permissions on my DSA key are set at 600, or -rw-------@. I tried repairing disk permissions, but that did no good. My Google-fu is also failing me; nothing of use came up. So I am not sure if this is an OS X / keychain issue or an SSH issue. Update: when I try ssh -vvv myhost, I think it reveals the issue:

        debug1: Trying private key: /Users/danny/.ssh/mykey_dsa
        debug1: PEM_read_PrivateKey failed
        debug1: read PEM private key done: type <unknown>
        debug3: Not a RSA1 key file /Users/danny/.ssh/mykey_dsa.
        debug1: read PEM private key done: type DSA
        Identity added: /Users/danny/.ssh/mykey_dsa (/Users/danny/.ssh/mykey_dsa)
        debug1: read PEM private key done: type DSA
        debug3: sign_and_send_pubkey
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentication succeeded (publickey).

    ...and after that I get connected. I think the crux of the matter is: PEM_read_PrivateKey failed
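    On OS X the passphrase lands in the keychain reliably only when it is added through Apple's patched ssh-add; if the keychain entry has gone stale, re-adding it explicitly usually stops the recurring prompt. A sketch (the -K flag is Apple-specific):

        # store the passphrase in the login keychain and load the key into the agent
        ssh-add -K ~/.ssh/mykey_dsa
        ssh-add -l    # verify the key is listed

    The PEM_read_PrivateKey failed line on its own is ordinary noise for an encrypted key before the passphrase is supplied, so the keychain entry, rather than the key file, is the more likely suspect.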


  • Where is the TFS database?

    - by Blanthor
    I've been using TFS 2010 with no problems. I tried adding a user and got the following error message: "TF30063: You are not authorized to access <serverName>\DefaultCollection. - The remote server returned an error: (401) Unauthorized." I remoted into the server, <serverName>, and opened the TFS Console. The logs mentioned a connection string:

        ConnectionString: Data Source=<serverName>\SS2008;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True

    While remoted in, I opened SQL Server 2008 Management Studio, connecting to the (local) server with Windows Authentication. It shows the connection to be (local) (SQL Server 9.04.03 - <serverName>\Admin), and there is no Tfs_DefaultCollection database. Can someone tell me what is going on? Was I wrong in connecting to this instance of the database (i.e. is the log file the wrong place to find the connection string)? Is the database so corrupted that SQL Server Management Studio cannot see it anymore, although TFS could? Should I be logging into Management Studio as user SS2008? By the way, I don't know of any such credentials.
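    Two different instances are in play here: the connection string points at a named instance, <serverName>\SS2008, while connecting to (local) reaches the default instance, and the 9.x version string that came back is SQL Server 2005, which TFS 2010 doesn't run on. SS2008 is an instance name, not a login. Connecting to the named instance should show the database; a quick check:

        rem from a command prompt on the server (integrated auth)
        sqlcmd -S <serverName>\SS2008 -E -Q "SELECT name FROM sys.databases"

    Or enter <serverName>\SS2008 as the server name in Management Studio's connect dialog.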


  • Some clients cannot connect to Server 2008 R2 VPN

    - by Robl
    Hi all, I have a Server 2008 R2 box set up as a VPN server. We have created a Windows group called vpn-users to control access to the VPN. Clients are all Windows 7 Pro. This all seems to work fine, except some users cannot connect to the VPN! For example, I try to log on to the VPN from a client and get an error saying the server refused the connection due to a policy in place, specifically the authentication type. Fine, I think. So I drop that user into the vpn-users group created for this and try again, and hey presto, the user can now log on! Great. Now I try this with another user, but this time I get the same error even though I have dropped them into the vpn-users group!! So does anyone have any idea why this works for some users and not for others? I have tried moving the user from certain OUs in AD to others, copying the account, and taking the user out of the vpn-users group and then putting them back in, but I get the same error each time. Any thoughts, anyone?
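    Group membership is evaluated from the user's token when the policy runs, so stale AD replication or a nested-group difference between the working and failing accounts is worth ruling out before touching the policy itself. A couple of checks, run on the VPN/NPS server (the sample account name is made up):

        rem is the membership visible from the DC this server talks to?
        dsquery user -samid jbloggs | dsget user -memberof -expand

        rem NPS also logs exactly which policy rejected each attempt:
        rem Event Viewer > Custom Views > Server Roles > Network Policy and Access Services

    If the failing user's membership shows up fine, comparing the rejection events for the two accounts usually names the condition (authentication type, tunnel type, Windows group) that differs.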


  • Referencing groups/classes from Puppet dashboard in my site manifest

    - by Banjer
    I'm using Puppet Dashboard as my ENC and I'm not sure how to reference or use class and group classifications from /etc/puppet/manifests/site.pp. I have two groups defined in the dashboard: CentOS6 and SLES11. What should my site.pp look like if I want to include a certain list of modules in the CentOS6 group and a certain list of modules in the SLES11 group? I'm trying to do something like this:

        # /etc/puppet/manifests/site.pp
        node basenode {
            include hosts
            include ssh::server
            include ssh::client
            include authentication
            include sudo
            include syslog
            include mail
        }

        node 'CentOS6' inherits basenode {
            include profile
        }

        node 'SLES11' inherits basenode {
            include usrmounts
        }

    I have OS-specific case statements within my modules, but there are some modules that will only be applied to a certain distro. So I suppose I have two questions: (1) Is this the best way to apply modules/resources in an OS-specific manner, or does the above make you want to vomit? (2) Regardless of #1, I'm still curious how to reference classes, groups, and nodes from Dashboard within my manifests. I've read the External Nodes doc, but I'm not seeing how they correspond to manifests. Thanks all.
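    Dashboard groups never appear as names that site.pp can match: node blocks match certnames (or regexes over them), while the ENC's contribution arrives as a classes/parameters list that the master merges with the matching node definition. So node 'CentOS6' will only ever match a machine literally named CentOS6. The usual pattern is either to assign the distro-specific classes to the groups in Dashboard and keep site.pp minimal, or to branch on facts instead. A sketch of the fact-based version:

        node default {
            include hosts
            include ssh::server
            include ssh::client
            include authentication
            include sudo
            include syslog
            include mail

            case $operatingsystem {
                'CentOS': { include profile }
                'SLES':   { include usrmounts }
                default:  {}
            }
        }

    With that in place, the Dashboard groups become purely organizational, or can carry the extra classes themselves; either way, nothing in site.pp needs to name them.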


  • How to get a new-pssession in PowerShell to talk to my ICS-connected laptop for Remoting

    - by Scott Bilas
    If I have my laptop on the LAN, then PowerShell remoting works fine from my workstation to the laptop. However, the LAN is wireless, so sometimes I will connect on a wire to my workstation. It has two ethernet ports, so I have the secondary one wired up to share to the laptop using Win7's Internet Connection Sharing. (By the way, I know that avoiding ICS would solve the problem, but that's not an option right now.) So my question is: what magic registry bits or command line options do I need to flip to get remoting to work to my laptop through ICS? Here's what happens when I try it:

        new-pssession -computername 192.168.137.161
        [192.168.137.161] Connecting to remote server failed with the following error message: The WinRM client
        cannot process the request. Default authentication may be used with an IP address under the following
        conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials
        are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might
        not be authenticated. For more information on how to set TrustedHosts run the following command:
        winrm help config. For more information, see the about_Remote_Troubleshooting Help topic.
            + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException
            + FullyQualifiedErrorId : PSSessionOpenFailed

    I'm having a hard time understanding the documentation for PowerShell and WinRM. I've tried messing with allowing ports in the firewall and setting TrustedHosts to * on my workstation (I don't think this is a good idea on the laptop). I have no idea where to go from here and would appreciate any help.
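    The error text actually names the two requirements for an IP-address target over HTTP: the address must be in the client's TrustedHosts list and explicit credentials must be supplied (Kerberos can't authenticate a bare IP). A minimal sketch, run elevated on the workstation:

        # trust only the laptop's ICS address rather than *
        Set-Item WSMan:\localhost\Client\TrustedHosts -Value '192.168.137.161' -Concatenate -Force
        Get-Item WSMan:\localhost\Client\TrustedHosts   # confirm

        # explicit credentials are required for a non-domain-resolvable target
        New-PSSession -ComputerName 192.168.137.161 -Credential (Get-Credential)

    TrustedHosts only relaxes checks on the client side; the laptop's own WinRM listener and firewall rules are unchanged by it.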


  • Is Subversion (SVN) supported on Ubuntu 10.04 LTS 32bit?

    - by Chad
    I've set up Subversion on Ubuntu 10.04 but can't get authentication to work. I believe all my config files are set up correctly; however, I keep getting prompted for credentials on an svn checkout, as if there were an issue with apache2 talking to svnserve. If I allow anonymous access, checkout works fine. Does anybody know if there is a known issue with Subversion and 10.04, or see an error in my configuration? Below is my configuration:

        # fresh install of Ubuntu 10.04 LTS 32bit
        sudo apt-get install apache2 apache2-utils -y
        sudo apt-get install subversion libapache2-svn subversion-tools -y
        sudo mkdir /svn
        sudo svnadmin create /svn/DataTeam
        sudo svnadmin create /svn/ReportingTeam

        # Set up the svn config file
        sudo vi /etc/apache2/mods-available/dav_svn.conf
        # replace file with the following:
        <Location /svn>
            DAV svn
            SVNParentPath /svn/
            AuthType Basic
            AuthName "Subversion Server"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
            AuthzSVNAccessFile /etc/apache2/svn_acl
        </Location>

        sudo touch /etc/apache2/svn_acl
        # replace file with the following:
        [groups]
        dba_group = tom, jerry
        report_group = tom

        [DataTeam:/]
        @dba_group = rw

        [ReportingTeam:/]
        @report_group = rw

        # Start/stop subversion automatically
        sudo /etc/init.d/apache2 restart
        cd /etc/init.d/
        sudo touch subversion
        sudo cat 'svnserve -d -r /svn' > svnserve
        sudo cat '/etc/init.d/apache2 restart' >> svnserve
        sudo chmod +x svnserve
        sudo update-rc.d svnserve defaults

        # Add svn users
        sudo htpasswd -cpb /etc/apache2/dav_svn.passwd tom tom
        sudo htpasswd -pb /etc/apache2/dav_svn.passwd jerry jerry

        # Test by performing a checkout
        sudo svnserve -d -r /svn
        sudo /etc/init.d/apache2 restart
        svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam
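    Subversion is fully supported on 10.04 (both packages above come straight from the main repositories), so the distro isn't the problem. The suspect detail is the -p flag on htpasswd: it stores passwords in plaintext, and Apache on Unix cannot verify plaintext entries, so every Basic-auth attempt fails and the client re-prompts. Recreating the file with the default hashing is the first thing to try:

        # same users, but hashed entries Apache can actually verify
        sudo htpasswd -cb /etc/apache2/dav_svn.passwd tom tom
        sudo htpasswd -b /etc/apache2/dav_svn.passwd jerry jerry
        sudo /etc/init.d/apache2 restart
        svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam

    Also note that svnserve plays no part in http:// checkouts; those go through mod_dav_svn inside Apache, so the svnserve init script is unrelated to this failure.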


  • SQL 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. The instances are running on different nodes of the same cluster, although I have tried having them both on the same node with identical results. SQL Mail operates perfectly on both instances. Database Mail operates perfectly on the newly installed instance. On the upgraded instance, Database Mail does not send any mail. Of course, I am not positive that the fact this instance was upgraded has anything to do with the issue, but it might. The configuration of my Database Mail profile and account looks identical to my functioning instance. In the configuration of the 'Alerts' tab in the SQL Agent properties I have tried selecting both Database Mail and SQL Mail, to no avail. Both instances use the same SMTP server with the same authentication (domain with db engine account). All messages sent via sp_send_dbmail, and those sent via the 'test email' option, are visible in the sysmail_allitems queue and remain there as 'unsent'. The send_status eventually changes to 'failed'. The only messages in the sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain\myuser' and 'activation successful'. Selecting from the externalmailqueue returns the same number of rows as sysmail_allitems. I have tried bouncing the agent, bouncing the entire instance, and moving the functioning instance to the other node in the cluster. Any thoughts? Thanks.
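    Queued-but-never-sent items on an instance upgraded from SQL 2000 very often come down to Service Broker being disabled in msdb (Database Mail delivers through Broker, and upgraded msdb databases don't always have it on). Worth checking before anything else; a sketch:

        -- is Service Broker on in msdb?
        SELECT is_broker_enabled FROM sys.databases WHERE name = 'msdb';

        -- if it returns 0, enable it (needs exclusive access to msdb)
        ALTER DATABASE msdb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

        -- then restart the mail queue and retest
        EXECUTE msdb.dbo.sysmail_stop_sp;
        EXECUTE msdb.dbo.sysmail_start_sp;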


  • Adding new SPNs to existing service ids

    - by jmh
    We have a Tomcat server using spring-security Kerberos to authenticate users to the webpage against Active Directory. There are around 25 domain controllers. The site has two CNAME-based DNS aliases. The site currently has one service ID, with SPNs registered for the DNS A record as well as each of the CNAMEs. While everything is working right now, I don't know how to reliably change this configuration without possible downtime. The reason is that clients cache Kerberos tickets (http://www.juniper.net/techpubs/en_US/uac4.2/topics/concept/user-role-active-directory-about.html):

        The 'kerbtray.exe' program is helpful for viewing and deleting Kerberos tickets on the endpoint. Old
        tickets must be purged from the endpoint if SPNs are updated or passwords are changed (assuming the
        endpoint still has a cached copy of the ticket from a prior SPNEGO request to the MAG Series device).
        During testing, you should purge tickets before each authentication request.

    There is also the "klist" program, used to inspect/delete cached tickets: http://technet.microsoft.com/en-us/library/hh134826.aspx. So if each of the clients (users running Windows) who connect to my web server has Kerberos tickets that become invalid as soon as I update the SPNs or passwords, how do I ensure changes are seamless? Are there any operations that can be done safely? I can't just ask all of the users to install klist and delete their old tickets.
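    Adding SPNs is the safe half of this: a cached ticket names the SPN the client originally requested and is encrypted with the service account's key, so registering extra SPNs on the same account neither invalidates existing tickets nor requires a purge. It is the password (key) change that breaks outstanding tickets. New aliases can therefore be rolled out with something like the following (account and alias names are made up):

        rem additive, checks for duplicates, no effect on existing tickets
        setspn -S HTTP/newalias.example.com EXAMPLE\svc-tomcat
        setspn -L EXAMPLE\svc-tomcat

    If the password must change too, a maintenance window keeps the pain bounded: tickets expire on their own (10 hours by default), and affected clients recover with klist purge or a logoff/logon rather than any per-machine install; klist ships with Windows 7 / Server 2008 R2 and later.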


  • Hyper-V Virtual Machine won't respond over network

    - by Brad Gignac
    Recently, one of our Hyper-V virtual machines has periodically stopped responding over the network. It seems to happen every few days, and occasionally up to several times a day. I am by no means a sysadmin, so any direction you guys could provide would be very welcome. I've included everything I know to include below; if you need any additional information, I'll be glad to add it. I can connect through the Hyper-V console, but I can't connect to network shares, IIS web apps, RDP, or ping. Memory usage seems normal (3 of 4 GB) and processor usage seems low. We don't know the exact time the server goes down, but the following error appears consistently around that time:

        Error 5719, NETLOGON
        This computer was not able to set up a secure session with a domain controller in domain *** due to the
        following: There are currently no logon servers available to service the logon request. This may lead to
        authentication problems. Make sure that this computer is connected to the network. If this problem
        persists, please contact your domain administrator.
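    The NETLOGON 5719 is a symptom of the network dropping, not a cause, so the useful data is what the guest sees while it is unreachable. Since the Hyper-V console still works at that point, a few checks run from it narrow things down (the domain name and gateway below are placeholders):

        rem from cmd.exe inside the guest while the network is down
        ipconfig /all
        ping <default-gateway>
        nltest /sc_query:YOURDOMAIN

    If the guest still holds its address but can't even reach the gateway, the problem sits in the virtual switch or the host's physical NIC (driver and offload settings on the host adapter are common culprits for this every-few-days pattern); if the gateway answers, look instead at name resolution and DC reachability.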


  • What command should be used to connect via SSH through squid proxy?

    - by Raul Cardoso
    I have set up a Squid HTTP proxy (on CentOS) and intend to use it for SSH connections as well. I managed to configure PuTTY (on a Windows client) to use this proxy when connecting over SSH, and confirmed on the target host that the SSH connection was coming from the proxy server's IP instead of the Windows client's IP. I used targethost:22 for SSH and proxyserv:3128 for the proxy (along with proxy credentials). I'm now having problems connecting to the target host from Ubuntu through the same proxy server. I have tried the following:

        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22
        Enter proxy authentication password for test@proxyserv:
        SSH-2.0-OpenSSH_6.2p2 Ubuntu-6

    It hangs on the last line, expecting some input. All attempts resulted in a "Protocol mismatch." error. PuTTY successfully connects to the HTTP proxy and sends credentials, showing me the SSH login right away. How do I do (with commands in Ubuntu) the same thing PuTTY does? Is there any way other than the connect-proxy command to do this? Edit: also tried the following, with the same result ("Protocol mismatch"):

        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22 ssh -l myshel_login

    Edit: solution details (thanks to NickW for pointing the right way): installed corkscrew and added this to ssh_config:

        Host targethost
            ProxyCommand corkscrew proxyserv 3128 %h %p /etc/ssh/proxypass

    created the proxypass file (login:password), restarted ssh, and used a simple ssh command:

        ssh mylogin@targethost

    (the SSH password was asked for as usual).
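    The "Protocol mismatch" makes sense in hindsight: run standalone, connect-proxy just opens the tunnel and prints the server's SSH-2.0... banner to the terminal; nothing speaks the SSH protocol back, so the session dies. The tool works when ssh itself drives it as a ProxyCommand, which also gives a one-off equivalent without installing corkscrew:

        # same idea as the corkscrew config, but inline (ssh substitutes %h and %p)
        ssh -o ProxyCommand="connect-proxy -H test@proxyserv:3128 %h %p" mylogin@targethost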


  • Choose identity from ssh-agent by file name

    - by leoluk
    Problem: I have some 20-30 ssh-agent identities. Most servers refuse authentication with "Too many failed authentications", as SSH usually won't let me try 20 different keys to log in. At the moment, I am specifying the identity file for every host manually, using the IdentityFile and IdentitiesOnly directives, so that SSH will try only one key file, which works. Unfortunately, this stops working as soon as the original keys aren't available anymore. ssh-add -l shows me the correct paths for every key file, and they match the paths in .ssh/config, but it doesn't work. Apparently, SSH selects the identity by public key signature and not by file name, which means that the original files have to be available so that SSH can extract the public key. There are two problems with this: it stops working as soon as I unplug the flash drive holding the keys, and it renders agent forwarding useless, as the key files aren't available on the remote host. Of course, I could extract the public keys from my identity files and store them on my computer, and on every remote computer I usually log into. That doesn't look like a desirable solution, though. What I need is a way to select an identity from ssh-agent by file name, so that I can easily pick the right key using .ssh/config or by passing -i /path/to/original/key, even on a remote host I've SSH'd into. It would be even better if I could "nickname" the keys so that I don't even have to specify the full path.
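    For reference, the dismissed workaround is lighter than it sounds, because only the public halves need to exist on disk: IdentityFile accepts a .pub file when the matching private key lives in the agent, and the agent itself can emit all the public keys. A sketch (the key name is an example):

        # materialize the public halves from the agent (no private keys touched)
        ssh-add -L > ~/.ssh/agent_keys.pub     # all of them, one per line
        # or point a single host at one public file
        ssh -o IdentitiesOnly=yes -i ~/.ssh/work_key.pub user@host

    This works with forwarded agents too, since the remote ssh only needs the public file to ask the agent for a signature.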


  • IIS6: How to troubleshoot a 404 error in an ASP.NET application?

    - by Tomalak
    I have an ASP.NET application on Windows Server 2003/IIS6 that refuses to run for some reason (it's the Xerox CentreWare Web, if that info helps). It has been working flawlessly on this server before, though. Now, all I get if I try to open the app homepage (http://some.intranet.server/XeroxCentreWareWeb/) is a "404 - File or directory not found" error. The app is configured to run in its own app pool, which runs as Network Service. The Network Service account has read access to the configured directory. If I stop the app pool, I get the expected "Service Unavailable" message, meaning the app and its pool are wired correctly. I tried to track down any file permission issues with procmon; nothing to be seen. There isn't even an access to the web app directory happening when the page loads. Interestingly, according to procmon, the web server accesses the 401-2 custom error file ("Logon failed due to server configuration") first, but then decides to send the 404 down to the client. EDIT: The app runs with Windows-integrated authentication. Regular users have access to the app directory as well (I would have noticed file system "ACCESS DENIED" messages in procmon if there had been any). This makes me think there is some kind of weird permission problem occurring even before the application files are accessed. I just have no idea where to look. I've tried running the app pool as Local System for a test, but to no avail. What else could I check in this case?
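    A 404 with no file access at all is the signature of IIS 6 refusing to hand the request to the handler in the first place, and the most common cause is the ASP.NET web service extension being set to Prohibited (IIS 6 deliberately reports that as 404 rather than 403). Checking and enabling it is quick; a sketch assuming the app targets .NET 2.0:

        rem list extension files and their allowed/prohibited status
        cscript %SystemRoot%\system32\iisext.vbs /ListFile

        rem enable the ASP.NET 2.0 ISAPI if it shows as prohibited (0)
        cscript %SystemRoot%\system32\iisext.vbs /EnFile %SystemRoot%\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll

    It is also worth confirming that the application mapping for .aspx in the site's properties points at the same aspnet_isapi.dll version the application expects.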


  • how to authenticate once for multiple servers, using only apache configs?

    - by Wang
    My problem is, I have a number of prepackaged web apps (a print system, a wiki, a bug tracker, an email archive, etc.) running on different Mac OS X Leopard (soon to be Snow Leopard) servers that each need to authenticate users from the internet at large. Right now every server presents an Apache basic authentication prompt, which takes a shared login, but it's apparently enough of an inconvenience to log in repeatedly that people are sending email without checking the wiki or bug tracker or archive. In the case of the bug tracker, a user might need to log in twice: once for Apache, if he hasn't used any other protected service on that server, and once for the bug tracker itself, so it can distinguish different people. Since the only common component to all these apps is Apache 2 itself, does it have any way of authenticating a user once, in some way that will be respected by other servers and various web apps? I looked at http://serverfault.com/questions/32421/how-is-session-stickiness-achieved-across-multiple-web-servers but it sounds like the answer assumes I get to write my own web app. I also looked at Ian Bicking's blog, but it's four years old and recommends something available only for Apache 1.3, not Apache 2. (Sorry not to hyperlink the second site; apparently I need 10 reputation points.) Edit: Shibboleth does what I need, but I should have specified that I'm looking for a really dumb, really simple solution for in-house services that need to handle all of a dozen users, probably not more than three at a time.
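    At the "really dumb" end of the spectrum, mod_auth_tkt fits this shape: one login CGI sets a signed cookie scoped to the shared DNS domain, and every Apache 2 server carrying the same secret honors it, with no per-app changes. A rough sketch of the pieces (directive names follow mod_auth_tkt's documentation; all values here are assumptions to adapt):

        # same secret on every server
        TKTAuthSecret "long-random-shared-secret"

        <Location />
            AuthType None
            Require valid-user
            TKTAuthCookieName auth_tkt
            TKTAuthDomain .example.com
            TKTAuthLoginURL https://auth.example.com/login.cgi
            TKTAuthTimeout 28800
        </Location>

    The apps behind it still keep their own notion of who the user is, but the repeated Basic-auth prompts across hosts go away.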


  • IIS 6 getting "Page Not Found" after applying SSL

    - by Dominic Zukiewicz
    I am setting up SSL certificates on a development environment using IIS 6 on Windows Server 2003. I have a directory called login with a single page, login.asp, which I would like viewable only over SSL. Before installing or applying SSL permissions, the page is viewable through a browser: I can browse the page, it redirects, etc., and all is good. However, Basic authentication is only Base64-encoded, so I want to secure the traffic from this page only. I have created a dummy certificate with makecert, installed it, and added it to IIS. IIS is happy that it is trusted. I have selected the login directory and its child files and set "Require secure channel (SSL)". When I refresh my browser on login/login.asp, I get a "404: Page Not Found" in IE 8. So there are two issues here: (1) the page is now unviewable even when using HTTPS, and users must manually type the HTTPS (a minor inconvenience for now); (2) if I turn off "Require secure channel" in IIS, it works again. What part of the process am I missing? I have followed several tutorials on installing SSL certificates but still come up against this barrier.
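    Two separate behaviors are probably tangled together here. Requiring SSL makes plain http:// requests return a 403.4 error by design (IE 8's friendly error pages can disguise the real status), and https:// only works if the site actually has an SSL binding on port 443. Both are easy to verify on the server (site ID 1 is an assumption; adsutil lives in Inetpub\AdminScripts):

        cd %SystemDrive%\Inetpub\AdminScripts
        cscript adsutil.vbs get W3SVC/1/SecureBindings

        rem the W3C logs record status and substatus (403 4 = "SSL required")
        dir %SystemRoot%\system32\LogFiles\W3SVC1

    If SecureBindings comes back empty, the certificate was installed but never bound, which would explain the dead HTTPS URL. The usual cure for the manual-typing inconvenience is a custom 403.4 error page that redirects the browser to the https:// equivalent.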


  • SFTP ChRoot result in broken pipe

    - by Patrick Pruneau
    I have a website to which I want to add some restricted access in a sub-folder. For this, I've decided to use chroot with SFTP (I mostly followed this link: http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/). So far, I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this:

        -rw-r--r--  1 root root   27 2012-02-01 14:23 index.html
        -rw-r--r--  1 root root   21 2012-02-01 14:24 info.php
        drwx------ 15 root root 4096 2012-02-25 00:31 magento

    As you can see, I've chowned the magento folder to root:root, because I wanted to jail the user in (and everything else, by the way). Inside the magento folder, I chowned everything to sio2104:magento so they can access what they want. Finally, I've added this to the sshd_config file:

        #Subsystem sftp /usr/lib/openssh/sftp-server
        Subsystem sftp internal-sftp
        Match Group magento
            ChrootDirectory /usr/share/nginx/www/magento
            ForceCommand internal-sftp
            AllowTCPForwarding no
            X11Forwarding no
            PasswordAuthentication yes
        #UsePAM yes

    And the result is... well, I can enter my login and password, and then it all ends with a "broken pipe" error:

        $ sftp [email protected]
        [....some debug....]
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to 10.20.0.50 ([10.20.0.50]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        Write failed: Broken pipe
        Connection closed

    Verbose mode gives nothing to help. Anyone have an idea of what I've done wrong? If I log in over ssh or sftp with my personal user, everything works fine.
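    The directory listing is the likely culprit: sshd's chroot checks pass (root-owned, not group-writable), but mode 700 on magento means sio2104 has no read or execute permission on what becomes / after the chroot, so internal-sftp dies immediately and the client only sees the broken pipe. Two things to try:

        # let the jailed user traverse/read the jail root (contents keep their own perms)
        sudo chmod 755 /usr/share/nginx/www/magento

        # sshd logs the real reason for chroot failures here
        sudo tail /var/log/auth.log

    If auth.log instead shows "fatal: bad ownership or modes for chroot directory", a parent directory is being rejected: every component of the ChrootDirectory path must be root-owned and not writable by group or others.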


  • Securely executing system commands as sudo from PHP

    - by Aydin Hassan
    Is it possible? I have written a command-line tool in PHP for creating new environments for our company. It creates system users, directories, databases, vhosts and restarts Apache, amongst other things. These commands require sudo privileges. I thought it might be a nice idea to have a web interface for it, to make it easier for other non-developers to use. The web app would be behind authentication. When running from the command line I just run sudo tool.php; obviously I can't do this from a web app. How could I do this securely? Giving the Apache user sudo access seems silly, as this would mean all sites hosted on the box (i.e. all our environments) would have sudo access. Is it possible to make this tool run under a different user? Could that user have sudo privileges for only the commands I need? How do things like Plesk and cPanel do this? Any thoughts?
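    sudo can scope privileges exactly that narrowly: a rule can allow one account to run one command as one target user, with no password and no shell. A sketch of the pattern (user names and path are made up):

        # /etc/sudoers.d/envtool  (edit with visudo -f)
        # the web server user may run only this script, only as the envadmin user
        www-data ALL=(envadmin) NOPASSWD: /usr/bin/php /usr/local/bin/tool.php

    Locking the rule to an absolute command line matters, since a writable script or an argument wildcard would hand back full root. The other common design, roughly what Plesk and cPanel do, avoids sudo from the web process entirely: the web app writes a job request to a queue (a directory or database table), and a small root-owned daemon or cron task validates each request and performs the privileged work, so the web-facing code never holds elevated rights.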


  • Permissions Required for Sharepoint Backups

    - by Wyatt Barnett
    We are in the process of rolling out an extranet for some of our partners, using WSS 3.0 as the platform. We already use it internally for a variety of things, and we are using the following PowerShell script to back up the server:

        param(
            $url = "http://localhost",
            $backupFolder = "c:\"
        )
        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site = new-Object Microsoft.SharePoint.SPSite($url)
        $names = $site.WebApplication.Sites.Names
        foreach ($name in $names) {
            $n2 = ""
            if ($name.Length -eq 0) {
                $n2 = "ROOT"
            } else {
                $n2 = $name
            }
            $tmp = $n2.Replace("/", "_") + ".sbk"
            $saveas = ""
            if ($backupFolder.Length -eq 0) {
                $saveas = $tmp
            } else {
                $saveas = join-path -path $backupFolder -childPath $tmp
            }
            $site.WebApplication.Sites.Backup($name, $saveas, "true")
            write-host "$n2 backed up to $saveas."
        }

    This script works perfectly on the current installation, running as our domain backup user. On the new box, it fails when run as the backup user, claiming "The web application located at http://extranet/ could not be found." That URL does, in fact, work, so I'm fairly certain it isn't anything that dumb; rather, it is some permissions issue, especially because the script works perfectly when executed from my own security context. I have tried making the backup user a farm owner, as well as adding him to the various site collection admin groups on the extranet. The one major difference between the extranet and the intranet server is that the extranet has an alternate access mapping (for https://xnet.example.com) and also uses forms authentication for that mapping. Anyhow, what permissions (or other voodoo) do I need to set up to get this script to work properly?
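    "Web application ... could not be found" from the object model usually means the account can't read the farm configuration database, not that the URL is wrong: SPSite resolution goes through the config DB before any site-collection permission is consulted. So the grants to compare between your account and the backup user are the SQL-side ones on the config and content databases. A quick way to confirm it is object-model access rather than the AAM is to run the built-in backup as the same account, which needs the same rights (paths are examples):

        rem WSS 3.0 site collection backup via stsadm, same account, same box
        "%CommonProgramFiles%\Microsoft Shared\web server extensions\12\BIN\stsadm.exe" -o backup -url http://extranet -filename C:\backups\extranet.sbk

    If stsadm fails the same way, grant the backup account access to the configuration and content databases (membership in the WSS_Content_Application_Pools database role is the usual mechanism) and retry the script.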


  • Slow boot for OS and external devices

    - by Derek Van Cuyk
    I have been having this problem intermittently, but as of yesterday it has become more consistent. It originally started when I rebooted my PC at home and the OS (Windows 8) sat in a loop, appearing to do nothing while loading. I figured that since this was a new installation, something may have just become corrupted, and I decided to reinstall. So I tried to boot off of the thumb drive which held the installation ISO and encountered pretty much the same issue. Same with the DVD drive. So I rebooted once again and left it to load the entire night, just to see if it ever would, and sure enough, this morning Windows had finally loaded. Authentication had the same problem, albeit not quite as long (it took about 5 minutes to authenticate). However, once I was in, everything appeared to be working fine and as quick as normal, with the exception of scanning the C drive for errors, which ran unbearably slowly (it had run for 45 minutes before I left for work and still had not finished scanning a 64GB SSD). I should mention that I have had this issue before, though never when loading the OS: it occurred when trying to install Windows 7 from a different DVD drive than the one I have now. It took me about 3 hours, since I had to wait sometimes 30+ minutes for each step to finish processing. Does anyone have an idea as to what can cause this? I am assuming it is the motherboard, since it is responsible for communication with all the devices I'm having issues with, but I cannot find anyone else who has had a problem like this and don't want to drop more money on a motherboard if it isn't the problem. Hardware:

        Motherboard: Asus M4A78T-E, Socket AM3, AMD 790GX, Hybrid CrossFireX
        Hard drive: Kingston SSDNow V+180 64GB Micro SATA II 3Gb/s 1.8-inch SSD (SVP180S2/64G)
        Optical drive: Samsung internal 12x Blu-ray combo (BD-readable, DVD-writable) with LightScribe (SH-B123L/BSBP)

    Thanks, Derek

