Search Results

Search found 14267 results on 571 pages for 'security certificate'.

  • Using client certificates with wget

    - by Doc
    I cannot get wget to use a client certificate. The documentation describes the --certificate flag, and its use is clear: I point it at the PEM version of the client certificate. But when I connect I get the following error: HTTP request sent, awaiting response... Read error (error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure; error:140940E5:SSL routines:SSL3_READ_BYTES:ssl handshake failure) in headers. Giving up. An ssl handshake failure means the client did not supply a correct client cert, yet the same client cert works in a browser. Note: when I disable client authentication on the server, wget can connect. Note: switching to curl has been suggested, but I'd like to avoid that.
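
    A minimal sketch of the kind of invocation involved, assuming the private key is not bundled into the PEM file (the paths and URL are placeholders):

        # supply the client certificate and its private key explicitly
        wget --certificate=client.pem --certificate-type=PEM \
             --private-key=client.key --private-key-type=PEM \
             --ca-certificate=ca.pem \
             https://example.com/protected/

    If --private-key is omitted and the PEM file does not also contain the key, wget cannot complete the client-auth handshake, which can produce exactly the alert quoted above.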

    Read the article

  • Bad certificate error with RabbitMQ using SSL

    - by David Tinker
    I am trying to get RabbitMQ working with SSL on a couple of Gentoo servers. I get the following error in /var/log/rabbitmq/[email protected] when I try to connect to the management console using https: SSL: certify: ssl_connection.erl:1641:Fatal error: bad certificate I followed the instructions here: http://www.rabbitmq.com/ssl.html The annoying thing is that I have two cloned servers, and it works on one but not the other. As far as I can tell the machines are configured identically; I wrote a script to generate the certs and have run it on both. I am not using client certificates. Does anyone know how I can figure out what's wrong with my certificate(s)? I am using Erlang 15.2, RabbitMQ 2.7.9, and OpenSSL 0.9.8k.
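
    One way to compare a working and a failing broker is to look at what each actually presents during the TLS handshake. A hedged debugging sketch (5671 is RabbitMQ's conventional TLS port; substitute your real hostname, port, and file names):

        # show the chain the server sends, plus any verification errors
        openssl s_client -connect broker.example.com:5671 -CAfile cacert.pem

        # compare fingerprints, validity dates, and subjects on both machines
        openssl x509 -in server_certificate.pem -noout -fingerprint -dates -subject

    A mismatch in the chain, the validity window, or the subject between the two "identical" servers usually points at the bad certificate.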

    Read the article

  • How to set up an SSL Cert with Subject Alternative Name

    - by Darren Oster
    To test a specific embedded client, I need to set up a web server serving a couple of SSL (HTTPS) sites, say "main.mysite.com" and "alternate.mysite.com". These should be handled by the same certificate, with a Subject Name of "main.mysite.com" and a Subject Alternative Name of "alternate.mysite.com". This certificate needs to be in an authority chain back to a 'proper' CA (such as GoDaddy, to keep the cost down). My question is, are there any good tutorials on how to do this, or can someone explain the process? What sort of parent certificate do I need to purchase from the CA provider? My understanding of SSL certificates is limited, but as Manuel said in Fawlty Towers, "I learn...". I'm happy to work in Windows (IIS) or Linux (Apache) (or even OSX, for that matter). Thanks in advance.
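
    For reference, a hedged sketch of producing a CSR that carries both names, so the CA can issue one certificate covering both hosts (file names are placeholders; the domains are from the question, and CAs generally take the SAN list from their order form rather than from the CSR, so check with the provider):

        # san.cnf -- request config adding a subjectAltName extension
        [ req ]
        distinguished_name = req_distinguished_name
        req_extensions     = v3_req
        [ req_distinguished_name ]
        [ v3_req ]
        subjectAltName = DNS:main.mysite.com, DNS:alternate.mysite.com

        # generate a key and the CSR to submit to the CA
        openssl req -new -newkey rsa:2048 -nodes -keyout mysite.key \
            -subj "/CN=main.mysite.com" -out mysite.csr -config san.cnf

    Commercial CAs usually sell this as a "multi-domain" (SAN/UCC) certificate rather than a special parent certificate type, so an ordinary server certificate ordered with the extra name is typically all that's needed.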

    Read the article

  • Convert svn repository to hg - authentication fails

    - by Kim L
    I'm trying to convert an existing svn repository to a Mercurial repo with the following command: hg convert <repository> <folder> My problem is that authentication against the svn repository is done with p12 certificates, and I'm a bit lost on how to configure the certificate for the hg client so that I can pull the svn repo and convert it. Currently, running the above command gives me: initializing destination hg-client repository abort: error: _ssl.c:480: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure In other words, it cannot find the required certificate. How do I configure my hg client so that it uses my certificate? I'm using the command-line hg client on Linux.
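
    A hedged sketch of one route, assuming the p12 bundle can be unpacked into PEM files that Mercurial's [auth] section can reference (the paths and host are placeholders):

        # split the p12 bundle into a certificate and an unencrypted key
        openssl pkcs12 -in client.p12 -out client.pem -clcerts -nokeys
        openssl pkcs12 -in client.p12 -out client.key -nocerts -nodes

        # ~/.hgrc -- present the client certificate for this host
        [auth]
        svnrepo.prefix = svn.example.com
        svnrepo.cert = ~/certs/client.pem
        svnrepo.key = ~/certs/client.key

    One caveat: hg convert's svn source typically goes through the Subversion bindings rather than Mercurial's own HTTP client, in which case the certificate belongs in ~/.subversion/servers (ssl-client-cert-file) instead of hgrc.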

    Read the article

  • How do I set up an SFTP user to log in with a password to an EC2 Ubuntu server?

    - by Doron
    Hello, I have an Ubuntu server running on an EC2 instance. To log in to that server I use a certificate file without any password. I've installed and configured vsftpd and created a user (let's call him "testuser") with /bin/false as his shell, so he can only connect via sftp and upload/access files in his home directory. However, when I try to connect to the server from my computer by running sftp testuser@my-ec2-server I get "Permission denied (publickey). Connection closed" messages, so I can't log in. How can I remove the certificate requirement for this user only (meaning the "ubuntu" user will still have to use the certificate file to log in via ssh), so that normal sftp clients can connect using a username and a password? Thank you. PS: Using the official Ubuntu Server 10.10 AMI from Canonical, 64-bit, on a micro instance.
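
    A hedged sketch of the usual sshd-side change, assuming password logins are disabled globally in /etc/ssh/sshd_config (the EC2 Ubuntu default) and should be re-enabled only for this account:

        # /etc/ssh/sshd_config -- keys stay mandatory by default ...
        PasswordAuthentication no

        # ... except for the sftp-only user
        Match User testuser
            PasswordAuthentication yes

    Then give the account a password with sudo passwd testuser and reload sshd (sudo service ssh reload). Note that sftp is served by OpenSSH's sftp subsystem, not by vsftpd, so it is sshd's configuration that decides whether a password is accepted; the Match block (OpenSSH 4.4+) scopes the exception to the single user.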

    Read the article

  • Extract Certs from Apache

    - by user271619
    Recently I had to uninstall a single self-signed SSL certificate from one of my Apache boxes at the request of an outside party. That wasn't really a problem for me, since it was easy. What confuses me is how they knew I had a self-signed certificate: the domain I provided them was not related to the domain with the self-signed certificate. Does this mean Apache publicizes the virtual hosts in the httpd.conf file? I asked the outside party what software they used to extract the information from my server, and they provided this GitHub link: https://gist.github.com/4ndrej/4547029 I figured I'd ask the community first, before I attempt installing the Java program.
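
    Apache doesn't publish its vhost list, but anyone who can reach port 443 can read whatever certificate the server presents, and without a matching SNI name the default vhost's certificate is the one served. A hedged sketch of the same probe using plain openssl instead of the Java tool (the hostnames are placeholders):

        # certificate presented with no SNI name (the default vhost's cert)
        openssl s_client -connect myserver.example.com:443 </dev/null 2>/dev/null \
            | openssl x509 -noout -subject -issuer

        # certificate presented for a specific name via SNI
        openssl s_client -connect myserver.example.com:443 \
            -servername other.example.com </dev/null 2>/dev/null \
            | openssl x509 -noout -subject -issuer

    If the self-signed certificate backs the default vhost, any scanner will see it no matter which domain it was originally given.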

    Read the article

  • SSL Certificate only works when session active in Server 2008

    - by CodeMonkey1
    I have a web app that uses an installed certificate to send a web request to a 3rd-party web service. This worked for a long time on Windows Server 2003, but we recently found a problem with it on 2008 installations. When logged into the server as the same user the app pool uses, either locally or via remote desktop, the web app and its secure 3rd-party request work fine. However, when there are no user sessions open, the 3rd-party request fails, as if the certificate were not attached to the web request. Any ideas?
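
    This symptom often means the certificate's private key sits in the user's certificate store, which is only readable while that user's profile is loaded; an interactive or remote desktop session loads the profile as a side effect. A hedged sketch of moving the certificate to the machine store instead (the path and PFX password are placeholders):

        rem import into the LocalMachine\My store so no logon session is required
        certutil -f -p "PfxPassword" -importpfx "C:\certs\client.pfx"

        rem verify it landed in the machine store
        certutil -store My

    On IIS 7 the app pool's "Load User Profile" setting can paper over the same issue for user-store certificates, but the machine store is the usual fix for unattended requests.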

    Read the article

  • Two SSL certificates required for two Apache servers using mod_proxy to serve HTTPS?

    - by Nick
    Our application originally used a single Apache server with mod_perl installed to serve all HTTPS requests. Due to memory issues I've added a lighter Apache installation and used ProxyPass to hand the Perl requests off to the mod_perl-enabled server. We currently have an SSL certificate installed on the mod_perl server, but I'm struggling to understand whether we need a certificate for both servers or only for the lightweight server that receives the original requests. Or can one certificate be used for more than one server on a single machine? Thanks in advance for any help/pointers.
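
    Assuming the lightweight front end terminates HTTPS and proxies to the mod_perl backend over plain HTTP on the loopback interface, only the front end needs the certificate, because the browser only ever performs a TLS handshake with it. A hedged configuration sketch (the hostname and port are placeholders):

        # front-end Apache: terminates TLS, hands Perl requests to the backend
        <VirtualHost *:443>
            ServerName app.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/app.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/app.example.com.key
            ProxyPass        /perl/ http://127.0.0.1:8081/perl/
            ProxyPassReverse /perl/ http://127.0.0.1:8081/perl/
        </VirtualHost>

    A second certificate only enters the picture if the hop between the two servers is itself encrypted (an https:// ProxyPass target with SSLProxyEngine on), and even then the same certificate could be reused, since both servers answer for the same hostname.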

    Read the article

  • Blackberry Security Wipe

    - by GavinR
    What does a BlackBerry "Security Wipe" (Options > Security Options > Security Wipe > "Emails, Contacts, etc.") do? a) If I have an Enterprise Activation with my employer, will a security wipe remove it? b) Will my phone still ring when my number is called, or do I have to re-activate with my carrier?

    Read the article

  • Can't upgrade Ubuntu 9.x to 12.04

    - by andrej spyk
    I can't upgrade an old Ubuntu 9.10 install to a newer release. If I check for upgrades it says: Could not download all repository indexes

        Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/main/binary-i386/Packages  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/restricted/binary-i386/Packages  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/main/source/Sources  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/restricted/source/Sources  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty/main/binary-i386/Packages  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty/restricted/binary-i386/Packages  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty/main/source/Sources  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/universe/binary-i386/Packages  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/universe/source/Sources  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty/restricted/source/Sources  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty/universe/binary-i386/Packages  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty/universe/source/Sources  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/multiverse/binary-i386/Packages  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty/multiverse/binary-i386/Packages  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty/multiverse/source/Sources  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty-updates/main/binary-i386/Packages  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty-updates/restricted/binary-i386/Packages  404 Not Found
        Failed to fetch http://security.ubuntu.com/ubuntu/dists/jaunty-security/multiverse/source/Sources  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty-updates/main/source/Sources  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty-updates/restricted/source/Sources  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty-updates/universe/binary-i386/Packages  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty-updates/universe/source/Sources  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty-updates/multiverse/binary-i386/Packages  404 Not Found
        Failed to fetch http://cz.archive.ubuntu.com/ubuntu/dists/jaunty-updates/multiverse/source/Sources  404 Not Found

        Some index files failed to download; they have been ignored, or old ones used instead.

    How can I upgrade if I can't burn a new CD?
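
    Jaunty's repositories were moved off the main mirrors when the release reached end of life, which is why every index now returns 404. A commonly suggested fix, sketched here under the assumption that the release has been archived to old-releases.ubuntu.com, is to repoint apt and upgrade one release at a time:

        # point every archive/security mirror at the end-of-life archive
        sudo sed -i \
            -e 's#[a-z]*\.\?archive\.ubuntu\.com#old-releases.ubuntu.com#g' \
            -e 's#security\.ubuntu\.com#old-releases.ubuntu.com#g' \
            /etc/apt/sources.list
        sudo apt-get update
        sudo do-release-upgrade

    There is no supported jump straight to 12.04: upgrades chain release by release (9.04 -> 9.10 -> 10.04 LTS -> 12.04 LTS), so a fresh install from a USB stick is often less work than chaining several end-of-life upgrades.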

    Read the article

  • Is Windows Server 2008R2 NAP solution for NAC (endpoint security) valuable enough to be worth the hassles?

    - by Warren P
    I'm learning about Windows Server 2008 R2's NAP features. I understand what network access control (NAC) is and what role NAP plays in it, but I would like to know what limitations and problems it has that people wish they had known before rolling it out. Secondly, I'd like to know if anyone has had success rolling it out in a mid-size environment (a multi-city corporate network with around 15 servers and 200 desktops) where most (99%) of the clients run Windows XP SP3 or newer (Vista and Win7). Did it work with your anti-virus? (I'm guessing NAP works well with the big-name anti-virus products, but we're using Trend Micro.) Let's assume the servers are all Windows Server 2008 R2; our VPNs are Cisco gear and have their own NAC features. Has NAP actually benefited your organization, and was it wise to roll out, or is it yet another of the many things Windows Server 2008 R2 can do that you probably won't want to use? In what particular ways might the built-in NAP solution be the best choice, and in what ways might no solution at all (the status quo pre-NAP) or a third-party endpoint security or NAC solution be a better fit? I found an article where a panel of security experts in 2007 said NAC is maybe "not worth it". Are things better now, in 2010, with Windows Server 2008 R2?

    Read the article

  • Is encryption really needed for having network security? [closed]

    - by Cawas
    I welcome better keywording here, both on tags and title. I'm trying to conceive of a free, open, and secure network environment that would work anywhere, from big enterprises to small home networks of just one machine. Since wireless access points are the biggest, if not the only, true weak point of a local area network (let's not consider every other security aspect of having internet access), there are basically two things to consider here: having an open AP for anyone to use the internet through, and leaving the whole LAN open for guests to easily read (only) files on it, and even giving them a place to drop files. Considering these two aspects, once everything is done properly, what is the more secure option: having that, or having just an encrypted, password-protected wifi? Of course "both" would seem more secure, but the difference shouldn't actually be anything substantial. I've always had the feeling that using any of the so-called "wireless security" methods is actually bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering. I understand it's a natural way of thinking: with cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical sense. In a way, encrypting is for wireless what building walls is for cables, and requiring a pass-phrase is adding a door with a key. So, what do you think?

    Read the article

  • How to disable irritating Office File Validation security alert?

    - by Rabarberski
    I have Microsoft Office 2007 running on Windows 7. Yesterday I updated Office to the latest service pack, i.e. SP3. This morning, when opening an MS Word document (.doc format, a document I created myself some months ago), I was greeted with a new dialog box saying: Security Alert - Office File Validation WARNING: Office File Validation detected a problem while trying to open this file. Opening this is probably dangerous, and may allow a malicious user to take over your computer. Contact the sender and ask them to re-save and re-send the file. For more security, verify in person or via the phone that they sent the file. ...followed by two links to some Microsoft blah-blah webpage. Obviously the document is safe, as I created it myself. How do I disable this irritating dialog box? (As a side note, a rhetorical question: will Microsoft never learn? I consider myself a power user in Word, but I have no clue what could be wrong with my document that makes it count as dangerous, let alone what more basic Word users are supposed to make of it. Sigh...)
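
    The widely documented off switch is a per-application registry value; a hedged sketch for Word 2007 (setting EnableOnLoad back to 1, or deleting the value, re-enables validation):

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Word\Security\FileValidation]
        "EnableOnLoad"=dword:00000000

    Excel and PowerPoint have matching keys under their own 12.0 branches. Because the warning fires on older binary .doc files that trip the validator's heuristics, re-saving the document (for instance as .docx) is the per-document alternative.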

    Read the article

  • SharePoint extranet security concerns, am I right to be worried?

    - by LukeR
    We are currently running MOSS 2007 internally, and have been doing so for about 12 months with no major issues. There has now been a request from management to provide access from the internet for small groups (initially) composed of members from other community organisations like ours: committees and the like. My first reaction to this request was not joy; however, I'd like to make sure the apprehension is warranted. I have read a few docs on TechNet about security hardening with regard to SharePoint, but I'm interested to know what others have done. I've spoken with another organisation that has already implemented something similar, and they essentially port-forwarded from the internet to their internal production MOSS server. I don't really like the sound of that. Is it advisable/necessary to run a DMZ-type configuration, with a separate web front end on a contained network segment? Does that even offer me any greater security than their setup? Some of the configurations from the TechNet docs aren't really feasible given our current network budget. I've already made my concerns known to management, but it appears this will go ahead in some form or another. I'm tempted to run a completely isolated, separate install just for these types of users. Should I even be concerned about it? Any thoughts or comments would be most welcome at this point.

    Read the article

  • Security implications of adding www-data to /etc/sudoers to run php-cgi as a different user

    - by BMiner
    What I really want to do is allow the 'www-data' user to launch php-cgi as another user, and I want to make sure that I fully understand the security implications. The server should support a shared hosting environment where various (possibly untrusted) users have chroot'ed FTP access to the server to store their HTML and PHP files. Since PHP scripts can be malicious and read/write others' files, I'd like to ensure that each user's PHP scripts run with that user's own permissions (instead of running as www-data). Long story short, I have added the following line to my /etc/sudoers file, and I wanted to run it past the community as a sanity check:

        www-data ALL = (%www-data) NOPASSWD: /usr/bin/php-cgi

    This line should only allow www-data to run a command like this (without a password prompt):

        sudo -u some_user /usr/bin/php-cgi

    ...where some_user is a user in the group www-data. What are the security implications of this? This should then allow me to modify my Lighttpd configuration like this:

        fastcgi.server += ( ".php" =>
            (( "bin-path" => "sudo -u some_user /usr/bin/php-cgi",
               "socket" => "/tmp/php.socket",
               "max-procs" => 1,
               "bin-environment" => (
                   "PHP_FCGI_CHILDREN" => "4",
                   "PHP_FCGI_MAX_REQUESTS" => "10000"
               ),
               "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
               "broken-scriptfilename" => "enable"
            ))
        )

    ...allowing me to spawn new FastCGI server instances for each user.

    Read the article

  • Using Plesk for webhosting on Ubuntu - Security risk or reasonably safe?

    - by user66952
    Sorry for this newb question; I'm pretty clueless about Plesk and only have limited Debian (without Plesk) experience. If the question is too dumb, telling me how to ask a smarter one, or what kind of info I should read first to improve it, would be appreciated as well. I want to offer a program for download on my website, hosted on an Ubuntu 8.04.4 VPS using Plesk 9.3.0 for web hosting. I have limited ssh access to the server to key-only. 1. When setting up the web hosting, Plesk created an FTP login and user; is that a potential security risk that could bypass the key-only access? 2. I think Plesk itself (even without the FTP user account), through its web interface, could be a risk. Is that correct, or are my concerns exaggerated? Would you say this setup makes a difference if I'm just using it for the next two weeks and then change servers to a system where I know more about security? 3. In other words, is one less likely to get hacked within the first two weeks of having a new site up and running than in weeks 14 and 15 (due to appearing in fewer search results in the beginning, perhaps, or for whatever reason...)?

    Read the article

  • CIFS - Default security mechanism requested (Mounted Share)

    - by André Faria
    The following message appears every time I reboot my Ubuntu 12.04.1 machine: CIFS VFS: default security mechanism requested. The default security mechanism will be upgraded from ntlm to ntlmv2 in kernel release 3.3 I'm searching for a solution, if there is one, because I really don't understand the message. This is the relevant line from my fstab:

        //192.168.0.10/D$/ /mnt/winshare/ cifs user,file_mode=0777,dir_mode=0777,rw,gid=1000,credentials=/root/creds 0 0

    I can use the mounted folder with no problem; I just want to know why this message appears and whether there is something I can do to fix the problem or hide the warning. Thanks
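
    The message is informational rather than an error: no sec= option was given, so the kernel applied its default security mechanism and warns that this default changes in 3.3. A hedged sketch of silencing it by pinning the mechanism explicitly (the same fstab line with sec=ntlmv2 added; fall back to sec=ntlm if the server rejects NTLMv2):

        //192.168.0.10/D$/ /mnt/winshare/ cifs user,sec=ntlmv2,file_mode=0777,dir_mode=0777,rw,gid=1000,credentials=/root/creds 0 0

    Unmount and remount the share (or reboot) for the new option to take effect.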

    Read the article
