Search Results

Search found 6253 results on 251 pages for 'apache2 ssl'.

Page 228/251

  • Secure password transmission over unencrypted tcp/ip

    - by academicRobot
    I'm in the design stages of a custom TCP/IP protocol for mobile client-server communication. When it's not required (the data is not sensitive), I'd like to avoid using SSL, for overhead reasons (both handshake latency and conserving cycles). My question is: what is the best-practice way of transmitting authentication information over an unencrypted connection? Currently I'm liking SRP or J-PAKE (they generate secure session tokens, are hash/salt friendly, and allow kicking into TLS when necessary), which I believe are both implemented in OpenSSL. However, I am a bit wary, since I don't see many people using these algorithms for this purpose. I would also appreciate pointers to any materials discussing this topic in general, since I had trouble finding any.
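
    The simplest scheme in this family, a salted-hash challenge-response, can be sketched in a few lines. This is a generic illustration, not SRP or J-PAKE (it yields no session key and no mutual authentication, and a stolen verifier database allows impersonation, which is exactly what SRP-style verifiers avoid); all names and the hard-coded secret here are made up:

        <?php
        // Server stores a salted hash of the password, never the password itself.
        $salt     = 'per-user-random-salt';
        $verifier = hash('sha256', $salt . 'the-password');

        // Step 1 - server sends a fresh random nonce (plus the salt) to the client.
        $nonce = bin2hex(random_bytes(16));

        // Step 2 - client recomputes the verifier from the typed password and salt,
        // then proves knowledge of it by keying an HMAC on the nonce.
        $clientProof = hash_hmac('sha256', $nonce, hash('sha256', $salt . 'the-password'));

        // Step 3 - server recomputes the expected proof and compares in constant time.
        $ok = hash_equals(hash_hmac('sha256', $nonce, $verifier), $clientProof);
        var_dump($ok); // bool(true); the password itself never crossed the wire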


  • UrlRewriteFilter redirect to https

    - by adi
    I am using UrlRewriteFilter to redirect to SSL. I am running Glassfish v2. My rule looks something like this now; it's in my urlrewrite.xml in WEB-INF of my war folder. Is there any other Glassfish setting that needs to be set?

        <rule>
          <condition name="host" operator="notequal">https://abc.def.com</condition>
          <condition name="host" operator="notequal">^$</condition>
          <from>^/(.*)</from>
          <to type="permanent-redirect" last="true">https://abc.def.com/ghi/$1</to>
        </rule>

    But Firefox keeps saying that the URL is redirecting in a way that will never complete. I am not exactly sure what's happening here. Any ideas?


  • What difference does it make to use several script blocks on a web page?

    - by Jan Aagaard
    What difference does it make to use more than one script block on a web page? I have pasted in the standard code for including Google Analytics as an example, and I have seen the same pattern used in other places. Why is this code separated into two script blocks instead of just using a single one?

        <script type="text/javascript">
          var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
          document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
        </script>
        <script type="text/javascript">
          try {
            var pageTracker = _gat._getTracker("UA-xxxxxx-x");
            pageTracker._trackPageview();
          } catch(err) {}
        </script>


  • How would I change the DocumentRoot on the version of Apache that came pre-installed on my Mac OS X system?

    - by racl101
    OK, so I want to take advantage of the Apache server that comes installed on my Mac OS X system (that is, I'd rather try to use what comes bundled than install my own version of Apache), and as such, I went to change some settings in the configuration file /etc/apache2/httpd.conf. Namely, I changed these two lines:

        DocumentRoot "/Users/myusername/Sites"
        <Directory "/Users/myusername/Sites">

    so that they pointed to a folder inside my Dropbox folder (so I could have my docs sync to Dropbox):

        DocumentRoot "/Users/myusername/Dropbox/public_html"
        <Directory "/Users/myusername/Dropbox/public_html">

    That didn't work. So then I figured maybe it was too much to ask to make a folder in my Dropbox the document root, and thought: what if I make the document root another folder of my choosing, like so:

        DocumentRoot "/Users/myusername/dev-sites/public_html"
        <Directory "/Users/myusername/dev-sites/public_html">

    That didn't work either. After looking within the httpd.conf file for clues, it seems that only two directories work as document root paths for the Apache that comes bundled with Mac OS X: /Users/myusername/Sites (or ~/Sites) and /Library/WebServer/Documents/. Trying to use any other directory didn't seem to work; I would get 403 errors in my browser. I was wondering if there are other settings to change in httpd.conf, or permissions to set, to make this work. Any help would be appreciated, and many thanks in advance.


  • Viewing local websites on my iOS device over Wi-fi

    - by John
    Trying to view some local html/css/js files in a mobile browser on my iOS device. Thought maybe file sharing would be an option, and it is, but I'm not completely satisfied with it: any time I try the following, an error occurs. Web sharing is on and available at http://192.168.1.101/~user, but I have to manually copy the files in. If I try to symlink a folder in, so that it could be viewed at ~user/some_dir, by issuing

        $ ln -s /Users/user/dev/some_dir ~/Sites/

    then I get a 403 Forbidden error. I've tried to remedy this by modifying a user.conf file in /private/etc/apache2/ using the following syntax:

        <Directory "/Users/user/Sites/">
          Options Indexes MultiViews SymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
        </Directory>

    but nope, it still doesn't work; I get a 403 error. If I try to symlink each individual file in instead of a directory, same error. Any help would be greatly appreciated! I'd just like to symlink directories into ~/Sites and browse them on my iOS device over wifi. I'm on OS X 10.7 Lion trying to connect with iOS 5.


  • SSL_accept hangs... sometimes (C, Linux, OpenSSL)

    - by zbigh
    I'm currently working on an embedded Linux system. There are two crucial client applications on the system that connect to an external server (on another embedded system; all written in C). The two apps use different certificates. The SSL connection works... at least usually, but from time to time an error occurs: the server hangs on SSL_accept() when accepting a connection from one of the applications - the one using the older certificates. Restarting the server application does not help, nor does restarting the client - the only way is to reboot the server system, unless I create a symbolic link to the new certificates used by the other app; only then will restarting the server app work. The error never occurs when both applications use the same, new certificate. Could this happen due to some strange OpenSSL cache or something like that?


  • grails mail connection refused

    - by mkoryak
    It seems I have tried the mail config in the way the docs describe, but I still get:

        Error 500: Executing action [x] of controller [x] caused exception:
        Mail server connection failed; nested exception is
        javax.mail.MessagingException: Could not connect to SMTP

    I am using Google Apps for my email, so [email protected] goes through Gmail. I cannot get Grails to send out a test message on my dev box (Windows 7). My config is:

        host = "smtp.gmail.com"
        port = 465
        username = "[email protected]"
        password = "x"
        props = ["mail.smtp.auth": "true",
                 "mail.smtp.debug": "true",
                 "mail.smtp.starttls.enable": "true",
                 "mail.smtp.socketFactory.port": "465",
                 "mail.smtp.socketFactory.class": "javax.net.ssl.SSLSocketFactory",
                 "mail.smtp.socketFactory.fallback": "false"]


  • Apache, Permissions, and Convenience

    - by Mike
    I'm on Mac OS X and I have apache2 installed via MacPorts, running as the _www user. I have some files I want to serve in the /Users/Me/Documents/abc folder. Right now, though, the permissions of /Users/Me/Documents are 700, so _www can't get in, even if abc is chmod 777. I recognize the following options:

    1. Allow _www access to my Documents folder.
    2. Put the files I want to share outside of my Documents folder.
    3. Hard-link the files outside of my Documents folder, and point Apache to the hard links.

    None of these solutions is acceptable to me, however. I don't feel safe allowing _www access to my entire Documents folder. I really want to keep the files in my Documents folder for other reasons. The files are changing all the time, so hard-linking would not always reflect the right file structure, and, as I understand it, you can't hard-link a directory (though, if you could, that would solve it). Any ideas for a solution? Is there a way to run a few httpd processes as my user account so they can get in there? Or is there some way to hard-link a directory, or some way to get httpd to follow a symlink past a directory that is 700 and not owned by _www? Thanks!


  • Restrict access to one SVN repository (overwrite default)

    - by teel
    I'm trying to set up our SVN server so that by default the developers group has access to all repositories, but I want to override that setting on certain repositories and allow access only to specific users (or separate groups). The current configuration is SVN + WebDAV on Apache2, with all repositories located at /var/lib/svn/. In dav_svn.authz I currently have:

        [/]
        @developers = rw
        @users = r

    Now I want to add one repository (let's call it secret_repo) that allows access to only one user, who is also a member of the developers group. I tried:

        [secret_repo:/]
        * =
        secret_user = rw

    where secret_user is the user I'd like to give access to the repository, but it doesn't seem to work. Currently the server uses Apache's LDAP module to authenticate users against our Active Directory domain, and I'd like to keep it that way if possible. Also, I seem to be able to browse all my repos freely with any web browser, which I'd like to block. A second problem is that the server runs WebSVN, which also uses Apache's LDAP authentication. Everyone who is a member of our domain can access it, so I'd like to hide secret_repo from the WebSVN listing. It's currently configured with parentPath("/var/lib/svn");. Do I really need to remove that and add every repository separately, except the ones I want to hide?


  • PHP: How to check for response code?

    - by Tom
    Hi, I'm a relative PHP newbie implementing a PayPal IPN listener, and all seems to be working fine, except I don't really know how to check for a response code. I've tried something ugly with cURL but it doesn't work at all (I'm not understanding cURL). I've tried this piece of code that I grabbed from somewhere on the net:

        $fp = fsockopen('ssl://www.sandbox.paypal.com', 443, $errno, $errstr, 30);
        $response_headers = get_headers($fp);
        $response_code = (int)substr($headers[0], 9, 3);

    ... but it's not working (it returns $response_code = 0). So right now I'm debugging my IPN code without checking for a 200 response. Can anyone more experienced advise me on the proper/simple way to check this? Thanks
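
    For reference, the snippet above returns 0 because get_headers() expects a URL string rather than a socket handle, and $headers is never defined ($response_headers is). A minimal cURL sketch that reads the status code, with the endpoint URL shown only as an example:

        <?php
        $ch = curl_init('https://www.sandbox.paypal.com/cgi-bin/webscr');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // capture the body instead of echoing it
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, 'cmd=_notify-validate');
        $body = curl_exec($ch);
        // After the request, ask cURL for the HTTP status code directly.
        $response_code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        echo $response_code; // 200 on success, 0 if the request itself failed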


  • Google App Engine says "Must authenticate first." while trying to deploy any app

    - by Oleksandr Bolotov
    Google App Engine says "Must authenticate first." when trying to deploy any app:

        me@myhost /opt/google_appengine $ python appcfg.py update ~/sda2/workspace/lyapapam/
        Application: lyapapam; version: 1.
        Server: appengine.google.com.
        Scanning files on local disk.
        Scanned 500 files.
        Scanned 1000 files.
        Initiating update.
        Email: <my_email_was_here>@gmail.com
        Password for <my_email_was_here>@gmail.com:
        Error 401: --- begin server output ---
        Must authenticate first.
        --- end server output ---

    We are getting this message with any application and under any developer account available to us. This is what we have installed:

        App Engine SDK - 1.3.2
        PIL - 1.1.7
        Python - 2.5.5
        pip - 0.6.3
        ssl - 1.15
        wsgiref - 0.1.2

    How can I fix it? Is this a well-known problem?


  • Make Codeigniter ignore directory

    - by Noah Goodrich
    I have CodeIgniter installed and working for my main site, but I am now trying to add an add-on domain to the same hosting account, so I can have two sites running on the same hosting. Add-on domains make a new folder in the main public_html folder to store the web files. How can I get CodeIgniter to ignore this directory? The site doesn't load properly when I try to view it. I have an SSL certificate on the main site too, and redirection for www URLs. Here's my .htaccess file:

        RewriteEngine on
        Options +FollowSymLinks
        RewriteBase /

        RewriteCond %{HTTP_HOST} ^www\.mysite\.co.uk$ [NC]
        RewriteRule ^(.*)$ http://mysite.co.uk/$1 [L,R=301]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /index.php/$1

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} (site|sections|here)
        RewriteRule ^(.*)$ https://%{SERVER_NAME}%{REQUEST_URI} [R=301,L]

        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !(site|sections|here)
        RewriteRule ^(.*)$ http://%{SERVER_NAME}%{REQUEST_URI} [R=301,L]

  • imap_open dies when being called

    - by blauwblaatje
    Hi, I've got the following code: a script that echoes a marker, calls imap_open(), and echoes another marker. The script dies: I get zero response, nothing from Apache, no "foo" or "bar", nothing. I can however connect to the IMAP server (nc localhost ...), and I can also put the script on another server and connect to the same IMAP server from there. So I think there's something wrong with the PHP on this server, but I can't figure out what I'm missing, forgetting, or didn't install. phpinfo() tells me PHP is configured --with-imap and --with-imap-ssl. The OS is CentOS, btw.
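
    A minimal script of the kind described, with a hypothetical mailbox spec and credentials, would look something like this; if even the failure branch never prints, the process is dying inside the IMAP client library itself:

        <?php
        error_reporting(E_ALL);   // surface anything PHP manages to report before dying
        echo "foo\n";
        // Hypothetical mailbox spec and credentials; /ssl matches the --with-imap-ssl build.
        $mbox = imap_open('{localhost:993/imap/ssl/novalidate-cert}INBOX', 'user', 'secret');
        if ($mbox === false) {
            echo 'imap_open failed: ' . imap_last_error() . "\n";
        } else {
            echo "connected\n";
            imap_close($mbox);
        }
        echo "bar\n";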


  • What does $this generally mean in PHP?

    - by xczzhh
    I am really new to PHP, so I am kind of confused seeing these different operators all day. Here is some code I came across while watching a video tutorial; I'd appreciate it if someone could explain a little bit:

        function index()
        {
            $config = Array(
                'protocol'  => 'smtp',
                'smtp_host' => 'ssl://smtp.googlemail.com',
                'smtp_port' => 465,
                'smtp_user' => '[email protected]',
                'smtp_pass' => 'password',
            );

            $this->load->library('email', $config);
            $this->email->set_newline("\r\n");

            $this->email->from('[email protected]', 'Jerry');
            $this->email->to('[email protected]');
            $this->email->subject('this is an email test');
            $this->email->message('this is test message!');

            if ($this->email->send()) {
                echo 'Your email was sent';
            } else {
                show_error($this->email->print_debugger());
            }
        }
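
    In short, $this refers to the object instance a method was called on; the snippet above is apparently a CodeIgniter controller, so $this is the controller object, and things like $this->email are libraries loaded onto it. A tiny self-contained sketch, with all names invented:

        <?php
        class Greeter
        {
            private $name;

            public function __construct($name)
            {
                // $this is the object being constructed; $this->name is its property.
                $this->name = $name;
            }

            public function greet()
            {
                // Inside any non-static method, $this is the instance the method was called on.
                return 'Hello, ' . $this->name;
            }
        }

        $g = new Greeter('Jerry');
        echo $g->greet(); // prints "Hello, Jerry"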


  • Allow SFTP in iptables

    - by Kevin Orriss
    I have just purchased a VPS from Linode and am going through the setup guide. I have everything running (apache2, php, mysql, etc.) but I am being denied access via SFTP when using FileZilla to upload a file. This is my second time installing the server, as I missed out a section the first time. I was able to connect to my server through SFTP in FileZilla the first time; the things I missed out were adding a new user and editing the iptables in the firewall. So it would seem that the guide I have been following has blocked SFTP but allowed SSH. Here is the iptables file:

        *filter

        # Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

        # Accept all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow all outbound traffic - you can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allow SSH connections
        #
        # The --dport number should be the same port number you set in sshd_config
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    All I would like is a line I need to put in there that allows SFTP over port 22. Thank you for reading this.


  • RewriteCond and RewriteRule in .htaccess

    - by RD
    I have a client folder located at http://www.example.com/client. However, I've now installed SSL on the server, and want to add a permanent redirect using .htaccess so that whenever /client is accessed, it redirects to https://www.example.com/client. Does anybody know how to do that? I've redirected my domains in the past like this:

        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]

        RewriteCond %{HTTP_HOST} ^www.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]

    This should not affect the solution, but the site must still redirect to www.example.com first, and then to https://www.example.com/client if, for example, http://www.example.co.za/client is entered.
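
    For what it's worth, the same forced redirect can also be done at the application level rather than in .htaccess; a minimal PHP sketch of that alternative, assuming every request under /client passes through a PHP entry point:

        <?php
        // Force https for anything under /client; assumes this runs early in every request.
        $isHttps = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';
        if (!$isHttps && strpos($_SERVER['REQUEST_URI'], '/client') === 0) {
            header('Location: https://www.example.com' . $_SERVER['REQUEST_URI'], true, 301);
            exit;
        }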


  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks whether a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds). Here's what we've already checked - without success:

    - Since we run our webserver behind a load balancer, I've set the Pingdom test on both the load balancer's public DNS and the webserver's public DNS, in order to find out if there's a problem with the AWS load balancer; both tests return the same result.
    - We set up Munin on our webserver. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes).
    - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
    - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries.
    - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

          touch -t 201209020000 start
          touch -t 201209020015 end
          find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!


  • apache front-end rewriting URL to different https ports?

    - by khedron
    Hi all, one of my users is having some trouble with forwarding to an internal web app from a public address. Everything worked fine for him when the situation was like this:

        front page:                  http://www.myexample.com/
        public ref to internal app:  http://www.example.com/app-8903/app.html
        secretly goes to:            http://secret.example.com:8903/app-8903/app.html

    That is to say, my user is providing the very last URL, with the port information duplicated in the URL base, and they were using that to give a public face that hid both the port and the internal machine name. You could still read the port in the URL base if you looked, but the obvious reference and machine name were hidden. Doing it this way, he could have several different instances of the application running on secret.example.com with different ports, and on the front end it just looked like the URL directory/base was changing. Now the user wants to do the same thing over https:, and the people helping him with the Apache config say it can't be done. Is that so? Without being there to tinker with the configuration myself, I'm not sure what his IT people have tried, but reading through the apache2 SSL FAQ and other docs, it seems like it should be possible to rewrite URLs to different ports and still use https:.


  • Hash passwords before transmitting? (web)

    - by wag2639
    I was reading this Ars article on password security and it mentioned there are sites that "hash the password before transmitting". Now, assuming this isn't using an SSL connection (HTTPS):

    a. Is this actually secure?
    b. If it is, how would you do this in a secure manner?

    Edit 1 (some thoughts based on the first few answers):

    c. If you do hash the password before transmission, how do you use that if you only store a salted hash of the password in your user credentials database?
    d. Just to check: if you are using an HTTPS-secured connection, is any of this necessary?
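
    On question (c), note that whatever string the client transmits is the thing the server ends up verifying, so a client-side hash simply becomes the effective password (an eavesdropper who captures it can replay it). A minimal sketch of the server side using PHP's built-in salted hashing, purely illustrative:

        <?php
        // Registration: store a salted, slow hash of whatever secret the client sends.
        // If the client pre-hashes the password, $secret is that hash - and capturing
        // it on the wire is as good as capturing the password itself.
        $secret = 'correct horse battery staple';
        $stored = password_hash($secret, PASSWORD_BCRYPT); // salt generated automatically

        // Login: verify the transmitted secret against the stored salted hash.
        if (password_verify($secret, $stored)) {
            echo "authenticated\n";
        }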


  • CentOS: AJAX/jQuery not working

    - by Australiya
    In a nutshell, I have an unmanaged VPS. At one time it had Ubuntu 10.10 Server on it; then I reinstalled it with CentOS 6 and updated it to CentOS 6.2. Now the problem is that the AJAX/jQuery shoutbox has stopped working (I assume it uses one of the two to inject itself into a div and then refresh when new messages are posted; I'm not sure, I didn't write these), and the plugboard script now shows me a lime-green blank page. No changes to the source code have been made, and the files are in the location they expect to be. I have Apache2, MySQL 5, and PHP 5, and I did install the php-xml libraries. What am I missing? It's got to be server-side, because the scripts themselves are fine: if I move them to a different server, they work just fine. I'm not getting any errors related to this in the error_log file. Thanks in advance! Edit: if you want, you can look at the plugboard at kazeshini.net/plugboard, and there's an installation of the chatbox at silverlotus.kazeshini.net/yshout/example. I know nothing about scripts and debugging, so better that someone else look at it than someone who doesn't know what they're looking for.


  • Javascript + IFrame + Chrome + Https = strange issue

    - by GuiDoody
    Using a modal plugin for jQuery, I'm opening an iframe in a modal with a click event and setting the source attribute to a relative URL (which should stay within https). The page containing the click is an authenticated SSL page. In IE and Safari this works as expected; in Chrome and Firefox the source URL is opened over http, not https. If I set the source to an absolute https URL, same result. Example:

        // This doesn't work - resolves to http
        $('.lnkViewDetails').click(function(e){
            var src = "/ThePage";
            $.modal('<iframe src="' + src + '" height="500" width="425" style="border:0" frameborder="no">');
        });

        // This doesn't work either - also resolves to http
        $('.lnkViewDetails').click(function(e){
            var src = "https://MySite.com/ThePage";
            $.modal('<iframe src="' + src + '" height="500" width="425" style="border:0" frameborder="no">');
        });

    Anyone know a way to get around this?


  • Prevent Apache from answering invalid requests

    - by nickdnk
    I have an Apache web server that acts as a front-end for iPhone and iPad applications that communicate by POST and JSON only. Is there any way to prevent Apache from answering requests that are invalid? I can see my error log is filled with attempts to open files such as /admin.php, /index.php, etc. - files that don't exist on my server. I believe this is possible with IIS; can you do the same thing with Apache? Basically, I want the request to appear timed out unless you POST exactly the right content to the right file, or at least unless you request an existing file. This would make the server appear nonexistent to everyone but my iPhone users, as all communication is SSL and the directories are not really guessable. I did disable ServerTokens and all that, but I still get "File not found" etc. when I request a random file, which is what these bots do constantly.
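
    Apache will always answer something on an open port, so the usual compromise is to make the application refuse to be informative. A rough PHP front-controller sketch of the idea, with the endpoint path and responses chosen purely for illustration:

        <?php
        // Accept only POSTs with a JSON body to the one real endpoint;
        // answer everything else with a bare 404 and no body.
        $uri    = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
        $isJson = isset($_SERVER['CONTENT_TYPE'])
               && stripos($_SERVER['CONTENT_TYPE'], 'application/json') === 0;

        if ($_SERVER['REQUEST_METHOD'] !== 'POST' || $uri !== '/api' || !$isJson) {
            http_response_code(404); // or sleep() first, to mimic a timeout for probes
            exit;
        }

        $payload = json_decode(file_get_contents('php://input'), true);
        // ... handle the request ...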


  • Apache crash on launch - W2008 Server

    - by user1634444
    I installed XAMPP on my Windows Server 2008 machine. It worked fine until I decided to install some updates. Now Apache doesn't start any more, and I get these errors:

        [Wed Aug 29 23:31:20.328125 2012] [core:warn] [pid 1540:tid 312] AH00098: pid file C:/xampp/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
        [Wed Aug 29 23:31:20.968750 2012] [ssl:warn] [pid 1540:tid 312] AH01873: Init: Session Cache is not configured [hint: SSLSessionCache]

    I'm trying to install Cacti on the server to monitor everything... Don't think it's relevant, but just saying.


  • What's the JavaScript "var _gaq = _gaq || [];" for?

    - by parvas
    The async tracking snippet in Google Analytics looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXX-X']);
        _gaq.push(['_trackPageview']);

        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    About the first line, var _gaq = _gaq || []; - I think it ensures that if _gaq is already defined we use it, and otherwise we use an empty array. Can anybody explain what this is for? Also, does it matter if _gaq gets renamed? In other words, does Google Analytics rely on a global object named _gaq? Thanks, Parvas
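
    What that first line buys is a command queue that works before the library loads: commands pile up in a plain array, and when ga.js arrives it replaces _gaq with an object whose push() executes commands immediately (so yes, ga.js looks for the global name _gaq specifically). A rough PHP sketch of that same queue-swap pattern, with all names invented:

        <?php
        // Before the "library" loads: commands pile up in a plain array.
        $_gaq = [];
        $_gaq[] = ['_setAccount', 'UA-XXXXX-X'];
        $_gaq[] = ['_trackPageview'];

        // The "library", once loaded, drains the queue and takes over push().
        class Tracker
        {
            public function push(array $cmd)
            {
                echo "executing {$cmd[0]}\n"; // from now on, commands run immediately
            }
        }

        $tracker = new Tracker();
        foreach ($_gaq as $cmd) {
            $tracker->push($cmd);          // replay everything queued before load
        }
        $_gaq = $tracker;                  // the global name now holds the real object
        $_gaq->push(['_trackEvent']);      // later pushes execute right away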

