Search Results

Search found 14878 results on 596 pages for 'mod security'.


  • mod_fcgi produces random 500 Errors

    - by DmitrySemenov
    PHP 5.4.7 via mod_fcgid. When I run the site, sometimes it works and sometimes it crashes with a 500 Internal Error. This is what I see in error.log every time I run the script:

        [Mon Sep 24 18:50:43 2012] [warn] [client 68.231.194.198] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Mon Sep 24 18:50:43 2012] [error] [client 68.231.194.198] Premature end of script headers: api.php

    Any ideas? vhost config:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "/home/www/sites/test.com/html/development"
            ServerName test.com
            ServerAlias www.test.com
            ErrorLog "/home/www/sites/test.com/logs/error_log"
            CustomLog "/home/www/sites/test.com/logs/access_log" common
            <IfModule mod_fcgid.c>
                <Directory /home/www/sites/test.com/html/development>
                    Options +ExecCGI
                    AllowOverride All
                    AddHandler fcgid-script .php
                    FCGIWrapper /home/www/php-fcgi-scripts/php-fcgi-starter .php
                    Order allow,deny
                    Allow from all
                </Directory>
                FcgidMaxRequestLen 1073741824
            </IfModule>
        </VirtualHost>

    fcgi.d conf:

        LoadModule fcgid_module modules/mod_fcgid.so
        # Use FastCGI to process .fcg .fcgi & .fpl scripts
        AddHandler fcgid-script fcg fcgi fpl
        # Sane place to put sockets and shared memory file
        FcgidIPCDir /var/run/mod_fcgid
        FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm
        IdleTimeout 300
        BusyTimeout 300
        ProcessLifeTime 7200
        IPCConnectTimeout 300
        IPCCommTimeout 7200
        PHP_Fix_Pathinfo_Enable 1

    php-fcgi-starter.php:

        #!/bin/sh
        PHP_CGI=/usr/local/php547/bin/php-cgi
        PHP_INI=/etc/php547-fastcgi.ini
        export PHP_FCGI_TIMEOUT=1200
        #export PHP_FCGI_CHILDREN=6
        export PHP_FCGI_MAX_REQUESTS=1000
        exec $PHP_CGI -c $PHP_INI
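
    One commonly suggested cause for intermittent "error reading data from FastCGI server" failures (an assumption here, not something confirmed above) is PHP exiting after PHP_FCGI_MAX_REQUESTS while mod_fcgid keeps routing requests to that process. A hedged sketch of the usual tuning is to keep mod_fcgid's own per-process limit at or below PHP's:

        # Sketch only: recycle FastCGI processes before PHP's own request limit is reached
        FcgidMaxRequestsPerProcess 1000   # keep <= PHP_FCGI_MAX_REQUESTS from the wrapper script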

    Read the article

  • Where can I get precompiled mod_perl, mod_python for Apache on Win64?

    - by Soumya92
    I have managed to set up pure 64-bit Apache, PHP, MySQL, and 64-bit distributions of Perl and Python. However, I cannot get Apache to automatically parse .pl files with Perl and .py files with Python. Looking around points to mod_perl and mod_python for Apache, which unfortunately fail to build. Is there any precompiled mod_perl or mod_python for Win64? Or is there any other way of getting .pl and .py files to work on Apache?

    Read the article

  • Changing a set-cookie header using mod_rewrite/mod_proxy

    - by olrehm
    I have a bunch of CGI scripts which are served using HTTPS. They can only be reached on the intranet, not from the outside. They set a cookie with the attribute 'Secure', so that it can only be sent via HTTPS. There is also a reverse proxy to one of these scripts, unfortunately using plain HTTP. When a response comes in from my CGI script with a secure cookie, it is not being passed on via HTTP (after all, that is what that attribute is for). I need, however, an exception to this rule. Is it possible to use mod_rewrite/mod_proxy or something similar to change the Set-Cookie header in the response coming from my CGI script and remove the Secure attribute, such that the cookie can be passed back to the user over the unsafe HTTP connection? I understand that this defeats the purpose of Secure in the first place, but I need this as a temporary workaround. I have searched the web and found how to add a Set-Cookie header using mod_rewrite, and I have also found how to retrieve the value of a cookie coming from the client in a Cookie header. What I have not yet found is how to modify the Set-Cookie header received in the response of a script I am proxying for. Is that possible? How would I do that? Ole
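
    One hedged approach that is not mentioned in the question above, assuming mod_headers is enabled on the plain-HTTP reverse proxy, is to edit the backend's response header directly instead of reaching for mod_rewrite; this strips the Secure attribute only on that vhost:

        # Sketch only, assuming mod_headers is loaded on the HTTP proxy vhost:
        # remove the Secure attribute from cookies set by the proxied CGI script
        Header edit Set-Cookie "(?i);\s*Secure" ""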

    Read the article

  • Is it possible for the Subversion Apache module to serve html files with an html content-type without using the svn:mime-type property?

    - by Martin Pain
    I am aware that if you set the svn:mime-type Subversion property on a .html file to text/html then when viewing the file in a browser through the Subversion module in Apache httpd it will be served with a Content-Type: text/html header, enabling the browser to render it as HTML rather than plain text. However, I am looking for a way to do this without using the svn:mime-type property. I'm aware that you can configure your svn client to automatically add the property - this is not what I want, as I do not want to ensure all users have these settings. I'm also aware that I could create a pre-commit hook that rejects the commit if the properties are not set, in order to force users to set the property - I might fall back to that, but I'm looking for something less intrusive. I'm also aware that I could use a post-commit hook to add the properties automatically on the server-side. I'd rather not do that (as users then have to update immediately after their commit, and it's not trivial to write) - I'm looking for a better alternative. Perhaps something with rewrite rules in the Apache server?

    Read the article

  • How can I diagnose cache misses when using Apache as a reverse proxy?

    - by johnstok
    I have set up Apache 2.2 as a reverse proxy with the following configuration:

        # jBoss proxying
        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /foo http://localhost:9080/foo
        ProxyPassReverse /foo http://localhost:9080/foo
        ProxyPassReverseCookiePath /foo /foo

        # Reverse proxy caching
        CacheEnable disk /foo

        # Compression
        SetOutputFilter DEFLATE
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE\s(7|8) !no-gzip !gzip-only-text/html
        DeflateCompressionLevel 9
        Header append Vary User-Agent env=!dont-vary

    However, in a number of cases where I expect a cached response to be returned, the request is sent through to the origin server at localhost:9080. Responses have an HTTP Vary header of 'Accept-Encoding,User-Agent', which is to be expected given the mod_deflate configuration. How can I determine why Apache is unable to serve a response from the cache?
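
    As a hedged starting point for diagnosis (not something covered in the question): on Apache 2.2, mod_cache explains its cache/no-cache decisions in the error log at debug level, and on Apache 2.4 the cache status can additionally be exposed as response headers. The 2.4 directives below assume an upgrade is possible:

        # Apache 2.2: mod_cache logs its caching decisions at debug level
        LogLevel debug

        # Apache 2.4 only (assumption: upgrading is an option):
        # CacheHeader on          # adds an X-Cache: HIT/MISS/REVALIDATE header
        # CacheDetailHeader on    # adds X-Cache-Detail with the reason for the decision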

    Read the article

  • Host couldn't be reached by domain name, only by IP: Apache's fault?

    - by MaxArt
    I have this Windows Server 2003 R2 32-bit machine running Apache 2.4.2 with OpenSSL 1.0.1c and PHP 5.4.5 via mod_fcgid 2.3.7. This config worked just fine for some hours, but then the site couldn't be reached with its domain name, say www.example.com, although it could still be reached by its IP address. In particular, while https://www.example.com/ yielded a connection error, http://123.1.2.3/ worked just fine. Yes, first https then http. Error and access logs were clean, i.e. they showed no signs of problems, just the usual messages, which were interrupted while the site couldn't be reached. After some investigation, a simple restart of Apache solved the problem. Unfortunately, I didn't have the chance to test whether https://123.1.2.3/ worked as well, or whether http://www.example.com/ was still redirected to https as usual. So, does anyone have any idea of what happened? Before I get tired of Apache and ditch it in favor of Nginx?

    Edit: some log information. The last line of sslerror.log is from 90 minutes before the problem occurred, so I guess it's not important. ssl_request.log shows nothing interesting either; these are the last two lines before the problem:

        [28/Aug/2012:17:47:54 +0200] x.x.x.x TLSv1.1 ECDHE-RSA-AES256-SHA "GET /login HTTP/1.1" 1183
        [28/Aug/2012:17:47:45 +0200] y.y.y.y TLSv1 ECDHE-RSA-AES256-SHA "POST /upf HTTP/1.1" 73

    The previous lines are all the same and don't seem interesting, except for 4 lines like these 30-40 seconds before the problem:

        [28/Aug/2012:17:47:14 +0200] z.z.z.z TLSv1 ECDHE-RSA-AES256-SHA "-" -

    These are the corresponding lines from sslaccess.log:

        z.z.z.z - - [28/Aug/2012:17:47:14 +0200] "-" 408 -
        ...
        x.x.x.x - - [28/Aug/2012:17:47:54 +0200] "GET /login HTTP/1.1" 200 1183
        y.y.y.y - - [28/Aug/2012:17:47:45 +0200] "POST /upf HTTP/1.1" 200 73

    Read the article

  • Apache SSL reverse proxy to an embedded Tomcat

    - by ggarcia24
    I'm trying to put in place a reverse proxy for an application that runs an embedded Tomcat server over SSL. The application needs to run over SSL on port 9002, so I have no way of "disabling SSL" for this app. The current setup schema looks like this:

        [192.168.0.10:443 - Apache with mod_proxy] --> [192.168.0.10:9002 - Tomcat App]

    After googling on how to make such a setup (and testing) I came across this: https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/861137, which led to my current configuration (trying to emulate the --secure-protocol=sslv3 option of wget).

    /etc/apache2/sites-enabled/default-ssl:

        <VirtualHost _default_:443>
            SSLEngine On
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            SSLProxyEngine On
            SSLProxyProtocol SSLv3
            SSLProxyCipherSuite SSLv3
            ProxyPass /test/ https://192.168.0.10:9002/
            ProxyPassReverse /test/ https://192.168.0.10:9002/
            LogLevel debug
            ErrorLog /var/log/apache2/error-ssl.log
            CustomLog /var/log/apache2/access-ssl.log combined
        </VirtualHost>

    The thing is that the error log is showing:

        error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol

    Complete request log:

        [Wed Mar 13 20:05:57 2013] [debug] mod_proxy.c(1020): Running scheme https handler (attempt 0)
        [Wed Mar 13 20:05:57 2013] [debug] mod_proxy_http.c(1973): proxy: HTTP: serving URL https://192.168.0.10:9002/
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2011): proxy: HTTPS: has acquired connection for (192.168.0.10)
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2067): proxy: connecting https://192.168.0.10:9002/ to 192.168.0.10:9002
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2193): proxy: connected / to 192.168.0.10:9002
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2444): proxy: HTTPS: fam 2 socket created to connect to 192.168.0.10
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2576): proxy: HTTPS: connection complete to 192.168.0.10:9002 (192.168.0.10)
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection to child 0 established (server demo1agrubu01.demo.lab:443)
        [Wed Mar 13 20:05:57 2013] [info] Seeding PRNG with 656 bytes of entropy
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/connect initialization
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: unknown state
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1897): OpenSSL: read 7/7 bytes from BIO#7f122800a100 [mem: 7f1230018f60] (BIO dump follows)
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1830): +-------------------------------------------------------------------------+
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1869): | 0000: 15 03 01 00 02 02 50 ......P |
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_io.c(1875): +-------------------------------------------------------------------------+
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1903): OpenSSL: Exit: error in unknown state
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] SSL Proxy connect failed
        [Wed Mar 13 20:05:57 2013] [info] SSL Library Error: 336032002 error:14077102:SSL routines:SSL23_GET_SERVER_HELLO:unsupported protocol
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 0 with abortive shutdown (server example1.domain.tld:443)
        [Wed Mar 13 20:05:57 2013] [error] (502)Unknown error 502: proxy: pass request body failed to 172.31.4.13:9002 (192.168.0.10)
        [Wed Mar 13 20:05:57 2013] [error] [client 192.168.0.10] proxy: Error during SSL Handshake with remote server returned by /dsfe/
        [Wed Mar 13 20:05:57 2013] [error] proxy: pass request body failed to 192.168.0.10:9002 (172.31.4.13) from 172.31.4.13 ()
        [Wed Mar 13 20:05:57 2013] [debug] proxy_util.c(2029): proxy: HTTPS: has released connection for (172.31.4.13)
        [Wed Mar 13 20:05:57 2013] [debug] ssl_engine_kernel.c(1884): OpenSSL: Write: SSL negotiation finished successfully
        [Wed Mar 13 20:05:57 2013] [info] [client 192.168.0.10] Connection closed to child 6 with standard shutdown (server example1.domain.tld:443)

    If I do a wget --secure-protocol=sslv3 --no-check-certificate https://192.168.0.10:9002/ it works perfectly, but from Apache it is not working. I'm on an Ubuntu Server with the latest updates, running apache2 with mod_proxy and mod_ssl enabled:

        ~$ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"
        ~# dpkg -s apache2
        ...
        Version: 2.2.22-1ubuntu1.2
        ...
        ~# dpkg -s openssl
        ...
        Version: 1.0.1-4ubuntu5.7
        ...

    Hope that anyone may help.

    Read the article

  • How do I remove the ServerSignature added by mod_fcgid?

    - by matthew
    I'm running Mod_Security and I'm using SecServerSignature to customize the Server header that Apache returns. This part works fine; however, I'm also running mod_fcgid, which appends "mod_fcgid/2.3.5" to the header. Is there any way I can turn this off? Setting ServerSignature off doesn't do anything. I was able to get it to go away by changing ServerTokens, but that removed the customization I had added.

    Read the article

  • DIY Touch Screen Mod Makes Regular Gloves Smartphone-friendly

    - by Jason Fitzpatrick
    Smartphone-friendly winter gloves are expensive (and often ugly). Skip shelling out for store-bought gloves when, armed with a needle and thread, you can turn any gloves into smartphone-friendly ones. Over at Popular Science, Taylor Kubota shares the simple trick:

    1. Order silver-plated nylon thread (silver conducts electricity). This can be difficult to find in stores, but major online retailers carry it.
    2. Pick a pair of gloves to modify. Although leather works, it's harder to push a needle through.
    3. Stitch the figure of a star or other solid shape onto the glove's index finger with the thread, making sure it will contact both the touchscreen and your skin.

    Read the article

  • Apache mod_proxy

    - by mhouston100
    Uggh, I'm spewing that I can't figure this out, I'm so frustrated:

        <VirtualHost *:80>
            servername domain1.com.au
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            <Proxy *>
                Order Allow,Deny
                Allow from all
            </Proxy>
            RewriteEngine on
            ReWriteCond %{SERVER_PORT} !^443$
            RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]
        </VirtualHost>

        <VirtualHost *:443>
            servername domain1.com.au
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/owncloud.pem
            SSLCertificateKeyFile /etc/apache2/ssl/owncloud.key
            DocumentRoot /var/www/html
        </VirtualHost>

        <VirtualHost *:*>
            Servername domain2.com.au
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / https://192.168.1.12/
            ProxyPassReverse / https://192.168.1.12/
        </VirtualHost>

    I'm not sure if it's clear what I'm trying to do, but I've read and read and READ, and I still can't figure it out. Basically, I have a working Apache server with a rewrite to force HTTPS, as seen in the first two VirtualHost entries. I now have a webmail service set up on another server, under another domain name; however, I only have one incoming public IP address. So I'm trying to have any incoming requests for the second domain proxied to the other server to access the webmail, whether it's on port 80 or 443. IMAP and POP3 are no problem, I can just forward those ports directly to the correct server. The result of the above configuration is that requests to domain2.com.au (port 80 or 443) are forwarded to https://domain1.com.au. Am I headed in the right direction?
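
    A hedged sketch of one common layout for this, which is not part of the question above: give domain2 its own name-based :80 and :443 virtual hosts and enable SSLProxyEngine wherever the backend is HTTPS (the hostnames and backend IP are taken from the question, everything else is an assumption):

        # Sketch only
        <VirtualHost *:80>
            ServerName domain2.com.au
            ProxyRequests Off
            ProxyPreserveHost On
            SSLProxyEngine On          # required because the backend speaks HTTPS
            ProxyPass / https://192.168.1.12/
            ProxyPassReverse / https://192.168.1.12/
        </VirtualHost>

        <VirtualHost *:443>
            ServerName domain2.com.au
            SSLEngine On
            SSLCertificateFile    /etc/apache2/ssl/domain2.pem    # hypothetical certificate paths
            SSLCertificateKeyFile /etc/apache2/ssl/domain2.key
            SSLProxyEngine On
            ProxyPass / https://192.168.1.12/
            ProxyPassReverse / https://192.168.1.12/
        </VirtualHost>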

    Read the article

  • Changing the current URL but serving content from another (same domain) - ProxyPass?

    - by zigojacko
    I've been banging my head against the wall with this for months now, so I hope someone on here will be able to finally advise what is needed. I have some URLs like this:

        domain.com/category/subcat/filter/brand

    and I wish to rewrite them to:

        domain.com/category/brand-subcat

    Content loads fine at the first URL; I just want to show it at a different URL. Is URL masking the correct term for this? I have a RewriteRule in .htaccess that should do the job, as far as I believe:

        RewriteRule ^([a-zA-Z]+)/([a-zA-Z]+)/filter/([a-zA-Z]+)$ $1/$3-$2

    This isn't actually modifying the URL at all, though, on a Magento website (mod_rewrite is enabled and plenty of other rewrites are working from the same .htaccess). So firstly, I want to know: is what I am trying to achieve definitely possible? If so, what is this process even called? Secondly, does this need to be handled using ProxyPass and a [P] flag on the rewrite rule? I assume the Apache server doesn't have mod_proxy enabled currently, because when I add a [P] flag the URL returns a 403 Forbidden error with the full server path for the current URL. Could anyone kindly advise what on earth I need to do to achieve this?
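
    A hedged observation that goes beyond the question itself: to show content at domain.com/category/brand-subcat, the rule has to match the short URL the visitor requests and rewrite it internally to the long path the site already serves, rather than the other way round. A sketch under that assumption, with no [P] flag needed:

        # Sketch only: map the pretty URL to the existing internal path
        RewriteRule ^([a-zA-Z]+)/([a-zA-Z]+)-([a-zA-Z]+)$ $1/$3/filter/$2 [L]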

    Read the article

  • How to Exclude an URL for Apache Mod_proxy?

    - by Mughil
    We have two Apache servers as front-end and 4 Tomcat servers as back-end, configured as a load balancer using the mod_proxy module. Now we want to exclude a single Tomcat URL from the mod_proxy load balancer. Is there any way or rule to exclude it?

    Proxy balancer setting:

        <Proxy balancer://backend-cluster1>
            BalancerMember http://10.0.0.1:8080 loadfactor=1 route=test1 retry=10
            BalancerMember http://10.0.0.2:8080 loadfactor=1 route=test2 retry=10
        </Proxy>
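
    For what it's worth, mod_proxy has a standard exclusion mechanism that the question does not mention: a path mapped with ProxyPass to "!" is skipped, provided that line appears before the broader mapping. The /excluded path below is hypothetical:

        # Sketch only: the exclusion must come before the general ProxyPass
        ProxyPass /excluded !
        ProxyPass / balancer://backend-cluster1/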

    Read the article

  • RewriteRule causes POST data to get dumped before I can access it

    - by MatthewMcGovern
    I'm currently setting up my own 'webserver' (an Ubuntu Server on some old hardware) so I can have a mess around with PHP and get some experience managing a server. I'm using my own little MVC framework and I've hit a snag... In order for all requests to make it through the dispatcher, I am using:

        <Directory /var/www/>
            RewriteEngine On
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [L]
        </Directory>

    which works great. I read on Stack Overflow to change the [L] to [P] to preserve POST data. However, this causes every page to return:

        Not Found
        The requested URL <url> was not found on this server.

    So after some more searching, I found: "Note that you need to enable the proxy module, and the proxy_http_module in the config files for this to work." The problem is, I have no idea how to do this, and everything I google has people using examples with virtual hosts and I don't know how to 'translate' that into something useful for my setup. I'm accessing my webserver via my public IP and forwarding traffic on port 80 to the web server (like I'm pretending I have a domain/server). How can I get this enabled and get POST data working again?

    Edit: when I use the following, the server never responds and the page loads indefinitely:

        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so

        <Directory /var/www/>
            RewriteEngine On
            RewriteCond %{HTTP_REFERER} !^http://(.+\.)?82\.6\.150\.51/ [NC]
            RewriteRule .*\.(jpe?g|gif|bmp|png|jpg)$ /no-hotlink.png [L]
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [P]
        </Directory>
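
    A hedged aside that is an assumption rather than something established above: a plain internal rewrite ([L], with no R or P flag) does not discard the request body, so proxying is usually unnecessary just to keep POST data; POST is typically lost when something issues an external redirect, for example mod_dir's trailing-slash redirect. If that is what is happening here, a sketch like this keeps everything internal:

        # Sketch only: stay with an internal rewrite and avoid redirects on POST targets
        <Directory /var/www/>
            RewriteEngine On
            DirectorySlash Off     # assumption: suppresses mod_dir's 301 trailing-slash redirect
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [L]
        </Directory>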

    Read the article

  • 502 errors with apache mod_proxy hot standby (or equivalent)

    - by 6million
    Does anyone know how to configure the hot standby (+H) mod_proxy feature so that the takeover occurs immediately (without even one user receiving a 502 error) during a shutdown? We aren't looking for real load balancing; we just want a secondary server to take over while we shut down the primary. The problem is that whenever the primary goes down, I'm able to slip in one invalid request, resulting in a 502 HTTP error reaching the end user before the secondary actually takes over.

        Listen 80
        <VirtualHost 127.0.0.1:80>
            ServerName domain.com
            ProxyPass / balancer://balance/
            <Proxy balancer://balance/>
                BalancerMember http://primary_ip:80
                BalancerMember http://secondary_ip:80 status=+H
            </Proxy>
        </VirtualHost>

    Read the article

  • Apache forwarding without redirecting (application won't follow redirects)

    - by DrewVS
    Recently we had to move /task to /public/task, and I'd like to configure Apache to redirect accordingly. However, using mod_rewrite, though it works in the browser, seems to break applications making API calls to the above location. What happens is the application gets back a page saying the resource was moved, but the app doesn't follow the redirect. So, is there a way to simply forward any traffic for /task to /public/task without 'redirecting', i.e., without returning a redirect status code?

    EDIT: Here's a little more information. I've found a simple test that clarifies what I'm trying to fix. Here is the URL path that needs forwarding:

        https://mydomain.com/task

    It needs to go to:

        https://mydomain.com/public/task

    If I use curl against the original URL, it just returns a redirect notice page. If I add the -L flag, which tells curl to follow redirects, it then follows the redirect successfully. I assume something very similar is happening in the application (which I don't have access to) that makes calls to the /task URL path. Since I cannot modify the application to make it follow redirects properly, I'm looking for a solution I can implement in Apache.
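
    A hedged sketch of what "forwarding without redirecting" usually looks like in Apache, offered as an assumption rather than a confirmed fix for this setup: an internal rewrite serves /public/task while the client keeps requesting /task, so no 3xx status is ever returned:

        # Sketch only: rewrite internally instead of redirecting
        RewriteEngine On
        RewriteRule ^/task(.*)$ /public/task$1 [PT,L]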

    Read the article

  • Glassfish JSF/EAR Apache 2.2 mod_proxy_ajp Referred Content Missing (images/links/etc)

    - by BillR
    Full disclosure: since this seems to be more of a configuration issue, I deleted this from Stack Overflow (where it wasn't getting any response) and reposted here. The problem is how to change the requestContextPath served up by GlassFish behind mod_proxy_ajp. The site/app runs fine if connecting directly to GlassFish on port 8080, which is ultimately not what I want to do. So I need help with the configuration for my servers and JSF deployment. I can see the issue but don't know how to resolve it. It has to do with the requestContextPath. Simply put, Apache directs to http://mysite.com/welcome.xhtml, which is correct and what I want, but the page is minus the images and styles. The issue is that GlassFish itself is still pointing to http://mysite.com/myapp/*, so all links it serves in the app/site still refer to the requestContextPath, that is, the /myapp/* part of http://mysite.com/myapp/welcome.xhtml. When I look in the page source, images which are referred to with relative links still point to the requestContextPath (that is, /myapp/). This is fixable but a real pain. However, with page links I can't set the relative path. If I hover over the contact page link I see http://mysite.com/myapp/contact.xhtml, and if I click it, I get a 404. You can see the /myapp/ context path in the page source as well. If I type in the URL http://mysite.com/contact.xhtml I get the page minus its referred links (requestContextPath).

    On Apache:

        ProxyPass / ajp://littlewalterserver:8009/myapp-web/
        ProxyPassReverse / ajp://littlewalterserver:8009/myapp_Project-web

    On GlassFish:

        asadmin create-network-listener --listenerport 8009 --protocol http-listener-1 --jkenabled true jk-connector

    I have tried going into GlassFish and setting the web app as the default web app. I have changed the / in glassfish-web.xml (and checked to make sure it was the same in the EAR file). How can I get GlassFish to not include the /myapp/ context in the URLs? This has to be easy if you know how, but I don't know how. Can someone help out here? Thanks.
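
    One hedged alternative that sidesteps the context-path mismatch, and which is not something the poster tried above: proxy the same context path on both sides instead of mapping / onto /myapp-web/, so the links the application generates remain valid (the names below reuse the question's values and are otherwise assumptions):

        # Sketch only: keep the context path identical on the Apache and GlassFish sides
        ProxyPass        /myapp-web/ ajp://littlewalterserver:8009/myapp-web/
        ProxyPassReverse /myapp-web/ ajp://littlewalterserver:8009/myapp-web/

        # Alternatively, deploy the application at the root context on GlassFish
        # (context-root "/" in glassfish-web.xml) and proxy / to /.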

    Read the article

  • new ActiveXObject('Word.Application') creates new winword.exe process when IE security does not allow

    - by Mark Ott
    We are using MS Word as a spell checker for a few fields on a private company web site, and when IE security settings are correct it works well. (The zone for the site is set to Trusted, and the Trusted zone is modified to allow the control to run without prompting.) The script we are using creates a Word object and closes it afterward. While the object exists, a winword.exe process runs, but it is destroyed when the Word object is closed. If our site is not set in the Trusted zone (Internet zone with default security level), the call that creates the Word object fails as expected, but the winword.exe process is still created. I do not have any way to interact with this process in the script, so the process stays around until the user logs off (users have no way to manually destroy the process, and it wouldn't be a good solution even if they did). The call that attempts to create the object is:

        try {
            oWordApplication = new ActiveXObject('Word.Application');
        } catch(error) {
            // irrelevant code removed, described in comments..
            // notify user spell check cannot be used
            // disable spell check option
        }

    So every time the page is loaded this code may be run again, creating yet another orphan winword.exe process. oWordApplication is, of course, undefined in the catch block. I would like to be able to detect the browser security settings beforehand, but I have done some searching on this and do not think that it is possible. Management here is happy with it as it is: as long as IE security is set correctly it works, and it works well for our purposes. (We may eventually look at other options for spell check functionality, but this was quick, inexpensive, and does everything we need it to do.) This last problem bugs me and I'd like to do something about it, but I'm out of ideas and I have other things that are more in need of my attention. Before I put it aside, I thought I'd ask for suggestions here...

    Read the article

  • Understanding LinkDemand Security on a webserver

    - by robertpnl
    Hi, after deploying an ASP.NET application on a webserver, I get this error message when using code from an external assembly:

        LinkDemand
        The type of the first permission that failed was: System.Security.PermissionSet
        The Zone of the assembly that failed was: MyComputer

    The assembly is included in the \bin folder and not in the GAC. I am trying to understand what LinkDemand exactly is and why this message is raised, but even after looking for more information I don't get exactly what the problem is. I also tried adding the PermissionSetAttribute on the class where the exception happens:

        [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.LinkDemand, Name = "FullTrust")]

    Then the exception is raised on another class of the assembly, and so on. My questions are:

    - What exactly is going wrong here? Is it true that .NET cannot check the code during JIT?
    - Is there maybe a security policy that blocks this (machine.config)?
    - Can I set the PermissionAttribute for all classes within an assembly?

    Thanks.

    Read the article

  • Spring HandlerInterceptor or Spring Security to protect resource

    - by richever
    I've got a basic Spring Security 3 setup using my own login page; my configuration is below. The login and sign-up pages are accessible to all, as is most everything else. I'm new to Spring Security and understand that if a user tries to access a protected resource they will be taken to the defined login page, and upon successful login they are taken to some other page (home, in my case). I want to keep the latter behavior; however, I'd like to specify that if a user tries to access certain resources they are taken to the sign-up page, not the login page. Currently, in my annotated controllers I check the security context to see if the user is logged in and, if not, I redirect them to the sign-up page. I only do this currently with two URLs and no others. This seemed redundant, so I tried creating a HandlerInterceptor to redirect these requests, but realized that with annotations you can't specify which requests are handled: they all are. So I'm wondering if there is some way to implement this type of URL-specific handling in Spring Security, or is going the HandlerInterceptor route my only option? Thanks!

        <http auto-config="true" use-expressions="true">
            <intercept-url pattern="/login*" access="permitAll"/>
            <intercept-url pattern="/signup*" access="permitAll"/>
            <intercept-url pattern="/static/**" filters="none" />
            <intercept-url pattern="/" access="permitAll"/>
            <form-login login-page="/login" default-target-url="/home"/>
            <logout logout-success-url="/home"/>
            <anonymous/>
            <remember-me/>
        </http>

    Read the article

  • Spring Security 3.0 - Intercept-URL - All pages require authentication but one

    - by gav
    Hi all, I want any user to be able to submit their name to a volunteer form, but only administrators to be able to view any other URL. Unfortunately I don't seem to be able to get this correct. My resources.xml is as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans:beans xmlns="http://www.springframework.org/schema/security"
                     xmlns:beans="http://www.springframework.org/schema/beans"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                                         http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd">

            <http realm="BumBumTrain Personnel list requires you to login" auto-config="true" use-expressions="true">
                <http-basic/>
                <intercept-url pattern="/person/volunteer*" access=""/>
                <intercept-url pattern="/**" access="isAuthenticated()" />
            </http>

            <authentication-manager alias="authenticationManager">
                <authentication-provider>
                    <user-service>
                        <user name="admin" password="admin" authorities="ROLE_ADMIN"/>
                    </user-service>
                </authentication-provider>
            </authentication-manager>

        </beans:beans>

    Specifically, I am trying to achieve the access settings I described via:

        <intercept-url pattern="/person/volunteer*" access=""/>
        <intercept-url pattern="/**" access="isAuthenticated()" />

    Could someone please describe how to use intercept-url to achieve the outcome I've described? Thanks, Gav

    Read the article

  • CakePHP 1.26: Bug in 'Security' component?

    - by Steve
    Okay, for those of you who may have read this earlier, I've done a little research and completely revamped my question. I've been having a problem where my form requests get blackholed by the Security component, although everything works fine when the Security component is disabled. I've traced it down to a single line in a form:

        <?php echo $form->create('Audition'); ?>
        <fieldset>
            <legend><?php __('Edit Audition'); ?></legend>
            <?php
                echo $form->input('ensemble');
                echo $form->input('position');
                echo $form->input('aud_date');
                // The following line works fine...
                echo $form->input('owner');
                // ...but the following line blackholes when Security is included
                // and the form is submitted:
                // echo $form->input('owner', array('disabled' => 'disabled'));
            ?>
        </fieldset>
        <?php echo $form->end('Submit'); ?>

    (I've commented out the offending line for clarity.) I think I'm following the rules by using the form helper; as far as I can tell, this is a bug in the Security component, but I'm too much of a CakePHP n00b to know for sure. I'd love to get some feedback, and if it's a real bug, I'll submit it to the CakePHP team. I'd also love to know if I'm just being dumb and missing something obvious here.

    Read the article

  • SOAP security in Salesforce

    - by Dean Barnes
    I am trying to change the wsdl2apex code for a web service call header that currently looks like this:

        <env:Header>
            <Security xmlns="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
                <UsernameToken Id="UsernameToken-4">
                    <Username>test</Username>
                    <Password>test</Password>
                </UsernameToken>
            </Security>
        </env:Header>

    to look like this:

        <soapenv:Header>
            <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
                <wsse:UsernameToken wsu:Id="UsernameToken-4" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
                    <wsse:Username>Test</wsse:Username>
                    <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">Test</wsse:Password>
                </wsse:UsernameToken>
            </wsse:Security>
        </soapenv:Header>

    One problem is that I can't work out how to change the namespaces for elements (or even if it matters what name they have). A secondary problem is putting the Type attribute onto the Password element. Can anyone provide any information that might help? Thanks

    Read the article

  • Automatically check for Security Updates on CentOS or Scientific Linux?

    - by Stefan Lasiewski
    We have machines running RedHat-based distros such as CentOS or Scientific Linux. We want the systems to automatically notify us if there are any known vulnerabilities in the installed packages. FreeBSD does this with the ports-mgmt/portaudit port. RedHat provides yum-plugin-security, which can check for vulnerabilities by their Bugzilla ID, CVE ID or advisory ID. In addition, Fedora recently started to support yum-plugin-security; I believe this was added in Fedora 16. Scientific Linux 6 did not support yum-plugin-security as of late 2011. It does ship with /etc/cron.daily/yum-autoupdate, which updates RPMs daily, but I don't think this handles security updates only. CentOS does not support yum-plugin-security. I monitor the CentOS and Scientific Linux mailing lists for updates, but this is tedious and I want something which can be automated. For those of us who maintain CentOS and SL systems, are there any tools which can:

    1. Automatically (programmatically, via cron) inform us if there are known vulnerabilities in my current RPMs.
    2. Optionally, automatically install the minimum upgrade required to address a security vulnerability, which would probably be yum update-minimal --security on the command line?

    I have considered using yum-plugin-changelog to print out the changelog for each package, and then parse the output for certain strings. Are there any tools which do this already?

    Read the article

  • java.security.AccessControlException: access denied using Java Web Start

    - by killiancomputers
    I am having some issues with accessing files using JWS (Java Web Start). The program adds a new label and image. The program runs fine on my local computer but gives me pages of errors when I run the program on my remote server using JWS. Here's a sample of the error:

        Exception in thread "AWT-EventQueue-0" java.security.AccessControlException: access denied (java.io.FilePermission add2.png read)
            at java.security.AccessControlContext.checkPermission(Unknown Source)
            at java.security.AccessController.checkPermission(Unknown Source)
            at java.lang.SecurityManager.checkPermission(Unknown Source)

    This occurs even after making sure the images have read access. Ideas?

    Read the article

  • problem with overriding autologin in spring security?

    - by sword101
    Greetings everybody. I am using the Spring Security 3 remember-me service as follows:

        <http>
            <remember-me/>
            ...
        </http>

    and I want to perform some logic in autoLogin, so I tried to override AbstractRememberMeServices as follows:

        package com.foo;

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        import org.springframework.security.core.Authentication;
        import org.springframework.security.web.authentication.RememberMeServices;

        public abstract class AbstractRememberMeServices implements RememberMeServices {

            @Override
            public Authentication autoLogin(HttpServletRequest arg0, HttpServletResponse arg1) {
                System.out.println("Auto Login");
                return null;
            }

            @Override
            public void loginSuccess(HttpServletRequest arg0, HttpServletResponse arg1, Authentication arg2) {
                System.out.println("Login Success");
            }
        }

    But the auto login occurs with no action: the user is logged in automatically, yet the print statement is never printed. What's wrong?

    Read the article
