Search Results

Search found 129402 results on 5177 pages for 'http status code 500'.

  • 500 Internal Server Error after changing .NET Framework Version to 4.0 in IIS7

    - by René
    I just changed the .NET Framework version of my application pools in IIS7 Manager, following these instructions. Now when I re-upload my ASP.NET page, it shows a 500 - Internal Server Error. I have tried building it for .NET 2.0 (x86, x64, AnyCPU) and 4.0 (x86, x64, AnyCPU), and every combination gives the same error. This is all the detail the error gives me: "There is a problem with the resource you are looking for, and it cannot be displayed." When I keep the .NET version at 2.0 on the server, it works just fine. Uploading "index.htm" also works fine; it simply shows the HTML page. This is on Windows Server 2008 R2, by the way.

    EDIT: I have finally found out how to get the error details. Here they are:

        Handler "PageHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler" in its module list.

        Most likely causes:
        • Managed handler is used; however, ASP.NET is not installed or is not installed completely.
        • There is a typographical error in the configuration for the handler module list.

        Things you can try:
        • Install ASP.NET if you are using a managed handler.
        • Ensure that the handler module's name is specified correctly. Module names are case-sensitive
          and use the format modules="StaticFileModule,DefaultDocumentModule,DirectoryListingModule".

    I am sure that I have installed ASP.NET completely. Please help me, -René
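    The "bad module ManagedPipelineHandler" message usually means ASP.NET 4.0 is present on the machine but not registered with IIS. A commonly suggested first step (a sketch, assuming the default framework path on 64-bit Server 2008 R2) is to re-register it and restart IIS:

        rem Register ASP.NET 4.0 with IIS and repair the handler mappings
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -iru
        iisreset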

  • LDAP Authentication fails with 500 or 401 depending on bind for Apache2

    - by Erik
    I'm setting up LDAP authentication for our Subversion repository hosted through Apache on a RHEL 5 system. I run into two different issues when I try to authenticate against Active Directory.

        <Location /svn/>
          Dav svn
          SvnParentPath /srv/subversion
          SVNListParentPath On
          AuthType Basic
          AuthName "Subversion Repository"
          AuthBasicProvider ldap
          AuthLDAPBindDN "cn=userfoo,ou=Service Accounts,ou=User Accounts,dc=my,dc=example,dc=com"
          AuthLDAPBindPassword "mypass"
          AuthLDAPUrl "ldap://my.example.com:389/ou=User Accounts,dc=my,dc=example,dc=com?sAMAccountName?sub?(objectClass=user)" NONE
          Require valid-user
        </Location>

    If I use the above configuration it continually prompts me with the Basic prompt and I have to eventually select Cancel, which returns a 401 (Authorization Required). If I comment out the bind parts it returns 500 (Internal Server Error), griping that authentication failed:

        [Mon Nov 02 12:00:00 2009] [warn] [client x.x.x.x] [10744] auth_ldap authenticate: user myuser authentication failed; URI /svn [ldap_search_ext_s() for user failed][Operations error]

    When I perform the bind using ldapsearch and filter for a simple attribute it returns correctly:

        ldapsearch -h my.example.com -p 389 \
          -D "cn=userfoo,ou=Service Accounts,ou=User Accounts,dc=my,dc=example,dc=com" \
          -b "ou=User Accounts,dc=my,dc=example,dc=com" \
          -w - "&(objectClass=user)(cn=myuser)" sAMAccountName

    Unfortunately I have no control or insight into the AD part of the system, only the RHEL server. Does anyone know what the hang-up is here?
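    The "[ldap_search_ext_s() for user failed][Operations error]" symptom against Active Directory is frequently caused by the search chasing AD referrals that the bind is not allowed to follow. A change that is often suggested for this symptom (a sketch, not verified against this domain) is to query the Global Catalog on port 3268 instead of 389, which does not hand back referrals for the forest:

        AuthLDAPUrl "ldap://my.example.com:3268/ou=User Accounts,dc=my,dc=example,dc=com?sAMAccountName?sub?(objectClass=user)" NONE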

  • Apache showing 500 error during Active Directory LDAP authentication

    - by Tyllyn
    I have Apache (on Windows Server) set up to authenticate one directory through Active Directory. Config settings are as follows:

        <LocationMatch "/trac/[^/]+/login">
          Order deny,allow
          Allow from all
          AuthBasicProvider ldap
          AuthzLDAPAuthoritative Off
          AuthLDAPURL ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub?(objectClass=*)
          AuthLDAPBindDN trac@<dc-redacted>.local
          AuthLDAPBindPassword "<password-redacted>"
          AuthType Basic
          AuthName "Protected"
          require valid-user
        </LocationMatch>

    Watching in Wireshark, I see the following get sent through when I visit the page. To the AD server:

        bindRequest(1) "trac@<dc-redacted>.local" simple

    And from the AD server:

        bindResponse(1) success

    I'm assuming this means the auth was successful... but Apache doesn't think so. It returns a 500 error to me. Apache logs show the following:

        [Thu Nov 18 16:21:12 2010] [debug] mod_authnz_ldap.c(379): [client 192.168.x.x] [7352] auth_ldap authenticate: using URL ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub?(objectClass=*), referer: http://192.168.x.x/trac/Trac/login
        [Thu Nov 18 16:21:12 2010] [info] [client 192.168.x.x] [7352] auth_ldap authenticate: user authentication failed; URI /trac/Trac/login [ldap_search_ext_s() for user failed][Filter Error], referer: http://192.168.x.x/trac/Trac/login

    Now, that log file shows a failed auth for a blank user. I am confused. Any idea what I am doing wrong... and how I can get the Apache authentication working? :) Thanks!
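    One way to narrow this down is to run the same base DN and a concrete filter outside Apache with ldapsearch; a "Filter Error" from mod_authnz_ldap usually means the effective filter string is malformed once the (here apparently empty) username gets substituted in. A sketch, with "someuser" as a hypothetical account name:

        ldapsearch -h <ip-redacted> -p 3268 \
          -D "trac@<dc-redacted>.local" -w "<password-redacted>" \
          -b "cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local" \
          "(sAMAccountName=someuser)" sAMAccountName

    If this works while Apache's search fails, the problem is in how the URL filter and the submitted username combine, not in the bind.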

  • Android web browser returns code 500 for webpage on Nginx web server

    - by Paxxil
    Hey! I've come across very weird behavior of the web browser on Android mobile phones (I've tried HTC Wildfire and HTC Desire phones). I have a web server with Nginx v0.8.54. When I try to open a web page on the phone it shows me the error:

        The requested item could not be loaded! (Status code: 500)

    BUT it only happens when I am requesting the page over the mobile network. On WiFi it works just fine... but there is more: if I stop Nginx and start the Apache web server, it works just fine on both the mobile network and WiFi. I've also tried another mobile network, and the behavior is the same. Some server stats:

    - Firewall is OFF
    - SELinux is OFF
    - the web page (using the Nginx web server) opens normally in any other browser (IE, FF, Opera, Chrome, Safari) on a laptop or PC
    - nothing in nginx error.log
    - this is the only entry in access.log when the page is requested:

        xxx.xxx.xxx.xxx - - [17/Mar/2011:11:19:49 -0500] 200 "GET / HTTP/1.1" 27405 "-" "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; Desire_A8181 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" "-"

    index.html has only a "Hello World" string in it. There is no fishy JavaScript or anything else... but there is even more: if I open the same page on another server, with the same Nginx build and the same server and web server configuration, it opens just fine. If anyone has any idea what may be going on, I would really appreciate it if you let me know. Thanks!

    EDIT: I forgot to mention that the page opens OK on an iPhone and a Nokia.
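    Since the server logs a 200 while the phone reports a 500, the suspect is whatever sits between them on the mobile path (carrier transparent proxies are notorious for this). A way to approximate the phone's request from a PC (a diagnostic sketch, with the User-Agent copied from the access.log entry above):

        curl -v -A "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; Desire_A8181 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" \
          http://your-server/

    Comparing the raw response headers Nginx sends against Apache's (curl -v shows both directions) may reveal what the carrier proxy chokes on.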

  • mod_fcgi produces random 500 Errors

    - by DmitrySemenov
    php 5.4.7 via mod_fcgid. When I run the site, sometimes it works and sometimes it crashes with a 500 Internal Error. This is what I see in error.log every time I run the script:

        [Mon Sep 24 18:50:43 2012] [warn] [client 68.231.194.198] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Mon Sep 24 18:50:43 2012] [error] [client 68.231.194.198] Premature end of script headers: api.php

    Any ideas? vhost config:

        <VirtualHost *:80>
          ServerAdmin [email protected]
          DocumentRoot "/home/www/sites/test.com/html/development"
          ServerName test.com
          ServerAlias www.test.com
          ErrorLog "/home/www/sites/test.com/logs/error_log"
          CustomLog "/home/www/sites/test.com/logs/access_log" common
          <IfModule mod_fcgid.c>
            <Directory /home/www/sites/test.com/html/development>
              Options +ExecCGI
              AllowOverride All
              AddHandler fcgid-script .php
              FCGIWrapper /home/www/php-fcgi-scripts/php-fcgi-starter .php
              Order allow,deny
              Allow from all
            </Directory>
            FcgidMaxRequestLen 1073741824
          </IfModule>
        </VirtualHost>

    fcgi.d conf:

        LoadModule fcgid_module modules/mod_fcgid.so
        # Use FastCGI to process .fcg .fcgi & .fpl scripts
        AddHandler fcgid-script fcg fcgi fpl
        # Sane place to put sockets and shared memory file
        FcgidIPCDir /var/run/mod_fcgid
        FcgidProcessTableFile /var/run/mod_fcgid/fcgid_shm
        IdleTimeout 300
        BusyTimeout 300
        ProcessLifeTime 7200
        IPCConnectTimeout 300
        IPCCommTimeout 7200
        PHP_Fix_Pathinfo_Enable 1

    php-fcgi-starter.php:

        #!/bin/sh
        PHP_CGI=/usr/local/php547/bin/php-cgi
        PHP_INI=/etc/php547-fastcgi.ini
        export PHP_FCGI_TIMEOUT=1200
        #export PHP_FCGI_CHILDREN=6
        export PHP_FCGI_MAX_REQUESTS=1000
        exec $PHP_CGI -c $PHP_INI
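    Intermittent "Premature end of script headers" with this combination is often a lifecycle mismatch: PHP_FCGI_MAX_REQUESTS=1000 makes each php-cgi child exit after 1000 requests, and if mod_fcgid is not told the same limit it can hand a request to a child that is already shutting down, producing exactly this connection-reset pattern. A commonly suggested change (an assumption about this setup, not a confirmed fix) is to keep the two limits in sync in the fcgid configuration:

        # must be <= PHP_FCGI_MAX_REQUESTS from the wrapper script
        FcgidMaxRequestsPerProcess 1000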

  • Cisco ASA site-to-site vpn not initiating phase 1 (not sending udp 500 packets)

    - by Sean Steadman
    I am hoping someone here can help me with my problem. I am trying to set up an IPSEC site-to-site VPN between two Cisco ASA 5520s in GNS3 (both running 8.4.2). I have been unsuccessful in getting the tunnel up, and it appears neither ASA is sending packets out for phase 1 or phase 2 (tested with Wireshark: no UDP 500 packets at all). "show ipsec sa" shows nothing:

        CALIFORNIA(config)# show ipsec sa
        There are no ipsec sas

        FLA-ASA# show ipsec sa
        There are no ipsec sas

    I will attach both configurations as two pastebin files to keep this post a bit cleaner. Essentially the California side has 172.20.1.0/24 and the Florida side has 10.10.10.0/24.

    California ASA config: http://pastebin.com/v0pngYzF
    Florida ASA config: http://pastebin.com/E2geybta

    Please let me know if there is any other vital information that could help. I have gotten IPSEC tunnels to work using Openswan (Linux) and Cisco routers, but cannot for the life of me get ASA IPSEC tunnels to work. The ASDM is out of the question; I only use the CLI. Thanks for any useful help!
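    A site-to-site tunnel only initiates when traffic actually matches the crypto-map ACL, so "no UDP 500 at all" often means no interesting traffic is reaching the crypto map rather than a broken IKE configuration. The ASA's packet-tracer command can show whether a simulated packet would trigger the VPN phase (a sketch; the interface name "inside" and the host addresses are assumptions about this lab):

        CALIFORNIA# packet-tracer input inside tcp 172.20.1.10 12345 10.10.10.10 80 detailed

    If no VPN step appears in the trace output, the crypto ACL or the NAT exemption is the place to look before debugging IKE itself.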

  • PhpMyAdmin 500 Internal Server Error on Nginx/php5-fpm/Debian

    - by ThrownAway
    I downloaded phpMyAdmin a while ago and am having a hard time getting it to work. Requesting localhost/phpmyadmin gives a 500 Internal Server Error response, but there's nothing in the error log. These are the steps I took:

    - Downloaded the newest phpMyAdmin and unzipped all the files to /var/vhosts/phpmyadmin/www/
    - Created a new php5-fpm pool and a server block in Nginx
    - Changed the owner of all the files inside phpmyadmin/
    - Tried requesting localhost/phpmyadmin and localhost/phpmyadmin/setup

    phpMyAdmin is running inside a chroot, and all the files are owned by www-data, so it shouldn't be a permission error. I made a new PHP file in the same directory to produce an error and it logs just fine, so it has to be just phpMyAdmin. Here's my php5-fpm pool:

        [phpmyadmin]
        listen = /var/vhosts/phpmyadmin/tmp/.php.sock;
        user = www-data
        group = www-data
        chroot = /var/vhosts/phpmyadmin/
        chdir = /
        php_admin_value[error_reporting] = E_ALL
        php_admin_value[error_log] = error.log
        php_admin_flag[log_errors] = on
        php_admin_flag[display_errors] = on
        php_value[session.save_handler] = files
        php_value[session.save_path] = /tmp

    And the Nginx server block:

        server {
            listen 80;
            root /var/vhosts/phpmyadmin/www;
            server_name pma.domain;

            location / {
                try_files $uri $uri/ /index.html;
                autoindex on;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_pass unix:/var/vhosts/phpmyadmin/tmp/.php.sock;
                fastcgi_param SCRIPT_FILENAME /www$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param DOCUMENT_ROOT /www;
            }

            index index.html index.htm index.php;
            try_files $uri $uri/ =404;
        }

    Any ideas what could be wrong? Why is it not producing any errors even though I've forced them on?
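    When PHP-FPM returns a blank 500 and nothing reaches the logs, it can help to take Nginx out of the picture and talk to the pool's socket directly with the cgi-fcgi tool from the FastCGI development kit (a diagnostic sketch; the paths match the pool definition above, and because of the chroot SCRIPT_FILENAME is given relative to /var/vhosts/phpmyadmin/):

        SCRIPT_NAME=/index.php SCRIPT_FILENAME=/www/index.php REQUEST_METHOD=GET \
          cgi-fcgi -bind -connect /var/vhosts/phpmyadmin/tmp/.php.sock

    If this prints a PHP fatal error that never shows up via Nginx, the problem is in the pool or the script rather than in the server block.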

  • "Permission denied" & 500 Internal Server error serving PHP in Mac OS X

    - by Abhic
    I just set up Web Sharing in Mac OS X 10.6.2 Snow Leopard. My httpd.conf allows ExecCGI, and all the folders and files are readable by everybody and even writable by me. I put a simple Hello World in index.php in my base site, and yet my Apache error log shows this:

        [Thu Mar 18 00:17:18 2010] [error] [client 192.168.11.135] (13)Permission denied: exec of '/Users/abhic/Sites/index.php' failed
        [Thu Mar 18 00:17:18 2010] [error] [client 192.168.11.135] Premature end of script headers: index.php

    This is what my index.php, placed in /Users/abhic/Sites, looks like:

        <html>
        <head>
            <title> Hello </title>
        </head>
        <body>
            <?php echo "Hello"; ?>
        </body>
        </html>

    My browser shows a 500 Internal Server Error. Any help would do me good in the middle of the night. I have been trying to solve this for way too long. Thank you.
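    "(13)Permission denied: exec of ... index.php" means Apache is trying to run the file as a CGI executable instead of handing it to the PHP module, which is what ExecCGI plus a CGI handler mapping will do when mod_php is not loaded. On stock Snow Leopard the PHP module ships disabled; a sketch of the usual fix (assuming the default /etc/apache2/httpd.conf) is to uncomment the module line and restart Apache:

        # in /etc/apache2/httpd.conf - remove the leading '#' from this line
        LoadModule php5_module libexec/apache2/libphp5.so

        sudo apachectl restart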

  • Amazon S3 Tips: Quickly Add/Modify HTTP Headers To All Files Recursively

    - by Gopinath
    Amazon S3 is a dead-cheap cloud storage service that offers unlimited storage in a pay-as-you-use model. Recently we moved all the images and other static files (scripts & CSS) of Tech Dreams to Amazon S3 to reduce the load on our VPS server. Amazon S3 is cheap, but the monthly bill will shoot up if the blog's images and static files are not cached properly (more details). By adding the caching HTTP headers Cache-Control or Expires to all the files hosted on Amazon S3, we reduced the monthly bill and also the load time of blog pages. Let's see how to add custom headers to files stored on the Amazon S3 service.

    Updating HTTP headers of one file at a time: the web interface of the Amazon S3 Management Console allows adding custom HTTP headers to one file at a time through the "Properties" window (to access properties, right-click on a file and select the Properties menu). So if you have to add headers to hundreds of files, this is not the way to go!

    Updating HTTP headers of multiple files in a folder recursively: to update the HTTP headers of multiple files in a folder recursively, we can use the CloudBerry Explorer freeware or the Bucket Explorer trialware. CloudBerry is my favourite, as it's freeware and also has an excellent interface for accessing Amazon S3 from the desktop. Adding HTTP headers with CloudBerry is straightforward: right-click on the required folders and choose the option "Set HTTP Headers". Download CloudBerry Explorer.
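    For those who prefer the command line, newer releases of the s3cmd tool can rewrite headers in place recursively; a sketch (the bucket name and the one-year max-age are assumptions, adjust to taste):

        s3cmd modify --recursive \
          --add-header="Cache-Control: max-age=31536000" \
          s3://your-bucket/images/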

  • When to use http status code 404

    - by Sybiam
    I am working on a project, and after arguing with people at work for more than an hour, I decided to find out what people on Stack Exchange might say. We're writing an API for a system. There is a query that should return a tree of Organizations or a tree of Goals. The tree of Organizations is the organization the user is present in; in other words, this tree should always exist. Within the organization, a tree of Goals should always be present (that's where the argument started). In the case where the tree doesn't exist, my co-worker decided that it would be right to answer the response with status code 200, and then started asking me to fix my code because the application was falling apart when there is no tree.

    I'll try to spare the flames and fury. I suggested raising a 404 error when there is no tree. It would at least let me know that something is wrong. When using 200, I have to add a special check to my response in the success callback to handle errors: I'm expecting to receive an object, but I may actually receive an empty response because nothing was found. It seems totally fair to mark the response as a 404. And then war started, and I got the message that I didn't understand the HTTP status code schema. So I'm here asking: what's wrong with 404 in this case? I even got the argument "It found nothing, so it's right to return 200". I believe that's wrong, since the tree should always be present. If we found nothing and we are expecting something, it should be a 404.

    Extra: I also believe the best answer to the problem is to create default objects when organizations are created; having no tree shouldn't be a valid case and should be seen as undefined behavior. There is no way an account can be used without both trees. For that reason, they should always be present.
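    To make the two positions concrete, here is what the disputed exchange looks like on the wire under the 404 approach (the paths and body are hypothetical, just illustrating the shape):

        GET /api/organizations/42/goals HTTP/1.1
        Host: example.com

        HTTP/1.1 404 Not Found
        Content-Type: application/json

        {"error": "no goal tree exists for organization 42"}

    A client can then branch on the status code alone, instead of inspecting every 200 body to discover whether it actually carries a tree.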

  • Preserving case in HTTP headers with Ruby's Net:HTTP

    - by emh
    Although the HTTP spec says that headers are case-insensitive, PayPal, with their new Adaptive Payments API, require their headers to be case-sensitive. Using the PayPal adaptive payments extension for ActiveMerchant (http://github.com/lamp/paypal_adaptive_gateway), it seems that although the headers are set in all caps, they are sent in mixed case. Here is the code that sends the HTTP request:

        headers = {
          "X-PAYPAL-REQUEST-DATA-FORMAT" => "XML",
          "X-PAYPAL-RESPONSE-DATA-FORMAT" => "JSON",
          "X-PAYPAL-SECURITY-USERID" => @config[:login],
          "X-PAYPAL-SECURITY-PASSWORD" => @config[:password],
          "X-PAYPAL-SECURITY-SIGNATURE" => @config[:signature],
          "X-PAYPAL-APPLICATION-ID" => @config[:appid]
        }
        build_url action
        request = Net::HTTP::Post.new(@url.path)
        request.body = @xml
        headers.each_pair { |k,v| request[k] = v }
        request.content_type = 'text/xml'
        proxy = Net::HTTP::Proxy("127.0.0.1", "60723")
        server = proxy.new(@url.host, 443)
        server.use_ssl = true
        server.start { |http| http.request(request) }.body

    (I added the proxy line so I could see what was going on with Charles - http://www.charlesproxy.com/.) When I look at the request headers in Charles, this is what I see:

        X-Paypal-Application-Id ...
        X-Paypal-Security-Password ...
        X-Paypal-Security-Signature ...
        X-Paypal-Security-Userid ...
        X-Paypal-Request-Data-Format XML
        X-Paypal-Response-Data-Format JSON
        Accept */*
        Content-Type text/xml
        Content-Length 522
        Host svcs.sandbox.paypal.com

    I verified that it is not Charles doing the case conversion by running a similar request using curl. In that test the case was preserved.
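    The conversion happens inside Net::HTTP itself, not in the proxy: Net::HTTPHeader stores each key downcased and re-capitalizes every dash-separated segment when writing the request out. A minimal sketch that reproduces the behavior without any network traffic:

        require 'net/http'

        req = Net::HTTP::Post.new('/')
        req['X-PAYPAL-SECURITY-USERID'] = 'user'
        # each_capitalized yields headers the way Net::HTTP will send them
        req.each_capitalized { |k, _v| puts k }   # => X-Paypal-Security-Userid

    Any fix therefore has to bypass or override that canonicalization rather than change how the headers hash is built.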

  • Visual Studio code metrics misreporting lines of code

    - by Ian Newson
    The code metrics analyser in Visual Studio, as well as the code metrics power tool, report the number of lines of code in the TestMethod method of the following code as 8. At the most, I would expect it to report lines of code as 3.

        [TestClass]
        public class UnitTest1
        {
            private void Test(out string str)
            {
                str = null;
            }

            [TestMethod]
            public void TestMethod()
            {
                var mock = new Mock<UnitTest1>();
                string str;
                mock.Verify(m => m.Test(out str));
            }
        }

    Can anyone explain why this is the case?

    Further info: after a little more digging I've found that removing the out parameter from the Test method and updating the test code causes LOC to be reported as 2, which I believe is correct. The addition of out causes the jump, so it's not because of braces or attributes. Decompiling the DLL with dotPeek reveals a fair amount of additional code generated because of the out parameter, which could be considered 8 LOC, but removing the parameter and decompiling also reveals generated code, which could be considered 5 LOC, so it's not simply a matter of VS counting compiler-generated code (which I don't believe it should do anyway).

  • Bash edit file and keep last 500 lines

    - by Lizard
    I am looking to create a cron job that opens a directory, loops through all the logs I have created, and deletes all lines but keeps the last 500, for example. I was thinking of something along the lines of:

        tail -n 500 filename > filename

    Would this work? I'm also not sure how to loop through a directory in bash. Thanks in advance.
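    As written it won't work: the shell truncates filename to zero bytes (to set up the > redirection) before tail ever reads it, so the file ends up empty. Writing to a temporary file and renaming avoids that, and a for-loop over a glob handles the directory walk. A sketch, with /var/log/myapp/*.log as a hypothetical location:

        #!/bin/bash
        # keep only the last 500 lines of every log in the directory
        for f in /var/log/myapp/*.log; do
            tail -n 500 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        done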

  • .htaccess time on godaddy

    - by doug
    Hi there. I'm trying to run a CakePHP application on a GoDaddy Linux account. The problem is that I get error 500. I've read on the CakePHP discussion group that I have to edit the .htaccess file.

    1) How long do I have to wait until I see the result?
    2) "More information about this error may be available in the server error log." Where are those server logs on a GoDaddy Linux hosted account?
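    For what it's worth, .htaccess edits take effect on the next request, so there is nothing to wait for. The usual CakePHP-on-shared-hosting fix is adding a RewriteBase line to the rewrite block; a sketch of the stock CakePHP root .htaccess with that line added (assuming the app sits in the web root; adjust the path if it lives in a subdirectory):

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /
            RewriteRule    ^$    app/webroot/    [L]
            RewriteRule    (.*)  app/webroot/$1  [L]
        </IfModule>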

  • socket() failed: No buffer space available) while connecting to upstream,

    - by alfish
    On my Ubuntu 10.04 VPS, I get regular 500 errors on Nginx (0.7.??) + FastCGI running a Drupal site, and when I look through the Nginx error log I see plenty of these:

        socket() failed: No buffer space available) while connecting to upstream ...

    I have tried different combinations of config settings, but none fixed the problem. Currently I have 3 Nginx workers, a keep-alive timeout of 15 seconds, and:

        PHP_FCGI_CHILDREN=5
        PHP_FCGI_MAX_REQUESTS=1000

    I would really appreciate it if you can suggest a solution to this annoying problem.
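    "No buffer space available" (ENOBUFS) on a Linux VPS usually points to resource exhaustion between Nginx and its upstream, often ephemeral ports eaten by sockets stuck in TIME_WAIT. A diagnostic-plus-mitigation sketch that is commonly suggested for this symptom (whether it applies here depends on what the socket counts show):

        # how many sockets are in each TCP state?
        netstat -ant | awk '{print $6}' | sort | uniq -c

        # widen the ephemeral port range and allow reuse of TIME_WAIT sockets
        sysctl -w net.ipv4.ip_local_port_range="15000 65000"
        sysctl -w net.ipv4.tcp_tw_reuse=1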

  • Remove unwanted lines, dead code from source code?

    - by Passionate programmer
    How can I make source code free of the following?

    1. Dead code of more than a few lines left inside /* ... */ blocks in C++ sources
    2. Runs of more than one blank line, collapsed to one
    3. Modification banners such as /*-------- MODIFICATION DONE by xyz on ------------*/

    I have used a code formatter tool to get nicely formatted code, but I am stuck with code containing the items above. Is there any way to make sure code like this doesn't get into svn, and that automatically formatted code gets into the source? A sketch of the mechanical part follows below.
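    Items 2 and 3 are mechanical enough for a one-liner each, and the same commands could run from an svn pre-commit hook; a sketch over a hypothetical src/ tree (test on a copy first):

        # 3) drop the modification banners
        sed -i '/MODIFICATION DONE/d' src/*.cpp

        # 2) squeeze consecutive blank lines down to one
        sed -i '/^$/N;/\n$/D' src/*.cpp

    Item 1 is riskier to automate: a pattern like perl -0777 -pi -e 's{/\*.*?\*/}{}gs' strips every /* ... */ block, including legitimate documentation comments, so commented-out dead code is usually safer to remove by hand or in review.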

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, in my first project (we'll say it's project.com) I've put this in my Apache configuration: NameVirtualHost 127.0.0.1 <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/ ServerName localhost ServerAdmin admin@localhost </VirtualHost> <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/sub/ ServerName sub.project.com ServerAdmin [email protected] </VirtualHost> <VirtualHost 127.0.0.1:80> DocumentRoot C:/xampp/htdocs/project/ ServerName project.com ServerAdmin [email protected] </VirtualHost> And this in my hosts file: # development 127.0.0.1 localhost 127.0.0.1 project.org 127.0.0.1 sub.project.org When I go to project.com in my browser, the project loads up successfully. Same if I go to sub.project.com. But, if I navigate to: http://project.com/register (one of my site pages) I get this error: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. The error log shows this: [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/ Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/ Any idea what config items I got wrong or how to get this working? It happens on any page that's not in in the root directory of project.com. Thanks.

  • Persistent Issues on small business network using Cisco 871W and Catalyst Express 500

    - by Ben Campbell
    Being the most qualified (read: still not qualified) to solve our persistent network issues, I've turned to Server Fault for guidance. I've done some searching, read related documentation on cisco.com, and tried a bit of troubleshooting. Here is the config:

    - 100 Mbit synchronous connection from a business internet provider (tested multiple times at 100 meg at the source)
    - A Cisco 871W wireless access point & router is where the WAN connection starts (this serves all our wireless). The only wired connection on the 871W is the Catalyst switch listed below.
    - A Cisco Catalyst Express 500 (24TT) is where all the wired connections terminate.
    - About 20 Windows workstations and servers (AD/web servers only).
    - Some services in EC2, including mail and other web servers/apps.
    - I've been TOLD cabling internally should be gigabit-ready.

    Here are the problems:

    - generally slow download rates from the internet to the desktop/laptop
    - frequent "page cannot be displayed" errors in browsers; sometimes 3 or 4 reloads are necessary, and often CSS or other content requiring the browser to connect to a different server won't load
    - slow speed within the LAN when copying files from workstation to workstation. I would expect extremely fast data transfer workstation-to-workstation / server-to-workstation in this simple network.

    Several things I need to admit: I'm not primarily a network guy. Funding is relatively low, and I need to be the guy that finds the solution. I understand most of the terminology and most of the technology; implementation is where I fail due to lack of experience.

    Getting to the point: I'm wondering whether experienced network admins think that our small network should be sufficiently served with our current hardware if configured properly, or whether we should purchase new equipment and start fresh. If starting fresh is the plan, whatever that new equipment may be is likely a different question entirely. If I haven't provided enough information, I will happily do some troubleshooting and update with the results. I have experience using Wireshark and some other tools. Please let me know what you think would be most helpful, and thanks in advance.

    EDIT: I forgot to add that the Cisco appliance will not finish loading the SDM Express console. It hangs every time at "populating modules... DHCP", then eventually crashes and closes. I've rebooted the hardware and this still happens.
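    One cheap first step before buying anything is to measure raw workstation-to-workstation throughput, so LAN problems can be separated from internet problems. iperf does this with two machines (a sketch; the address is hypothetical), and note that the 871W's Ethernet ports are 10/100, so anything crossing the router tops out below gigabit regardless of cabling:

        # on workstation A
        iperf -s

        # on workstation B, pointing at A's address
        iperf -c 192.168.1.50 -t 30 -i 5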

  • CUPS Web Admin Error 500 Unknown

    - by Floyd Resler
    I keep getting a 500 Unknown error whenever I navigate off the home page of my CUPS web admin. I'm sure I have something misconfigured, but I'm not sure what. Here's my configuration:

        # "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $"
        #
        # Sample configuration file for the CUPS scheduler. See "man cupsd.conf" for a
        # complete description of this file.

        # Log general information in error_log - change "warn" to "debug"
        # for troubleshooting...
        LogLevel warn

        # Administrator user group...
        SystemGroup lpadmin sys root

        # Only listen for connections from the local machine.
        Listen 192.168.6.101:631
        Listen /var/run/cups/cups.sock
        ServerName 192.168.6.101

        # Show shared printers on the local network.
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS
        BrowseAddress 192.168.6.255

        # Default authentication type, when authentication is required...
        DefaultAuthType Basic

        # Restrict access to the server...
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Restrict access to the admin pages...
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Restrict access to configuration files...
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Set the default printer/job policies...
        # Job-related operations must be done by the owner or an administrator...
        Require user @OWNER @SYSTEM
        Order deny,allow

        # All administration operations require an administrator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # All printer operations require a printer operator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # Only the owner or an administrator can cancel or authenticate a job...
        Require user @OWNER @SYSTEM
        Order deny,allow

        Order deny,allow

        # Set the authenticated printer/job policies...
        # Job-related operations must be done by the owner or an administrator...
        AuthType Default
        Order deny,allow

        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow

        # All administration operations require an administrator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # All printer operations require a printer operator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # Only the owner or an administrator can cancel or authenticate a job...
        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow

        Order deny,allow

        # End of "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $".
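    A good first move with "500 Unknown" from the CUPS web interface is to take the config file's own advice and raise the log level, then watch the scheduler log while reproducing the failure (a sketch; the log path is the usual default and may differ by distribution):

        # in cupsd.conf, change:
        LogLevel debug

        # restart cupsd, then:
        tail -f /var/log/cups/error_log

    It is also worth noting that the configuration above shows no <Location> or <Policy> container tags around the access blocks; if those are genuinely absent from the file rather than lost in the paste, cupsd is applying the access directives in the wrong scope.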

  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that we have available to us before we send the base image in for replication by the rental company.

    Every year, presenters approach on site with up to gigs of data that need to be pushed to the room they will be presenting in. Sometimes it's only a few small files, such as a slide PDF, but it can be much larger, 5 GiB or more. Our current strategy for pushing these files is batch scripts and RoboCopy. For the large pushes, we actually use a BitTorrent client to generate a torrent file, and then we use the batch/RoboCopy step to push the torrent into a folder on the remote machines that is being monitored by an installed BT client. Often this data needs to be pushed immediately within a small time window. We have several machines in a control room, identical to the machines on the floor, that we use for these pushes.

    We occasionally need to execute a program on the remote machines, and we currently use batch and PsExec to handle this task. We would love to be able to respond to these last-minute pushes with "sorry, your own fault", but it won't happen. The BT method has given us a much faster response time, but the whole batch process can get messy when multiple jobs are being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this scale, plus it is really quite expensive for a once-a-year task like this.

    EDIT: There is a hard requirement that the remote machines on the floor run Windows. The control machines have no hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers. Is multicast or BitTorrent the better way to go on this? Is there another protocol that might work better?
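    For comparison when weighing alternatives, the direct-copy leg of the current setup can be a single RoboCopy invocation per target; a sketch with hypothetical share names (/MIR mirrors the staging tree, /Z makes the copy restartable, and /R with /W keeps one dead host from stalling the whole batch):

        robocopy \\control1\staging \\room101-pc01\d$\content /MIR /Z /R:2 /W:5 /LOG:push-room101-pc01.log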

  • HTTP 2.0: Microsoft proposes "HTTP Speed+Mobility" to speed up the Web

    Microsoft wants to make the Web faster and has proposed elements for the HTTP 2.0 protocol to the IETF (Internet Engineering Task Force), the body in charge of Internet standardization. After Google with its SPDY project, which aims to double the speed of the Web by layering adjustments on top of the HTTP protocol, it is now the Redmond firm's turn to show its interest in the future of the Web. In a recently published blog post, the company presents its proposal "HTTP Speed+Mobility", which will be submitted to the HTTPbis working group.

  • Why is Clean Code suggesting avoiding protected variables?

    - by Matsemann
    Clean Code suggests avoiding protected variables in the "Vertical Distance" section of the "Formatting" chapter: Concepts that are closely related should be kept vertically close to each other. Clearly this rule doesn't work for concepts that belong in separate files. But then closely related concepts should not be separated into different files unless you have a very good reason. Indeed, this is one of the reasons that protected variables should be avoided. What is the reasoning?
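    To see the vertical-distance argument in code form, here is a small illustrative sketch (hypothetical classes, in the book's own Java):

        // Account.java
        public abstract class Account {
            protected int balance;   // state shared with every subclass
        }

        // OverdraftAccount.java - a different file, far from the declaration
        public class OverdraftAccount extends Account {
            void withdraw(int amount) {
                balance -= amount;   // the meaning of 'balance' lives in another file
            }
        }

    The field and the code that depends on it end up in separate files, which is exactly the vertical separation of closely related concepts the chapter argues against; keeping state private and exposing behavior keeps the coupled pieces together.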

  • Apache: redirect to https before AUTH for server-status

    - by Putnik
    I want to force HTTPS and basic auth for the server-status output (mod_status). If I enable auth and the user asks for http://site/server-status, Apache first asks for the password, then redirects to HTTPS, then asks for the password again. This question is similar to "Apache - Redirect to https before AUTH" and "force https with apache before .htpasswd", but I cannot get it to work because we are speaking not about a generic folder but about a Location structure. My config (shortened) is as follows:

        <Location /server-status>
            SSLRequireSSL
            <IfModule mod_rewrite.c>
                RewriteEngine on
                RewriteBase /server-status
                RewriteCond %{HTTPS} off
                RewriteCond %{SERVER_PORT} 80
                RewriteRule ^ - [E=nossl]
                RewriteRule (.*) https://site/server-status [R=301,L]
            </IfModule>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from localhost ip6-localhost
            Allow from 1.2.3.0/24
            Allow from env=nossl
            AuthUserFile /etc/httpd/status-htpasswd
            AuthName "Password protected"
            AuthType Basic
            Require valid-user
            Satisfy any
        </Location>

    I assume "Allow from env=nossl" should allow everyone with RewriteCond %{HTTPS} off on server port 80 and then force the redirect, but it does not work. Please note, I do not want to force SSL for the whole site, only for /server-status. If it matters, the server hosts several sites. What am I doing wrong? Thank you.
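    Inside a single <Location>, Apache runs authentication before mod_rewrite gets a chance to redirect, which is why the password prompt appears twice. A pattern that is often suggested (a sketch, untested against this config) is to do the redirect in the plain-HTTP virtual host, where no auth is configured, and keep SSLRequireSSL plus the auth directives only on the HTTPS side:

        # in the *:80 vhost - no auth here, just the redirect
        RewriteEngine On
        RewriteRule ^/server-status$ https://site/server-status [R=301,L]

        # the <Location /server-status> block with auth stays in the *:443 vhost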

  • Default /server-status location not inheriting in Apache

    - by rmalayter
    I'm having a problem getting /server-status to work with Apache 2.2.14 on Ubuntu Server 10.04.1. The default symlinks for status.load and status.conf are present in /etc/apache2/mods-enabled, and status.conf does include the /server-status location with the appropriate allow/deny directives. However, the only vhost I have in sites-enabled looks like this. The idea is to proxy anything with a Tomcat URL to a cluster of Tomcats, and anything else to an IIS box. However, this seems to result in requests for /server-status being sent to IIS. Copying the /server-status block explicitly into the vhost configuration doesn't seem to help, no matter what order I use.

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

            <Proxy balancer://tomcatCluster>
                BalancerMember ajp://qa-app1:8009 route=1
                BalancerMember ajp://qa-app2:8009 route=2
                ProxySet stickysession=ROUTEID
            </Proxy>

            <ProxyMatch "^/(mytomcatappA|mytomcatappB)/(.*)">
                ProxyPassMatch balancer://tomcatCluster/$1/$2
            </ProxyMatch>

            # proxy anything that's not a tomcat URL to IIS on port 80
            <Proxy />
                ProxyPass http://qa-web1/
            </Proxy>
        </VirtualHost>

    Is it possible to make the default /server-status location work within a vhost configuration that has a "default" proxy rule?
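    mod_proxy lets specific paths be carved out of a catch-all with an exclamation-mark exclusion, which must appear before the blanket rule; a sketch of the usual pattern (adapted to this vhost, not verified on it):

        # exclude the status handler from proxying
        ProxyPass /server-status !

        # then the existing catch-all to IIS
        ProxyPass / http://qa-web1/

    With the exclusion in place, requests for /server-status stay on the local server and fall through to mod_status as configured in status.conf.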
