Search Results

Search found 980 results on 40 pages for 'directive'.


  • Add trailing slash when it's missing in nginx

    - by vvanscherpenseel
    I'm running Magento on Nginx using this config: http://www.magentocommerce.com/wiki/1_-_installation_and_configuration/configuring_nginx_for_magento. Now I want to 301 all URLs without trailing slash to their counterpart that includes a trailing slash. For example: /contacts to /contacts/. I've tried virtually all the nginx directives on this I could find, but to no avail. For example, the directive specified in nginx- Rewrite URL with Trailing Slash leads to a redirect to /index.php/. Which directive should I add and where?
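
    One widely circulated recipe for this (a sketch only, not verified against this particular Magento config) is a single server-level rewrite that 301s any URI containing no dot and no trailing slash, placed before the Magento front-controller rules so the request is redirected before it is rewritten to /index.php:

        # Add a trailing slash to extension-less URIs (e.g. /contacts -> /contacts/)
        rewrite ^([^.]*[^/])$ $1/ permanent;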

    Read the article

  • Understanding RewriteCond in .htaccess files

    - by Paulo Bu
    I'm having problems understanding how the RewriteCond directive works. So far, it's pretty clear that it compares two strings to decide whether to apply a RewriteRule. I have this file: <IfModule rewrite_module> RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ app_dev.php </IfModule> This works for me, but I don't know why it works. So far I understand the RewriteCond directive as: if the value of REQUEST_FILENAME is NOT a file on the hard drive, then allow the rule. This doesn't make sense, because app_dev.php, after substitution, is a file on the hard drive. Anyway, could someone enlighten me on this issue? I am having a very hard time figuring out how this works.
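
    For reference, here is the same block with the evaluation order spelled out in comments; this is standard mod_rewrite behaviour, not anything specific to this file:

        <IfModule rewrite_module>
            RewriteEngine on
            # The condition is tested against the *incoming* request path
            # (e.g. /blog/post/1), before any substitution takes place.
            RewriteCond %{REQUEST_FILENAME} !-f
            # Only when that path is not an existing file is the request
            # rewritten to the front controller. On the internal re-run of
            # the rules, REQUEST_FILENAME is app_dev.php, the condition
            # fails, and the rewriting stops.
            RewriteRule ^(.*)$ app_dev.php
        </IfModule>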

    Read the article

  • nginx conditional Accept header

    - by manu_v
    Some mobile devices send the following incorrect requests to our servers: GET / HTTP/1.0 Accept: User-Agent: xxx The empty Accept header causes our Ruby on Rails server to throw back a 500 error. In Apache, the following directive allows us to rewrite the header before sending it to the RoR application server in order to cope with the broken devices: RequestHeader edit Accept ^$ "*/*" early We're currently setting up nginx, but achieving the same workaround is proving difficult. We are able to set: proxy_set_header Accept */*; However, this seems to have to be done unconditionally. Whenever we try: if ($http_accept !~ ".") { proxy_set_header Accept */*; } it complains with the message: "proxy_set_header" directive is not allowed here So, using nginx, how can we set the HTTP Accept header to */* when it is empty, before sending the request to the application server?
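
    The workaround usually suggested for this (a sketch, untested here) is to compute the header value with a map block, which is allowed at the http level, and then reference the resulting variable in the ordinary proxy_set_header:

        map $http_accept $fixed_accept {
            default $http_accept;   # pass real Accept headers through untouched
            ""      "*/*";          # substitute */* when Accept is empty
        }

        server {
            location / {
                proxy_set_header Accept $fixed_accept;
                # ... existing proxy_pass configuration ...
            }
        }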

    Read the article

  • htaccess filesMatch exclusion

    - by Hikari
    I have the following directive in my htaccess <filesMatch "\.(gif|jpe?g|png|js|css|swf|php|ico|txt|pdf|xml|html?)$"> FileETag None <ifModule mod_headers.c> Header unset ETag Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate" Header set Pragma "no-cache" Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT" </ifModule> </filesMatch> I copied that regex from somewhere on the Web months ago. It should add those headers to any HTTP response that does NOT have those extensions. But it's not working: it's adding them to every response. I also need to create another directive to add Header set Cache-Control "max-age=3600, public" to responses of files that DO have them. Could anybody help me make proper FilesMatch regexes?
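
    Worth noting: as written, that FilesMatch applies to files that do have one of the listed extensions, not to files that lack them, which matches the behaviour described. A sketch of the two directives being asked for (untested; requests that are not mapped to a real file, such as rewritten URLs, are not covered by FilesMatch at all):

        # Cacheable responses: files that DO end in the listed extensions.
        <FilesMatch "\.(gif|jpe?g|png|js|css|swf|php|ico|txt|pdf|xml|html?)$">
            Header set Cache-Control "max-age=3600, public"
        </FilesMatch>

        # Everything else gets the no-cache headers, using a negative lookahead
        # (FilesMatch patterns are PCRE, so lookaheads are available).
        <FilesMatch "^(?!.*\.(gif|jpe?g|png|js|css|swf|php|ico|txt|pdf|xml|html?)$)">
            FileETag None
            Header unset ETag
            Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
            Header set Pragma "no-cache"
            Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
        </FilesMatch>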

    Read the article

  • Serving protected files using Nginx's X-Accel-Redirect header

    - by andybak
    I'm trying to serve protected files using this directive in my nginx.conf: location /secure/ { internal; alias /home/ldr/webapps/nginx/app/secure/; } I'm passing in paths in the form: "/myfile.doc" and the file's path would be: /home/ldr/webapps/nginx/app/secure/myfile.doc I just get 404's when I access "http: //myserver/secure/myfile.doc" (space inserted after http to stop ServerFault converting it to a link) I've tried taking the trailing / off the location directive and that makes no difference. Two questions: How do I fix it! How can I debug problems like this myself? How can I get Nginx to report which path it's looking for? error.log shows nothing and access.log just tells me which url is being requested - this is the bit I already know! It's no fun trying things randomly without any feedback. Here's my entire nginx.conf: daemon off; worker_processes 2; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server { listen 21534; server_name my.server.com; client_max_body_size 5m; location /media/ { alias /home/ldr/webapps/nginx/app/media/; } location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; fastcgi_pass unix:/home/ldr/webapps/nginx/app/myproject/django.sock; fastcgi_pass_header Authorization; fastcgi_hide_header X-Accel-Redirect; fastcgi_hide_header X-Sendfile; fastcgi_intercept_errors off; include fastcgi_params; } location /secure { internal; alias /home/ldr/webapps/nginx/app/secure/; } } } EDIT: I'm trying some of the suggestions here So I've tried: location /secure/ { internal; alias /home/ldr/webapps/nginx/app/; } both with and without the trailing slash on location. I've also tried moving this block before the "location /" directive. The page I linked to has ^~ after 'location' giving: location ^~ /secure/ { ...etc... Not sure what that signifies but it didn't work either!
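
    On the debugging question: a commonly suggested step (it assumes an nginx build with --with-debug) is to raise the error log to debug level, which records the filesystem path every request and internal redirect is mapped to:

        error_log /home/ldr/webapps/nginx/error.log debug;

    It may also be worth confirming that the application sends the header as a URI (X-Accel-Redirect: /secure/myfile.doc) rather than as a filesystem path, since the alias is applied to the URI, not to the path.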

    Read the article

  • JavaScript local alias pattern

    - by Bertrand Le Roy
    Here’s a little pattern that is fairly common among JavaScript developers but not very well known to C# developers or to people who only do occasional JavaScript development. In C#, you can use a “using” directive to create aliases of namespaces or bring them into the current scope: namespace Fluent.IO { using System; using System.Collections; using SystemIO = System.IO; In JavaScript, the only scoping construct is the function, but it can also be used as a local aliasing device, just like the using directive above: (function($, dv) { $("#foo").doSomething(); var a = new dv("#bar"); })(jQuery, Sys.UI.DataView); This piece of code makes the jQuery object accessible through the $ alias throughout the code that lives inside the function, without polluting the global scope with another variable. The benefit is even bigger for the dv alias, which stands here for Sys.UI.DataView: think of the reduction in file size if you use that one a lot, or of how much less you’ll have to type… I’ve gotten into the habit of putting almost all of my code, even page-specific code, inside one of those closures, not just because it keeps the global scope clean but mostly because of that handy aliasing capability.

    Read the article

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle tested on this and failed to achieve my goal which is to deny all access to all directories except the Public directory and only allow access to all all other directories with specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled: sudo a2enmod ssl sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod rewrite sudo a2ensite default-ssl Outside of the script I copied the sites-available to sites-enabled then reloaded Apache. I have a directory created for Railo cmfl located at /var/www/Railo/ Navigating the browser to http ://Server_IP_Address/Railo forces ssl and relocates to https ://Server_IP_Address/Railo which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appears to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this: Railo error Public NotPublic Sandbox These are my sites-enabled configurations: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www #Default Deny All to prevent walking backwards in file system Alias /Railo/ "/var/www/Railo/" <Directory ~ ".*/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc RewriteEngine on RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R] </VirtualHost> and <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www Alias /Railo/ "/var/www/Railo/" <Directory ~ "/var/www/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html #Proxy .cfm and cfc requests to Railo ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1 ProxyPassReverse / http://127.0.0.1:8888/ #Deny access to admin except for local clients <Location /railo-context/admin/> Order deny,allow Deny from all # Allow from <Omitted> # Allow from <Omitted> Allow from 127.0.0.1 </Location> </VirtualHost> </IfModule> The apache2.conf includes the following: # Include the virtual host configurations: Include sites-enabled/ <IfModule !mod_jk.c> LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so </IfModule> <IfModule mod_jk.c> JkMount /*.cfm ajp13 JkMount /*.cfc ajp13 JkMount /*.do ajp13 JkMount /*.jsp ajp13 JkMount /*.cfchart ajp13 JkMount /*.cfm/* ajp13 JkMount /*.cfml/* ajp13 # Flex Gateway Mappings # JkMount /flex2gateway/* ajp13 # JkMount /flashservices/gateway/* ajp13 # JkMount /messagebroker/* ajp13 JkMountCopy all JkLogFile /var/log/apache2/mod_jk.log </IfModule> I believe I understand most of this except the jk_module inclusion which I've noticed has an error that shows up in the logs that I can't sort out: [warn] No JkShmFile defined in httpd.conf. 
    Using default /etc/apache2/logs/jk-runtime-status I've checked my regular expression against the directory paths with RegexBuddy just to be sure I had it right. The problem doesn't appear to be regex related, although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking Railo admin site access.
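
    One hedged alternative worth trying here: match on the URL space with LocationMatch instead of on the filesystem path, so the rule is applied regardless of how the Alias and the regex Directory sections get merged. The IP below is a placeholder for the trusted addresses:

        <LocationMatch "^/Railo/(?!Public/)">
            Order Deny,Allow
            Deny from all
            Allow from 203.0.113.10
        </LocationMatch>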

    Read the article

  • Properly force SSL with .htaccess, no double authentication

    - by cwd
    I'm trying to force SSL with .htaccess on a shared host. This means I only have access to .htaccess and not the vhosts config. I know you can put a rule in the VirtualHost config file to force SSL which will be picked up there (and acted upon first), preventing double authentication, but I can't get to that. Here's the progress I've made: Config 1 This works pretty well, but it does force double authentication if you visit http://site.com - once for http and then once for https. Once you are logged in, it automatically redirects http://site.com/page1.html to the https counterpart just fine: RewriteEngine On RewriteCond %{HTTPS} !=on RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] RewriteEngine on RewriteCond %{HTTP_HOST} !(^www\.site\.com*)$ RewriteRule (.*) https://www.site.com$1 [R=301,L] AuthName "Locked" AuthUserFile "/home/.htpasswd" AuthType Basic require valid-user Config 2 If I add this to the top of the file, it works a lot better in that it will switch to SSL before prompting for the password: SSLOptions +StrictRequire SSLRequireSSL SSLRequire %{HTTP_HOST} eq "site.com" ErrorDocument 403 https://site.com It's clever how it uses the SSLRequireSSL option and the ErrorDocument 403 to redirect to the secure version of the site. My only complaint is that if you try to access http://site.com/page1.html it will redirect to https://site.com/ So it is forcing SSL without a double login, but it is not properly forwarding non-SSL resources to their SSL counterparts. Regarding the first config, Insyte mentioned "using mod_rewrite to perform a simple redirect is a bit of overkill. Use the Redirect directive instead. It's possible this may even fix your problem, as I believe mod_rewrite rules are some of the last directives to be processed, just before the file is actually grabbed from the filesystem" I have had no luck finding a force-SSL option for the Redirect directive and so have been unable to test this theory.
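
    As an aside, the two rewrite blocks in Config 1 can be collapsed into one. This is only a sketch and it does not remove the double prompt by itself, because per-directory rewrites are processed after authentication, but it does keep the requested path intact for both the HTTPS and the www redirect:

        RewriteEngine On
        # Redirect anything that is not HTTPS on the canonical www host.
        RewriteCond %{HTTPS} !=on [OR]
        RewriteCond %{HTTP_HOST} !^www\.site\.com$ [NC]
        RewriteRule ^ https://www.site.com%{REQUEST_URI} [R=301,L]

        AuthName "Locked"
        AuthUserFile "/home/.htpasswd"
        AuthType Basic
        Require valid-user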

    Read the article

  • How do I map some subdirectories to run alongside a Drupal site?

    - by paradroid
    I have a Drupal site running on Apache using the following vhosts file: <VirtualHost xx.xx.xx.xx:80> ServerName bananas.net ServerAlias www.bananas.net DocumentRoot /var/www/drupal/ RewriteEngine On RewriteCond %{HTTP_HOST} !=bananas.net [NC] RewriteRule ^(.*)$ http://bananas.net$1 [L,R=301] <Directory /var/www/bananas.net/> Options -Indexes FollowSymlinks AllowOverride All Order allow,deny Allow from all </Directory> CustomLog ${APACHE_LOG_DIR}/access.log combined ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost> I set it up some time ago, so I am not sure what the <Directory /var/www/bananas.net/> directive was meant for. That directory is currently empty. With the vhosts file the way it is, does the Directory directive have any effect at all? I want to add some content which is separate from the Drupal site. How do I add sub-directories within /var/www/bananas.net/ which can be accessed alongside the Drupal site running at the root? As they have nothing to do with the Drupal site, I want to keep the files separate, but still using the same domain.
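
    For reference, a minimal sketch (the /extras name is hypothetical): alias a URL prefix to a directory outside the Drupal DocumentRoot. mod_alias mappings are resolved before the DocumentRoot mapping, so these requests never reach Drupal's .htaccess rewrite rules, and the content stays on the same domain:

        Alias /extras /var/www/bananas.net/extras
        <Directory /var/www/bananas.net/extras>
            Options -Indexes +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>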

    Read the article

  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never
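
    One reading of the handshake trace above, offered as a hypothesis rather than a diagnosis: the ClientHello lists AES-128 suites but no AES-256 suite (a stock JRE needs the unlimited-strength JCE policy files before it will offer 256-bit AES), while the TLSCipherSuite line restricts slapd to TLS_RSA_AES_256_CBC_SHA, leaving no cipher in common and causing the immediate close. A small diagnostic to see what a given JRE actually offers:

        // List the cipher suites this JRE enables by default, to compare against
        // the single suite allowed by TLSCipherSuite in slapd.conf.
        import javax.net.ssl.SSLSocketFactory;

        public class ListCiphers {
            public static void main(String[] args) {
                SSLSocketFactory factory =
                        (SSLSocketFactory) SSLSocketFactory.getDefault();
                for (String suite : factory.getDefaultCipherSuites()) {
                    System.out.println(suite);
                }
            }
        }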

    Read the article

  • OpenVPN - Windows 8 to Windows 2008 Server, not connecting

    - by niico
    I have followed this tutorial about setting up an OpenVPN Server on Windows Server - and a client on Windows (in this case Windows 8). The server appears to be running fine - but it is not connecting with this error: Mon Jul 22 19:09:04 2013 Warning: cannot open --log file: C:\Program Files\OpenVPN\log\my-laptop.log: Access is denied. (errno=5) Mon Jul 22 19:09:04 2013 OpenVPN 2.3.2 x86_64-w64-mingw32 [SSL (OpenSSL)] [LZO] [PKCS11] [eurephia] [IPv6] built on Jun 3 2013 Mon Jul 22 19:09:04 2013 MANAGEMENT: TCP Socket listening on [AF_INET]127.0.0.1:25340 Mon Jul 22 19:09:04 2013 Need hold release from management interface, waiting... Mon Jul 22 19:09:05 2013 MANAGEMENT: Client connected from [AF_INET]127.0.0.1:25340 Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'state on' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'log all on' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'hold off' Mon Jul 22 19:09:05 2013 MANAGEMENT: CMD 'hold release' Mon Jul 22 19:09:05 2013 Socket Buffers: R=[65536->65536] S=[65536->65536] Mon Jul 22 19:09:05 2013 UDPv4 link local: [undef] Mon Jul 22 19:09:05 2013 UDPv4 link remote: [AF_INET]66.666.66.666:9999 Mon Jul 22 19:09:05 2013 MANAGEMENT: >STATE:1374494945,WAIT,,, Mon Jul 22 19:10:05 2013 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity) Mon Jul 22 19:10:05 2013 TLS Error: TLS handshake failed Mon Jul 22 19:10:05 2013 SIGUSR1[soft,tls-error] received, process restarting Mon Jul 22 19:10:05 2013 MANAGEMENT: >STATE:1374495005,RECONNECTING,tls-error,, Mon Jul 22 19:10:05 2013 Restart pause, 2 second(s) Note I have changed the IP and port no (it uses a non-standard port for security reasons). That port is open on the hardware firewall. The server logs are showing a connection attempt from my client: TLS: Initial packet from [AF_INET]118.68.xx.xx:65011, sid=081af4ed xxxxxxxx Mon Jul 22 14:19:15 2013 118.68.xx.xx:65011 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity) How can I problem solve this & find the problem? Thx Update - Client config file: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. ;proto tcp proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. remote 00.00.00.00 1194 ;remote 00.00.00.00 9999 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. 
Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca "C:\\Program Files\\OpenVPN\\config\\ca.crt" cert "C:\\Program Files\\OpenVPN\\config\\my-laptop.crt" key "C:\\Program Files\\OpenVPN\\config\\my-laptop.key" # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 Server config file: ################################################# # Sample OpenVPN 2.0 config file for # # multi-client server. # # # # This file is for the server side # # of a many-clients <-> one-server # # OpenVPN configuration. # # # # OpenVPN also supports # # single-machine <-> single-machine # # configurations (See the Examples page # # on the web site for more info). # # # # This config should work on Windows # # or Linux/BSD systems. Remember on # # Windows to quote pathnames and use # # double backslashes, e.g.: # # "C:\\Program Files\\OpenVPN\\config\\foo.key" # # # # Comments are preceded with '#' or ';' # ################################################# # Which local IP address should OpenVPN # listen on? (optional) ;local 00.00.00.00 # Which TCP/UDP port should OpenVPN listen on? # If you want to run multiple OpenVPN instances # on the same machine, use a different port # number for each one. You will need to # open up this port on your firewall. std 1194 port 1194 # TCP or UDP server? ;proto tcp proto udp # "dev tun" will create a routed IP tunnel, # "dev tap" will create an ethernet tunnel. # Use "dev tap0" if you are ethernet bridging # and have precreated a tap0 virtual interface # and bridged it with your ethernet interface. # If you want to control access policies # over the VPN, you must create firewall # rules for the the TUN/TAP interface. # On non-Windows systems, you can give # an explicit unit number, such as tun0. # On Windows, use "dev-node" for this. 
# On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. ;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel if you # have more than one. On XP SP2 or higher, # you may need to selectively disable the # Windows firewall for the TAP adapter. # Non-Windows systems usually don't need this. ;dev-node MyTap # SSL/TLS root certificate (ca), certificate # (cert), and private key (key). Each client # and the server must have their own cert and # key file. The server and all clients will # use the same ca file. # # See the "easy-rsa" directory for a series # of scripts for generating RSA certificates # and private keys. Remember to use # a unique Common Name for the server # and each of the client certificates. # # Any X509 key management system can be used. # OpenVPN can also use a PKCS #12 formatted key file # (see "pkcs12" directive in man page). ca "C:\\Program Files\\OpenVPN\\config\\ca.crt" cert "C:\\Program Files\\OpenVPN\\config\\server.crt" key "C:\\Program Files\\OpenVPN\\config\\server.key" # Diffie hellman parameters. # Generate your own with: # openssl dhparam -out dh1024.pem 1024 # Substitute 2048 for 1024 if you are using # 2048 bit keys. dh "C:\\Program Files\\OpenVPN\\config\\dh2048.pem" # Configure server mode and supply a VPN subnet # for OpenVPN to draw client addresses from. # The server will take 10.8.0.1 for itself, # the rest will be made available to clients. # Each client will be able to reach the server # on 10.8.0.1. Comment this line out if you are # ethernet bridging. See the man page for more info. server 10.8.0.0 255.255.255.0 # Maintain a record of client <-> virtual IP address # associations in this file. If OpenVPN goes down or # is restarted, reconnecting clients can be assigned # the same virtual IP address from the pool that was # previously assigned. ifconfig-pool-persist ipp.txt # Configure server mode for ethernet bridging. # You must first use your OS's bridging capability # to bridge the TAP interface with the ethernet # NIC interface. Then you must manually set the # IP/netmask on the bridge interface, here we # assume 10.8.0.4/255.255.255.0. Finally we # must set aside an IP range in this subnet # (start=10.8.0.50 end=10.8.0.100) to allocate # to connecting clients. Leave this line commented # out unless you are ethernet bridging. ;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100 # Configure server mode for ethernet bridging # using a DHCP-proxy, where clients talk # to the OpenVPN server-side DHCP server # to receive their IP address allocation # and DNS server addresses. You must first use # your OS's bridging capability to bridge the TAP # interface with the ethernet NIC interface. # Note: this mode only works on clients (such as # Windows), where the client-side TAP adapter is # bound to a DHCP client. ;server-bridge # Push routes to the client to allow it # to reach other private subnets behind # the server. Remember that these # private subnets will also need # to know to route the OpenVPN client # address pool (10.8.0.0/255.255.255.0) # back to the OpenVPN server. ;push "route 192.168.10.0 255.255.255.0" ;push "route 192.168.20.0 255.255.255.0" # To assign specific IP addresses to specific # clients or if a connecting client has a private # subnet behind it that should also have VPN access, # use the subdirectory "ccd" for client-specific # configuration files (see man page for more info). 
# EXAMPLE: Suppose the client # having the certificate common name "Thelonious" # also has a small subnet behind his connecting # machine, such as 192.168.40.128/255.255.255.248. # First, uncomment out these lines: ;client-config-dir ccd ;route 192.168.40.128 255.255.255.248 # Then create a file ccd/Thelonious with this line: # iroute 192.168.40.128 255.255.255.248 # This will allow Thelonious' private subnet to # access the VPN. This example will only work # if you are routing, not bridging, i.e. you are # using "dev tun" and "server" directives. # EXAMPLE: Suppose you want to give # Thelonious a fixed VPN IP address of 10.9.0.1. # First uncomment out these lines: ;client-config-dir ccd ;route 10.9.0.0 255.255.255.252 # Then add this line to ccd/Thelonious: # ifconfig-push 10.9.0.1 10.9.0.2 # Suppose that you want to enable different # firewall access policies for different groups # of clients. There are two methods: # (1) Run multiple OpenVPN daemons, one for each # group, and firewall the TUN/TAP interface # for each group/daemon appropriately. # (2) (Advanced) Create a script to dynamically # modify the firewall in response to access # from different clients. See man # page for more info on learn-address script. ;learn-address ./script # If enabled, this directive will configure # all clients to redirect their default # network gateway through the VPN, causing # all IP traffic such as web browsing and # and DNS lookups to go through the VPN # (The OpenVPN server machine may need to NAT # or bridge the TUN/TAP interface to the internet # in order for this to work properly). ;push "redirect-gateway def1 bypass-dhcp" # Certain Windows-specific network settings # can be pushed to clients, such as DNS # or WINS server addresses. CAVEAT: # http://openvpn.net/faq.html#dhcpcaveats # The addresses below refer to the public # DNS servers provided by opendns.com. ;push "dhcp-option DNS 208.67.222.222" ;push "dhcp-option DNS 208.67.220.220" # Uncomment this directive to allow differenta # clients to be able to "see" each other. # By default, clients will only see the server. # To force clients to only see the server, you # will also need to appropriately firewall the # server's TUN/TAP interface. ;client-to-client # Uncomment this directive if multiple clients # might connect with the same certificate/key # files or common names. This is recommended # only for testing purposes. For production use, # each client should have its own certificate/key # pair. # # IF YOU HAVE NOT GENERATED INDIVIDUAL # CERTIFICATE/KEY PAIRS FOR EACH CLIENT, # EACH HAVING ITS OWN UNIQUE "COMMON NAME", # UNCOMMENT THIS LINE OUT. ;duplicate-cn # The keepalive directive causes ping-like # messages to be sent back and forth over # the link so that each side knows when # the other side has gone down. # Ping every 10 seconds, assume that remote # peer is down if no ping received during # a 120 second time period. keepalive 10 120 # For extra security beyond that provided # by SSL/TLS, create an "HMAC firewall" # to help block DoS attacks and UDP port flooding. # # Generate with: # openvpn --genkey --secret ta.key # # The server and each client must have # a copy of this key. # The second parameter should be '0' # on the server and '1' on the clients. ;tls-auth ta.key 0 # This file is secret # Select a cryptographic cipher. # This config item must be copied to # the client config file as well. ;cipher BF-CBC # Blowfish (default) ;cipher AES-128-CBC # AES ;cipher DES-EDE3-CBC # Triple-DES # Enable compression on the VPN link. 
# If you enable it here, you must also # enable it in the client config file. comp-lzo # The maximum number of concurrently connected # clients we want to allow. ;max-clients 100 # It's a good idea to reduce the OpenVPN # daemon's privileges after initialization. # # You can uncomment this out on # non-Windows systems. ;user nobody ;group nobody # The persist options will try to avoid # accessing certain resources on restart # that may no longer be accessible because # of the privilege downgrade. persist-key persist-tun # Output a short status file showing # current connections, truncated # and rewritten every minute. status openvpn-status.log # By default, log messages will go to the syslog (or # on Windows, if running as a service, they will go to # the "\Program Files\OpenVPN\log" directory). # Use log or log-append to override this default. # "log" will truncate the log file on OpenVPN startup, # while "log-append" will append to it. Use one # or the other (but not both). ;log openvpn.log ;log-append openvpn.log # Set the appropriate level of log # file verbosity. # # 0 is silent, except for fatal errors # 4 is reasonable for general usage # 5 and 6 can help to debug connection problems # 9 is extremely verbose verb 3 # Silence repeating messages. At most 20 # sequential messages of the same message # category will be output to the log. ;mute 20 I have changed IP's for security
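
    Given that the server log does show the client's initial packet arriving, one plausible explanation is a failure on the reply path (return UDP traffic blocked or NATed to the wrong port) or a mismatch between the forwarded port and the port directive; the stray "std 1194" token in front of the port line in the server config also looks like a typo or paste artifact worth removing. A quick way to watch both directions on the server (interface name and port are placeholders):

        # Confirm whether replies actually leave the server for the client's address.
        tcpdump -n -i eth0 udp port 9999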

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I currently set up a script to restart my http servers + php5 fpm but can't get it to work. I have googled and have found that mostly permissions are the problems of my error but can't figure it out. I start my script using /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http This is the output in my syslog on the node I want to restart Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028 Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts Jun 27 06:29:35 bart nrpe[8926]: Handling the connection... Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run... Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output: Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed. If I run the command myself it runs fine (but asks for a password) (nagios user) This are the script permission and the script contents. -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart #!/bin/bash echo "ok" /etc/init.d/nginx stop /etc/init.d/nginx start /etc/init.d/php5-fpm stop /etc/init.d/php5-fpm start echo "done" I also added this line to visudo nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ My local nagios nrpe.cfg ############################################################################# # Sample NRPE Config File # Written by: Ethan Galstad ([email protected]) # # # NOTES: # This is a sample configuration file for the NRPE daemon. It needs to be # located on the remote host that is running the NRPE daemon, not the host # from which the check_nrpe client is being executed. ############################################################################# # LOG FACILITY # The syslog facility that should be used for logging purposes. log_facility=daemon # PID FILE # The name of the file in which the NRPE daemon should write it's process ID # number. The file is only written if the NRPE daemon is started by the root # user and is running in standalone mode. pid_file=/var/run/nagios/nrpe.pid # PORT NUMBER # Port number we should wait for connections on. # NOTE: This must be a non-priviledged port (i.e. > 1024). # NOTE: This option is ignored if NRPE is running under either inetd or xinetd server_port=5666 # SERVER ADDRESS # Address that nrpe should bind to in case there are more than one interface # and you do not want nrpe to bind on all interfaces. # NOTE: This option is ignored if NRPE is running under either inetd or xinetd #server_address=127.0.0.1 # NRPE USER # This determines the effective user that the NRPE daemon should run as. # You can either supply a username or a UID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_user=nagios # NRPE GROUP # This determines the effective group that the NRPE daemon should run as. # You can either supply a group name or a GID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_group=nagios # ALLOWED HOST ADDRESSES # This is an optional comma-delimited list of IP address or hostnames # that are allowed to talk to the NRPE daemon. # # Note: The daemon only does rudimentary checking of the client's IP # address. I would highly recommend adding entries in your /etc/hosts.allow # file to allow only the specified host to connect to the port # you are running this daemon on. 
# # NOTE: This option is ignored if NRPE is running under either inetd or xinetd allowed_hosts=127.0.0.1,192.168.133.17 # COMMAND ARGUMENT PROCESSING # This option determines whether or not the NRPE daemon will allow clients # to specify arguments to commands that are executed. This option only works # if the daemon was configured with the --enable-command-args configure script # option. # # *** ENABLING THIS OPTION IS A SECURITY RISK! *** # Read the SECURITY file for information on some of the security implications # of enabling this variable. # # Values: 0=do not allow arguments, 1=allow command arguments dont_blame_nrpe=0 # COMMAND PREFIX # This option allows you to prefix all commands with a user-defined string. # A space is automatically added between the specified prefix string and the # command line from the command definition. # # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! *** # Usage scenario: # Execute restricted commmands using sudo. For this to work, you need to add # the nagios user to your /etc/sudoers. An example entry for alllowing # execution of the plugins from might be: # # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # # This lets the nagios user run all commands in that directory (and only them) # without asking for a password. If you do this, make sure you don't give # random users write access to that directory or its contents! command_prefix=/usr/bin/sudo # DEBUGGING OPTION # This option determines whether or not debugging messages are logged to the # syslog facility. # Values: 0=debugging off, 1=debugging on debug=1 # COMMAND TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # allow plugins to finish executing before killing them off. command_timeout=60 # CONNECTION TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # wait for a connection to be established before exiting. This is sometimes # seen where a network problem stops the SSL being established even though # all network sessions are connected. This causes the nrpe daemons to # accumulate, eating system resources. Do not set this too low. connection_timeout=300 # WEEK RANDOM SEED OPTION # This directive allows you to use SSL even if your system does not have # a /dev/random or /dev/urandom (on purpose or because the necessary patches # were not applied). The random number generator will be seeded from a file # which is either a file pointed to by the environment valiable $RANDFILE # or $HOME/.rnd. If neither exists, the pseudo random number generator will # be initialized and a warning will be issued. # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness #allow_weak_random_seed=1 # INCLUDE CONFIG FILE # This directive allows you to include definitions from an external config file. #include=<somefile.cfg> # INCLUDE CONFIG DIRECTORY # This directive allows you to include definitions from config files (with a # .cfg extension) in one or more directories (with recursion). #include_dir=<somedirectory> #include_dir=<someotherdirectory> # COMMAND DEFINITIONS # Command definitions that this daemon will run. Definitions # are in the following format: # # command[<command_name>]=<command_line> # # When the daemon receives a request to return the results of <command_name> # it will execute the command specified by the <command_line> argument. # # Unlike Nagios, the command line cannot contain macros - it must be # typed exactly as it should be executed. 
# # Note: Any plugins that are used in the command lines must reside # on the machine that this daemon is running on! The examples below # assume that you have plugins installed in a /usr/local/nagios/libexec # directory. Also note that you will have to modify the definitions below # to match the argument format the plugins expect. Remember, these are # examples only! # The following examples use hardcoded command arguments... command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 # The following examples allow user-supplied arguments and can # only be used if the NRPE daemon was compiled with support for # command arguments *AND* the dont_blame_nrpe directive in this # config file is set to '1'. This poses a potential security risk, so # make sure you read the SECURITY file before doing this. #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$ #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$ #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$ #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$ command[restart_http]=/usr/lib/nagios/plugins/http-restart # # local configuration: # if you'd prefer, you can instead place directives here include=/etc/nagios/nrpe_local.cfg # # you can place your config snipplets into nrpe.d/ include_dir=/etc/nagios/nrpe.d/ My Sudoers files # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d Hopefully someone can help!
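
    Two hypotheses worth checking: NRPE only captures a command's stdout, so when sudo itself fails (for example because of a Defaults requiretty line, if present, or because the invoked path does not match the sudoers rule) the error goes to stderr and the plugin output comes back empty, which is exactly the "NRPE: Unable to read output" symptom. Reproducing the daemon's exact invocation as the nagios user usually surfaces the real error:

        # Run the command the way NRPE does: as the nagios user, through sudo.
        su -s /bin/bash -c "/usr/bin/sudo /usr/lib/nagios/plugins/http-restart" nagios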

    Read the article

  • Reference DLLs not loading in Visual Studio 2010

    - by Adam Haile
    I'm working on a C# 4.0 project in VS2010 and needed to use some older DLLs containing controls that were created with C# 3.5 in VS2008. When I first added the DLLs to the references, I was able to see the namespace via IntelliSense and create an instance of one of the controls, but when I build, it gives me the following error: The type or namespace name 'BCA' could not be found (are you missing a using directive or an assembly reference?) And I do have a using directive for that namespace already, which is now underlined in red, showing that VS cannot find it. And now IntelliSense won't pick up that namespace at all. I even tried adding the controls to the toolbox (which worked), but then when I drag them onto the GUI, it says that it cannot locate the DLL reference, even though it obviously knows where it is. I even tried changing the target framework to 3.5, but still with the same results. Any thoughts as to why this could be happening?
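
    One frequently reported cause of exactly this symptom (IntelliSense resolves the assembly, the compiler does not) is the target framework profile rather than the version: new VS2010 projects often default to the .NET Framework 4 Client Profile, which cannot resolve references that depend on the full framework. Checking the .csproj is a cheap first step; the elements below are a sketch of what to look for, and selecting the full ".NET Framework 4" in the project properties is the usual fix:

        <!-- In the .csproj: the full framework omits TargetFrameworkProfile,
             while the Client Profile looks like this. -->
        <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
        <TargetFrameworkProfile>Client</TargetFrameworkProfile>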

    Read the article

  • Problem creating site using Microsoft Visual Web Developer Express 2008

    - by Peter
    Hi, this is a very newbie question, sorry! I need to create an aspx website based on C# and am calling some web services based on some DLLs I already have. Before purchasing Visual Studio, I decided to try Microsoft Visual Web Developer Express (is this ok?), creating an ASP.NET Web Application based on Visual C#. I created the form to enter the data, which is submitted when clicking the process button. At this point I need to call stuff from the DLL, which I have added in the Solution Explorer via Add Reference, selecting the DLL from the COM list. But whenever I run the project, I always get the error "the type or namespace xxx cannot be found - maybe a using directive or an assembly reference is missing" when trying to create the object. What is my stupid mistake? Thanks!

    Read the article

  • Extract string from string using Regex

    - by stacker
    I want to extract the MasterPageFile value from the Page directive of a view page. I want the fastest way to do that, taking into account both very big aspx pages and very small ones. I think the best way is to do it in two stages: Extract the page directive section from the view (using Regex): <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %> Then, extract the value inside the MasterPageFile attribute. The result needs to be: ~/Views/Shared/Site.Master. Can someone help me implement this? I really want to use only regex, but I'm not a regex expert. Another thing: do you think regex is the fastest way to do this?
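    A minimal sketch of one way to do this in C# in a single pass (the class and pattern below are illustrative, and the regex assumes the attribute value is double-quoted inside the Page directive):

    using System;
    using System.Text.RegularExpressions;

    class MasterPageExtractor
    {
        // Grabs the MasterPageFile value straight out of the <%@ Page ... %> directive.
        private static readonly Regex MasterPageRegex = new Regex(
            @"<%@\s*Page\b[^>]*?\bMasterPageFile\s*=\s*""(?<master>[^""]*)""",
            RegexOptions.IgnoreCase | RegexOptions.Compiled);

        public static string ExtractMasterPageFile(string viewSource)
        {
            Match match = MasterPageRegex.Match(viewSource);
            return match.Success ? match.Groups["master"].Value : null;
        }

        static void Main()
        {
            string view = @"<%@ Page Language=""C#"" MasterPageFile=""~/Views/Shared/Site.Master"" Inherits=""System.Web.Mvc.ViewPage"" %>";
            Console.WriteLine(ExtractMasterPageFile(view)); // ~/Views/Shared/Site.Master
        }
    }

    Since the directive sits at the top of the view, matching it in one pass should make page size largely irrelevant; RegexOptions.Compiled only pays off if the extraction runs many times.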

    Read the article

  • RESTful issue with data access when using HTTP DELETE method ...

    - by Wilhelm Murdoch
    I'm having an issue accessing raw request information from PHP when accessing a script using the HTTP DELETE directive. I'm using a JS front end which is accessing a script using Ajax. This script is actually part of a RESTful API which I am developing. The endpoint in this example is: http://api.site.com/session This endpoint is used to generate an authentication token which can be used for subsequent API requests. Using the GET method on this URL along with a modified version of HTTP Basic Authentication will provide an access token for the client. This token must then be included in all other interactions with the service until it expires. Once a token is generated, it is passed back to the client in a format specified by an 'Accept' header which the client sends the service; in this case 'application/json'. Upon success it responds with an HTTP 200 Ok status code. Upon failure, it throws an exception using the HTTP 401 Authorization Required code. Now, when you want to delete a session, or 'log out', you hit the same URL, but with the HTTP DELETE directive. To verify access to this endpoint, the client must prove they were previously authenticated by providing the token they want to terminate. If they are 'logged in', the token and session are terminated and the service should respond with the HTTP 204 No Content status code; otherwise, they are greeted with the 401 exception again. Now, the problem I'm having is with removing sessions. With the DELETE directive, using Ajax, I can't seem to access any parameters I've set once the request hits the service. In this case, I'm looking for the parameter entitled 'token'. I look at the raw request headers using Firebug and I notice the 'Content-Length' header changes with the size of the token being sent. This tells me that the data is indeed being sent to the server. The question is, using PHP, how the hell do I access the parameter information? It's not a POST or GET request, so I can't access it as you normally would in PHP. The parameters are within the content portion of the request. I've tried looking in $_SERVER, but that shows me a limited set of headers. I tried 'apache_request_headers()', which gives me more detailed information, but still only for headers. I even tried 'file_get_contents('php://stdin');' and I get nothing. How can I access the content portion of a raw HTTP request? Sorry for the lengthy post, but I figured too much information is better than too little. :)
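    For what it's worth, the raw entity body of a DELETE request is normally read from php://input rather than php://stdin. A minimal sketch, assuming the client sends the token URL-encoded in the request body:

    <?php
    // Read the raw body of the DELETE request and pull out 'token'.
    $rawBody = file_get_contents('php://input');   // php://input, not php://stdin
    parse_str($rawBody, $params);
    $token = isset($params['token']) ? $params['token'] : null;

    if ($token === null) {
        header('HTTP/1.1 401 Authorization Required');
        exit;
    }

    // ...terminate the session identified by $token...
    header('HTTP/1.1 204 No Content');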

    Read the article

  • Appengine not liking my .jspx files

    - by Hans Westerbeek
    I have a little app that runs fine on the local dev appengine, but appengine itself is not processing my .jspx files. The jspx files are in WEB-INF, so they should not be excluded by appengine (as a static resource). I am using Apache Tiles to define my views, so the view source looks like this: <html xmlns:jsp="http://java.sun.com/JSP/Page" xmlns:c="http://java.sun.com/jsp/jstl/core" xmlns:tiles="http://tiles.apache.org/tags-tiles" > <jsp:output omit-xml-declaration="yes"/> <jsp:directive.page contentType="text/html;charset=UTF-8" /> <jsp:directive.page isELIgnored="false"/> (etc etc) How can I solve this problem?

    Read the article

  • Why can one aliased C# Type not be accessed by another?

    - by jdk
    good stuff // ok to alias a List Type using AliasStringList = System.Collections.Generic.List<string>; // and ok to alias a List of Lists like this using AliasListOfStringList1 = System.Collections.Generic.List<System.Collections.Generic.List<string>>; bad stuff // However **error** to alias another alias using AliasListOfStringList2 = System.Collections.Generic.List<AliasStringList>; Produces the compile error The type or namespace name 'AliasStringList' could not be found (are you missing a using directive or an assembly reference?) Note: this is the using directive, not the using statement.
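    The usual workaround, since using-alias directives in the same scope cannot refer to one another, is to expand the inner alias by hand. A small sketch:

    using AliasStringList = System.Collections.Generic.List<string>;
    // Spell the element type out in full instead of nesting one alias inside another:
    using AliasListOfStringList2 =
        System.Collections.Generic.List<System.Collections.Generic.List<string>>;

    class AliasDemo
    {
        static void Main()
        {
            AliasListOfStringList2 lists = new AliasListOfStringList2();
            lists.Add(new AliasStringList { "hello", "world" });
            System.Console.WriteLine(lists[0].Count); // prints 2
        }
    }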

    Read the article

  • How to get `gcc` to generate `bts` instruction for x86-64 from standard C?

    - by Norman Ramsey
    Inspired by a recent question, I'd like to know if anyone knows how to get gcc to generate the x86-64 bts instruction (bit test and set) on Linux x86-64 platforms, without resorting to inline assembly or to nonstandard compiler intrinsics. Related questions: Why doesn't gcc do this for a simple |= operation where the right-hand side has exactly 1 bit set? How to get bts using compiler intrinsics or the asm directive. Portability is more important to me than bts, so I won't use an asm directive, and if there's another solution, I prefer not to use compiler intrinsics. EDIT: The C source language does not support atomic operations, so I'm not particularly interested in getting an atomic test-and-set (even though that's the original reason for test-and-set to exist in the first place). If I want something atomic, I know I have no chance of doing it with standard C source: it has to be an intrinsic, a library function, or inline assembly. (I have implemented atomic operations in compilers that support multiple threads.)

    Read the article

  • Implicitly casting integer calculations to float in C++

    - by Ziddiri
    Is there any compiler that has a directive or a parameter to cast integer calculations to float implicitly? For example: float f = (1/3)*5; cout << f; Here "f" is "0", because the calculation's constants (1, 3, 5) are integers. I want the integer calculation converted with a compiler directive or parameter. I mean, I don't want to use explicit casts or the ".f" suffix, like this: float f = ((float)1/3)*5; or float f = (1.0f/3.0f)*5.0f; Do you know of any C/C++ compiler that has a parameter to do this without explicit casts or the ".f" suffix?

    Read the article

  • ngModel and component with isolated scope

    - by Artem Andreev
    I am creating a simple ui-datetime directive. It splits a JavaScript Date object into _date, _hours and _minutes parts. _date uses the jQuery UI datepicker; _hours and _minutes are number inputs. See example: http://jsfiddle.net/andreev_artem/nWsZp/3/ On github: https://github.com/andreev-artem/angular_experiments/tree/master/ui-datetime As far as I understand, best practice when you create a new component is to use an isolated scope. When I tried to use an isolated scope, nothing worked: ngModel.$viewValue === undefined. When I tried to use a new scope (my example, not so good a variant imho), ngModel uses the value on the newly created scope. Of course I can create a directive with an isolated scope and work with the ngModel value through "=expression" (example). But I think that working with ngModelController is a better practice. My questions: Can I use ngModelController with an isolated scope? If it is not possible, which solution is better for creating such a component?
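    In general, yes: an isolated scope does not prevent you from working with ngModelController, as long as the directive requires it and drives the model through $render/$setViewValue instead of expecting the value on its own scope. A rough sketch (AngularJS 1.x, heavily simplified, names illustrative):

    angular.module('app', []).directive('uiDatetime', function () {
      return {
        restrict: 'A',
        require: 'ngModel',   // hands the ngModelController to the link function
        scope: {},            // isolated scope; the model is not bound through it
        template: '<input type="text">',
        link: function (scope, element, attrs, ngModelCtrl) {
          var input = element.find('input');
          // model -> view: called whenever the bound model changes
          ngModelCtrl.$render = function () {
            var date = ngModelCtrl.$viewValue;       // the Date object, once it is set
            input.val(date ? date.toString() : '');  // naive formatting, for illustration only
          };
          // view -> model
          input.bind('change', function () {
            scope.$apply(function () {
              ngModelCtrl.$setViewValue(new Date(input.val())); // naive parsing, for illustration only
            });
          });
        }
      };
    });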

    Read the article

  • Create class instance in assembly from string name

    - by Arcadian
    I'm not sure if this is possible, and I'm quite new to using assemblies in C#.NET. What I would like to do is to create an instance of a class when supplied the string name of that class. Something like this: using MyAssembly; namespace MyNameSpace { class MyClass { int MyValue1; int MyValue2; public MyClass(string myTypeName) { foreach(Type type in MyAssembly) { if((string)type == myTypeName) { //create a new instance of the type } } AssignInitialValues(//the type created above) } //Here I use an abstract type which the type above inherits from private void AssignInitialValues(AbstractType myClass) { this.value1 = myClass.value1; this.value2 = myClass.value2; } } } Obviously you cannot compare strings to types, but it illustrates what I'm trying to do: create a type from a supplied string. Any thoughts? EDIT: After attempting: var myObject = (AbstractType) Activator.CreateInstance(null, myTypeName); AssignInitialValues(myObject); I get a number of errors: Inconsistent accessibility: parameter type 'MyAssembly.AbstractType' is less accessible than method 'MyNameSpace.MyClass.AssignInitialValues(MyAssembly.AbstractType)' 'MyAssembly.AbstractType' is inaccessible due to its protection level The type or namespace name 'MyAssembly' could not be found (are you missing a using directive or an assembly reference?) The type or namespace name 'AbstractType' could not be found (are you missing a using directive or an assembly reference?) Not exactly sure why it can't find the assembly; I've added a reference to the assembly and I use a using directive for the namespace in the assembly. As for the protection level, it's calling classes (or rather the constructors of classes) which can only be public. Any clues on where the problem is? UPDATE: After looking through several articles on SO I came across this: http://stackoverflow.com/a/1632609/360627 Making the AbstractType class public solved the issue of inconsistent accessibility. The new compiler error is this: Cannot convert type 'System.Runtime.Remoting.ObjectHandle' to 'MyAssembly.AbstractType' The line it references is this one: var myObject = (AbstractType) Activator.CreateInstance(null, myTypeName); Using .Unwrap() gets me past this error and I think it's the right way to do it (uncertain). However, when running the program I then get a TypeLoadException when this code is called. TypeLoadException: Could not load type 'AbstractType' from assembly 'MyNameSpace'... Right away I can spot that the type it's looking for is correct but the assembly it's looking in is wrong. Looking up the Activator.CreateInstance(String, String) method revealed that null as the first argument means that the method will look in the executing assembly. This is contrary to the required behavior as in the original post. I've tried using MyAssembly as the first argument, but this produces the error: 'MyAssembly' is a 'namespace' but is used like a 'variable' Any thoughts on how to fix this?
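    For reference, one way to avoid both the ObjectHandle unwrap and the executing-assembly lookup is to resolve the Type from the assembly that defines the base class and pass it to Activator.CreateInstance(Type). A sketch using the names from the question (AbstractType is stubbed here only so the snippet stands alone):

    using System;
    using System.Reflection;

    // Stand-in for the real base class that lives in the referenced assembly.
    public abstract class AbstractType
    {
        public int value1;
        public int value2;
    }

    public static class TypeFactory
    {
        public static AbstractType Create(string myTypeName)
        {
            // Look in the assembly that defines AbstractType, not the executing one
            // (a null first argument to CreateInstance(String, String) means the latter).
            Assembly assembly = typeof(AbstractType).Assembly;

            // Expects a namespace-qualified name, e.g. "MyAssembly.SomeConcreteType";
            // the second argument makes GetType throw instead of returning null.
            Type type = assembly.GetType(myTypeName, true);

            // CreateInstance(Type) returns object directly, so no Unwrap() is needed.
            return (AbstractType)Activator.CreateInstance(type);
        }
    }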

    Read the article

  • How to read a text file line by line on Windows Mobile?

    - by Gold
    Hi, in WinForms on the PC I used to do it like this: FileStream FS = null; StreamWriter SW = null; FS = new FileStream(@"\Items.txt", FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite); SW = new StreamWriter(FS, Encoding.Default); while (SW.Peek() != -1) { TEMP = (SW.ReadLine()); } but when I try this on Windows Mobile I get these errors: Error 1 'System.IO.StreamWriter' does not contain a definition for 'Peek' and no extension method 'Peek' accepting a first argument of type 'System.IO.StreamWriter' could be found (are you missing a using directive or an assembly reference?) Error 2 'System.IO.StreamWriter' does not contain a definition for 'ReadLine' and no extension method 'ReadLine' accepting a first argument of type 'System.IO.StreamWriter' could be found (are you missing a using directive or an assembly reference?) How do I do it? Thanks
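    StreamWriter does not expose Peek or ReadLine on any framework; reading is StreamReader's job, and that part of the Compact Framework matches the desktop. A sketch of the reading side (note FileMode.Open/FileAccess.Read instead of FileMode.Create, which would truncate the file being read):

    using System.Collections.Generic;
    using System.IO;
    using System.Text;

    class ItemReader
    {
        public static List<string> ReadItems()
        {
            List<string> lines = new List<string>();
            using (FileStream fs = new FileStream(@"\Items.txt",
                       FileMode.Open, FileAccess.Read, FileShare.Read))
            using (StreamReader sr = new StreamReader(fs, Encoding.Default))
            {
                // Peek returns -1 at end of stream, so this reads every line.
                while (sr.Peek() != -1)
                {
                    lines.Add(sr.ReadLine());
                }
            }
            return lines;
        }
    }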

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >