Search Results

Search found 17984 results on 720 pages for 'log shipping'.

Page 159/720 | < Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >

  • SSL_CLIENT_CERT_CHAIN not being passed to backend server

    - by nidkil
    I have client certificate configured and working in Apache. I want to pass the PEM-encoded X.509 certificates of the client to the backend server. I tried with the SSLOptions +ExportCertData. This does nothing at all, while the documentation states it should add SSL_SERVER_CERT, SSL_CLIENT_CERT and SSL_CLIENT_CERT_CHAINn (with n = 0,1,2,..) as headers. Any ideas why this option is not working? I then tried setting the headers myself using RequestHeader. This works fine for all variables except SSL_CLIENT_CERT_CHAIN. It shows null in the header. Any ideas why the certificate chain is not being filled? This is my first Apache configuration: <VirtualHost 192.168.56.100:443> ServerName www.test.org ServerAdmin webmaster@localhost DocumentRoot /var/www ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined SSLEngine on SSLProxyEngine on SSLCertificateFile /etc/apache2/ssl/certs/www.test.org.crt SSLCertificateKeyFile /etc/apache2/ssl/private/www.test.org.key SSLCACertificateFile /etc/apache2/ssl/ca/ca.crt <Proxy *> AddDefaultCharset Off Order deny,allow Allow from all </Proxy> <Location /carbon> ProxyPass http://www.test.org:9763/carbon ProxyPassReverse http://www.test.org:9763/carbon </Location> <Location /services/GbTestProxy> SSLVerifyClient require SSLVerifyDepth 5 SSLOptions +ExportCertData ProxyPass http://www.test.org:8888/services/GbTestProxy ProxyPassReverse http://www.test.org:8888/services/GbTestProxy </Location> </VirtualHost> This is my second Apache configuration: <VirtualHost 192.168.56.100:443> ServerName www.test.org ServerAdmin webmaster@localhost DocumentRoot /var/www ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined SSLEngine on SSLProxyEngine on SSLCertificateFile /etc/apache2/ssl/certs/www.test.org.crt SSLCertificateKeyFile /etc/apache2/ssl/private/www.test.org.key SSLCACertificateFile /etc/apache2/ssl/ca/ca.crt <Proxy *> AddDefaultCharset Off Order deny,allow Allow from all </Proxy> <Location /carbon> ProxyPass http://www.test.org:9763/carbon ProxyPassReverse http://www.test.org:9763/carbon </Location> <Location /services/GbTestProxy> SSLVerifyClient require SSLVerifyDepth 5 RequestHeader set SSL_CLIENT_S_DN "%{SSL_CLIENT_S_DN}s" RequestHeader set SSL_CLIENT_I_DN "%{SSL_CLIENT_I_DN}s" RequestHeader set SSL_CLIENT_S_DN_CN "%{SSL_SERVER_S_DN_CN}s" RequestHeader set SSL_SERVER_S_DN_OU "%{SSL_SERVER_S_DN_OU}s" RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s" RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN0}s" RequestHeader set SSL_CLIENT_CERT_CHAIN1 "%{SSL_CLIENT_CERT_CHAIN1}s" RequestHeader set SSL_CLIENT_VERIFY "%{SSL_CLIENT_VERIFY}s" ProxyPass http://www.test.org:8888/services/GbTestProxy ProxyPassReverse http://www.test.org:8888/services/GbTestProxy </Location> </VirtualHost> Hope someone can help. Regards, nidkil
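
    Two things worth checking, going only by the two configurations above (a sketch, not a confirmed fix): the chain variables are documented as being exported only while SSLOptions +ExportCertData is set, and the second configuration drops that option; separately, the mod_ssl documentation names them with an underscore before the index (SSL_CLIENT_CERT_CHAIN_0, _1, ...). A combined <Location> block might look like this:

      <Location /services/GbTestProxy>
          SSLVerifyClient require
          SSLVerifyDepth 5
          # keep the export enabled so mod_ssl builds the certificate/chain variables
          SSLOptions +ExportCertData
          RequestHeader set SSL_CLIENT_CERT        "%{SSL_CLIENT_CERT}s"
          # note the underscore before the index in the documented variable names
          RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN_0}s"
          RequestHeader set SSL_CLIENT_CERT_CHAIN1 "%{SSL_CLIENT_CERT_CHAIN_1}s"
          RequestHeader set SSL_CLIENT_VERIFY      "%{SSL_CLIENT_VERIFY}s"
          ProxyPass http://www.test.org:8888/services/GbTestProxy
          ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
      </Location>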

    Read the article

  • Why doesn't Apache start from xampp control panel after changes to vhosts config?

    - by Grafica
    I'm running xampp on my local server, and want to host multiple sites, so I changed the httpd-vhosts.conf file. Will somebody let me know if there is something wrong with my code? Apache was running while I had only one site in the config, but after I added another site, I stopped apache, and I'm not able to restart it. # # Virtual Hosts # # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.2/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. # ##NameVirtualHost *:80 # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. # ##<VirtualHost *:80> ##ServerAdmin [email protected] ##DocumentRoot "C:/xampp/htdocs/dummy-host.localhost" ##ServerName dummy-host.localhost ##ServerAlias www.dummy-host.localhost ##ErrorLog "logs/dummy-host.localhost-error.log" ##CustomLog "logs/dummy-host.localhost-access.log" combined ##</VirtualHost> ##<VirtualHost *:80> ##ServerAdmin [email protected] ##DocumentRoot "C:/xampp/htdocs/dummy-host2.localhost" ##ServerName dummy-host2.localhost ##ServerAlias www.dummy-host2.localhost ##ErrorLog "logs/dummy-host2.localhost-error.log" ##CustomLog "logs/dummy-host2.localhost-access.log" combined ##</VirtualHost> NameVirtualHost * <VirtualHost *> DocumentRoot "C:\xampp\htdocs" ServerName localhost </VirtualHost> <VirtualHost *> DocumentRoot "C:\xampp\htdocs" ServerName evamagnus.com <Directory "C:\xampp\htdocs\"> Order allow,deny Allow from all </Directory> </VirtualHost> <VirtualHost *> DocumentRoot "C:\xampp\htdocs2\" ServerName mygrafica.com <Directory "C:\xampp\htdocs2\"> Order allow,deny Allow from all </Directory> </VirtualHost> Here is what it says in the control panel: 2:17:37 PM [apache] Starting apache service... 2:17:38 PM [apache] Status change detected: running 2:17:39 PM [apache] Status change detected: stopped Thanks in advance.
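
    Before changing the vhosts any further, the -S switch that the bundled comments already mention is the quickest way to see why Apache dies right after starting; the paths below assume a default C:\xampp install:

      REM check the syntax of httpd.conf and every file it includes
      C:\xampp\apache\bin\httpd.exe -t
      REM dump the parsed virtual-host configuration
      C:\xampp\apache\bin\httpd.exe -S
      REM and read the last startup errors Apache wrote before it stopped
      type C:\xampp\apache\logs\error.log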

    Read the article

  • Nginx Retry of Requests ( Nginx - Haproxy Combination )

    - by vaibhav
    I wanted to ask about Nginx retrying requests. I have Nginx running in front, which sends requests to HAProxy, which then passes them on to the web server where the request is processed. I am reloading my HAProxy config dynamically to provide elasticity. The problem is that requests are dropped while HAProxy reloads, so I would like a way to simply retry them from Nginx. I looked through proxy_connect_timeout and proxy_next_upstream in the http module, and max_fails and fail_timeout in the server module. I initially had only one server in the upstream block, so I have now listed it twice, and fewer requests are getting dropped (but only when I have the same server twice in the upstream; if I have the same server 3-4 times, the drops increase). So, firstly, I wanted to know: when a request cannot establish a connection from Nginx to HAProxy during the reload, it seems the connection is treated as an error and the request is dropped straight away. How can I specify either the time after a failure at which the request is retried from Nginx to the upstream, or the time before which Nginx treats it as a failed request? (I have tried increasing proxy_connect_timeout - that didn't help - as well as mail_retires, fail_timeout, and putting the same upstream server in twice, which gave the best results so far.) Nginx conf file: upstream gae_sleep { server 128.111.55.219:10000; } server { listen 8080; server_name 128.111.55.219; root /var/apps/sleep/app; # Uncomment these lines to enable logging, and comment out the following two #access_log /var/log/nginx/sleep.access.log upstream; error_log /var/log/nginx/sleep.error.log; access_log off; #error_log /dev/null crit; rewrite_log off; error_page 404 = /404.html; set $cache_dir /var/apps/sleep/cache; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://gae_sleep; client_max_body_size 2G; proxy_connect_timeout 30; client_body_timeout 30; proxy_read_timeout 30; } location /404.html { root /var/apps/sleep; } location /reserved-channel-appscale-path { proxy_buffering off; tcp_nodelay on; keepalive_timeout 55; proxy_pass http://128.111.55.219:5280/http-bind; } }
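
    As a sketch of the directives mentioned above (values are illustrative assumptions, not tested against this setup): max_fails and fail_timeout go on the upstream server line, proxy_next_upstream controls which failures trigger a retry, and listing the same backend twice - as already done here - is what gives nginx a second attempt to fall back to:

      upstream gae_sleep {
          # two entries for the same haproxy frontend give nginx a second try
          # when the first connect fails during the haproxy reload
          server 128.111.55.219:10000 max_fails=3 fail_timeout=5s;
          server 128.111.55.219:10000 max_fails=3 fail_timeout=5s;
      }
      server {
          listen 8080;
          location / {
              # retry against the next upstream entry on connect errors and timeouts
              proxy_next_upstream error timeout;
              proxy_connect_timeout 5;
              proxy_pass http://gae_sleep;
          }
      }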

    Read the article

  • Torque jobs do not enter "E" state (unless "qrun")

    - by Vi.
    Jobs I add to the queue stays there in "Queued" state without attempts to be executed (unless I manually qrun them) /var/spool/torque/server_logs say just 04/11/2011 12:43:27;0100;PBS_Server;Job;16.localhost;enqueuing into batch, state 1 hop 1 04/11/2011 12:43:27;0008;PBS_Server;Job;16.localhost;Job Queued at request of test@localhost, owner = test@localhost, job name = Qqq, queue = batch The job requires just 1 CPU on 1 node. # qmgr -c "list queue batch" Queue batch queue_type = Execution total_jobs = 0 state_count = Transit:0 Queued:0 Held:0 Waiting:0 Running:0 Exiting:0 max_running = 3 acl_host_enable = True acl_hosts = localhost resources_min.ncpus = 1 resources_min.nodect = 1 resources_default.ncpus = 1 resources_default.nodes = 1 resources_default.walltime = 00:00:10 mtime = Mon Apr 11 12:07:10 2011 resources_assigned.ncpus = 0 resources_assigned.nodect = 0 kill_delay = 3 enabled = True started = True I can't set resources_assigned to nonzero because of Cannot set attribute, read only or insufficient permission resources_assigned.ncpus. When I qrun some task, this goes to mom's log: 04/11/2011 21:27:48;0001; pbs_mom;Svr;pbs_mom;LOG_DEBUG::mom_checkpoint_job_has_checkpoint, FALSE 04/11/2011 21:27:48;0001; pbs_mom;Job;TMomFinalizeJob3;job 18.localhost started, pid = 28592 04/11/2011 21:27:48;0080; pbs_mom;Job;18.localhost;scan_for_terminated: job 18.localhost task 1 terminated, sid=28592 04/11/2011 21:27:48;0008; pbs_mom;Job;18.localhost;job was terminated 04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;top of preobit_reply 04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;DIS_reply_read/decode_DIS_replySvr worked, top of while loop 04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;in while loop, no error from job stat 04/11/2011 21:27:48;0080; pbs_mom;Job;18.localhost;obit sent to server Scheduler log (/var/spool/torque/sched_logs/20110705): 07/05/2011 21:44:53;0002; pbs_sched;Svr;Log;Log opened 07/05/2011 21:44:53;0002; pbs_sched;Svr;TokenAct;Account file /var/spool/torque/sched_priv/accounting/20110705 opened 07/05/2011 21:44:53;0002; pbs_sched;Svr;main;/usr/sbin/pbs_sched startup pid 16234 qstat -f: Job Id: 26.localhost Job_Name = qwe Job_Owner = test@localhost job_state = Q queue = batch server = localhost Checkpoint = u ctime = Tue Jul 5 21:43:31 2011 Error_Path = localhost:/home/test/jscfi/default/0.738784810485275/qwe.e26 Hold_Types = n Join_Path = n Keep_Files = n Mail_Points = a mtime = Tue Jul 5 21:43:31 2011 Output_Path = localhost:/home/test/jscfi/default/0.738784810485275/qwe.o26 Priority = 0 qtime = Tue Jul 5 21:43:31 2011 Rerunable = True Resource_List.ncpus = 1 Resource_List.neednodes = 1:ppn=1 Resource_List.nodect = 1 Resource_List.nodes = 1:ppn=1 Resource_List.walltime = 00:01:00 substate = 10 Variable_List = PBS_O_HOME=/home/test,PBS_O_LANG=en_US.UTF-8, PBS_O_LOGNAME=test, PBS_O_PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games, PBS_O_MAIL=/var/mail/test,PBS_O_SHELL=/bin/sh,PBS_SERVER=127.0.0.1, PBS_O_WORKDIR=/home/test/jscfi/default/0.738784810485275, PBS_O_QUEUE=batch,PBS_O_HOST=localhost euser = test egroup = test queue_rank = 1 queue_type = E etime = Tue Jul 5 21:43:31 2011 submit_args = run.pbs Walltime.Remaining = 6 fault_tolerant = False How to make it execute jobs automatically, without manual qrun?
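
    The fact that qrun starts the job and it finishes cleanly suggests (as an assumption, not a certainty) that the server is simply never asking the scheduler to run a cycle. A quick check from the command line:

      # list the server attributes; look for "scheduling = True"
      qmgr -c "list server"
      # if scheduling is False or absent, enable it so pbs_sched gets a cycle
      qmgr -c "set server scheduling = True"
      # confirm the scheduler daemon is actually running and the node is free
      pgrep -l pbs_sched
      pbsnodes -a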

    Read the article

  • Solved: puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work? 
Update I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in logs Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine facter fqdn and hostname both return a fully qualified host name [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31 cleaned all certificates on master and agent and regenerated the certificates and signed them on master using the option --allow-dns-alt-names [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' however, that doesn't help either; I get same errors as before. Not sure why in the logs it shows comparing access rules by IP and not hostname. Is there any Nginx configuration to change this behavior?
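
    One detail worth checking, given that the log shows access decisions being made on the IP rather than the certificate name: when the master runs behind Nginx/Passenger, Puppet has to be told which request headers carry the client DN and verification result that the vhost above already sets (HTTP_X_CLIENT_DN and HTTP_X_CLIENT_VERIFY); otherwise every request looks unauthenticated and falls through to the deny-all rule. A hedged sketch of the puppet.conf change, assuming these settings are what is missing:

      # /etc/puppet/puppet.conf
      [master]
        # read the client certificate DN and verify result from the headers
        # that the Nginx vhost passes on to Passenger
        ssl_client_header = HTTP_X_CLIENT_DN
        ssl_client_verify_header = HTTP_X_CLIENT_VERIFY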

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and Error pages are being cached. Need to force a refresh of the content. Long version: I've setup a very minimal squid instance running on a gateway which shouldn't not cache ANYTHING but needs to be solely used as a domain based web filter. I'm using another application which redirects un-authenticated users to the proxy which then uses the deny_info option redirects any non-whitelisted request to the login page. After the user has authenticated the firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) they are unauthenticated so they get redirected via the firewall: iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135 to the proxy at this point squid redirects the user to the login page using a 302 (i've also tried 307, and i've also make sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then when the user logs into the system they get firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects to NOT be cached by the browser? Perhaps this is a problem w/ the browsers and not squid, but not sure how to get around it. Full squid config below. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 acl localnet src 192.168.182.0/23 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl https port 443 acl http port 80 acl CONNECT method CONNECT # # Disable Cache # cache deny all via off negative_ttl 0 seconds refresh_all_ims on #error_default_language en # Allow manager access only from localhost http_access allow manager localhost http_access deny manager # Deny access to anything other then http http_access deny !http # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !https visible_hostname gate.ovatn.net # Disable memory pooling memory_pools off # Never use neigh cache objects for cgi-bin scripts hierarchy_stoplist cgi-bin ? # # URL rewrite Test Settings # #acl whitelist dstdomain "/etc/squid/domains-pre.lst" #url_rewrite_program /usr/lib/squid/redirector #url_rewrite_access allow !whitelist #url_rewrite_children 5 startup=0 idle=1 concurrency=0 #http_access allow all # # Deny Info Error Test # acl whitelist dstdomain "/etc/squid/domains-pre.lst" deny_info http://login.domain.com/ whitelist #deny_info ERR_ACCESS_DENIED whitelist http_access deny !whitelist http_access allow whitelist http_port 39135 transparent ## Debug Values access_log /var/log/squid/access-pre.log cache_log /var/log/squid/cache-pre.log # Production Values #access_log /dev/null #cache_log /dev/null # Set PID file pid_filename /var/run/gatekeeper-pre.pid SOLUTION: I believe I might have found a solution to this. After days and days trying to figure it out, only through a random stumble I found client_persistent_connections off server_persistent_connections off This did the trick. So it wasn't so much cache as it was a single persistent connection messing things up. W000T!
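
    For reference, a minimal sketch of where the two directives from the solution sit in the config above, next to the other cache-disabling options (placement is a readability choice; squid reads them globally):

      #
      # Disable Cache
      #
      cache deny all
      via off
      negative_ttl 0 seconds
      refresh_all_ims on
      # the actual fix from the solution above: per the poster, it was a reused
      # persistent connection, not the cache, that kept replaying the old redirect
      client_persistent_connections off
      server_persistent_connections off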

    Read the article

  • How to logon with local account? RODC "There are no logon servers to process your request"

    - by g18c
    I have a site-to-site VPN, a writeable DC in the main office, and a read-only DC (RODC). Today the VPN went down, but I couldn't log in to the read-only DC - the error message "There are no logon servers to process your request" came up. Since the RODC is a domain controller, there is no local administrator. How can I ensure that I am always able to log on to the RODC with a known account in an emergency, if the writeable DC is not available?

    Read the article

  • How to set up nginx and a subdomain

    - by Evolutio
    i have gitlab installed on my server and it works on all domains eg: git.lars-dev.de, lars-dev.de and *.lars-dev.de how I can run gitlab only on git.lars-dev.de and another subdomain on files.lars-dev.de? my lars-dev conf: server { listen *:80; ## listen for ipv4; this line is default and implied #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 root /var/www/webdata/lars-dev.de/htdocs; index index.html index.htm; server_name lars-dev.de; location / { try_files $uri $uri/ /index.html; } #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } and the gitlab configuration: upstream gitlab { server unix:/home/git/gitlab/tmp/sockets/gitlab.socket; } server { listen *:80; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea server_name git.lars-dev.de; # e.g., server_name source.example.com; server_tokens off; # don't show the version number, a security best practice root /home/git/gitlab/public; # individual nginx logs for this gitlab vhost access_log /var/log/nginx/gitlab_access.log; error_log /var/log/nginx/gitlab_error.log; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } # if a file, which is not found in the root folder is requested, # then the proxy pass the request to the upsteam (gitlab unicorn) location @gitlab { proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_redirect off; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://gitlab; } }
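
    Since the GitLab vhost above already binds to git.lars-dev.de, the missing piece is usually just a second server block for the other subdomain; a minimal sketch, assuming the files site lives under /var/www/webdata/files.lars-dev.de/htdocs (adjust the root to the real path). Note that nginx falls back to the first server block on a port for any name nothing else matches, so one block should be marked default_server if stray subdomains keep landing on GitLab:

      server {
          listen *:80;
          server_name files.lars-dev.de;
          root /var/www/webdata/files.lars-dev.de/htdocs;   # assumed path
          index index.html index.htm;
          location / {
              try_files $uri $uri/ =404;
          }
      }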

    Read the article

  • SQL Server 2008 R2 installation error

    - by Sonia
    I have a windows 7,32 bit laptop. I am the administrator with all permissions. when I click on the SQL server 2008R2 set up file,it says : "SQL server set up has encountered the following error:Failed to retreive data for this request" click on OK. I have uninstalled all the components of SQL from control panel. I used Windows installer clean up to remove the files(which I must have not done ),but still no go. The summary.txt log says: Overall summary: Final result: Failed: see details below Exit code (Decimal): 847168662 Exit facility code: 638 Exit error code: 50326 Exit message: Failed to retrieve data for this request. Start time: 2012-05-25 14:59:15 End time: 2012-05-25 15:00:09 Requested action: RunRules Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120525_145905\Detail.txt Exception help link: http%3a%2f%2fgo.microsoft.com%2ffwlink%3fLinkId%3d20476%26ProdName%3dMicrosoft%2bSQL%2bServer%26EvtSrc%3dsetup.rll%26EvtID%3d50000%26ProdVer%3d10.0.5500.0%26EvtType%3d0xEF814B06%400x92D13C14 Machine Properties: Machine name: EWAN-PC Machine processor count: 4 OS version: Windows Vista OS service pack: Service Pack 1 OS region: Australia OS language: English (United States) OS architecture: x86 Process architecture: 32 Bit OS clustered: No Package properties: Description: SQL Server Database Services 2008 SQLProductFamilyCode: {628F8F38-600E-493D-9946-F4178F20A8A9} ProductName: SQL2008 Type: RTM Version: 10 SPLevel: 0 Installation location: c:\385030d65c6ff61fb9\x86\setup\ Installation edition: EXPRESS User Input Settings: ACTION: RunRules CONFIGURATIONFILE: FEATURES: HELP: False INDICATEPROGRESS: False INSTANCENAME: QUIET: False QUIETSIMPLE: False RULES: GLOBALRULES,SqlUnsupportedProductBlocker,PerfMonCounterNotCorruptedCheck,Bids2005InstalledCheck,BlockInstallSxS,AclPermissionsFacet,FacetDomainControllerCheck,SSMS_IsInternetConnected,FacetWOW64PlatformCheck,FacetPowerShellCheck X86: False Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20120525_145905\ConfigurationFile.ini Detailed results: Rules with failures: Global rules: There are no scenario-specific rules. Rules report file: The rule result report file is not available. Exception summary: The following is an exception stack listing the exceptions in outermost to innermost order Inner exceptions are being indented Exception type: Microsoft.SqlServer.Management.Sdk.Sfc.EnumeratorException Message: Failed to retrieve data for this request. Data: HelpLink.ProdName = Microsoft SQL Server HelpLink.BaseHelpUrl = http://go.microsoft.com/fwlink HelpLink.LinkId = 20476 DisableWatson = true Stack: at Microsoft.SqlServer.Setup.Chainer.Workflow.PendingActions.InvokeActions(WorkflowObject metaDb, TextWriter loggingStream) at Microsoft.SqlServer.Setup.Chainer.Workflow.ActionEngine.RunActionQueue() at Microsoft.SqlServer.Setup.Chainer.Workflow.Workflow.RunWorkflow(HandleInternalException exceptionHandler) at Microsoft.SqlServer.Chainer.Setup.Setup.RunRequestedWorkflow() at Microsoft.SqlServer.Chainer.Setup.Setup.Run() at Microsoft.SqlServer.Chainer.Setup.Setup.Start() at Microsoft.SqlServer.Chainer.Setup.Setup.Main() Inner exception type: Microsoft.SqlServer.Configuration.Sco.ScoException Message: Attempted to perform an unauthorized operation. 
Data: WatsonData = HKEY_LOCAL_MACHINE@SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Microsoft SQL Server 10 Stack: at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.OpenSubKey(String subkey, RegistryAccess requestedAccess) at Microsoft.SqlServer.Configuration.Sco.SqlRegistryKey.OpenSubKey(String subkey, RegistryAccess requestedAccess) at Microsoft.SqlServer.Discovery.RegistryKeyExistsPropertyValueProvider.GetPropertyValue(Object[] context) at Microsoft.SqlServer.Discovery.DiscoveryEnumObject.GetPropertyValueFromProvider(IPropertyValueProvider propertyValueProvider, String machineName, Object[] context) at Microsoft.SqlServer.Discovery.ObjectInstanceSettings.IsObjectFound(String machineName, String idFilter) at Microsoft.SqlServer.Discovery.Product.FilterObjectSet(ArrayList objects, String idFilter) at Microsoft.SqlServer.Discovery.Product.GetData(EnumResult erParent) at Microsoft.SqlServer.Management.Sdk.Sfc.Environment.GetData() at Microsoft.SqlServer.Management.Sdk.Sfc.Environment.GetData(Request req, Object ci) at Microsoft.SqlServer.Management.Sdk.Sfc.Enumerator.GetData(Object connectionInfo, Request request) at Microsoft.SqlServer.Management.Sdk.Sfc.Enumerator.Process(Object connectionInfo, Request request) Inner exception type: System.UnauthorizedAccessException Message: Attempted to perform an unauthorized operation. Stack: at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.OpenSubKey(String subkey, RegistryAccess requestedAccess) Ineed to install SQL server 2008 R2 for one of the company softwares to work. Any immediate help will be greatly appreciated. Thanks Sonia

    Read the article

  • .htaccess time on godaddy

    - by doug
    Hi there, I'm trying to run a CakePHP application on a GoDaddy Linux account. The problem is that I get error 500. I've read on the CakePHP discussion group that I have to edit the .htaccess file. 1) How long do I have to wait until I see the result of an .htaccess change? 2) The error page says more information may be available in the server error log. Where are those server logs on a GoDaddy Linux hosted account?

    Read the article

  • Root users and mysql: `sudo mysql` vs `/root/.my.cnf`

    - by user67641
    I have a /root/.my.cnf file which stores the mysql root user's password: [client] password = "my password" When I log in as system root and enter mysql, I get a passwordless login: myuser@local:$ sudo su root@local:$ mysql mysql> But when I try to do the same just using sudo, I get access denied: myuser@local:$ sudo mysql ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO) How can I get sudo mysql to log me in as the mysql root user, without entering a password?
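
    A hedged explanation: the client only reads /root/.my.cnf when $HOME points at /root, and a plain sudo mysql keeps the invoking user's HOME on many sudoers configurations, so the file is never consulted. Two sketches that work around this, assuming a stock mysql client:

      # ask sudo to set HOME to root's home directory first
      sudo -H mysql
      # or point the client at the file explicitly (--defaults-file must be the first option)
      sudo mysql --defaults-file=/root/.my.cnf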

    Read the article

  • Machine check events logged

    - by GoldenNewby
    In /var/log/messages, this error occurred: Sep 19 13:18:15 wdc kernel: [2772302.630416] Machine check events logged Shortly thereafter, the entire server became unresponsive. This is in the log of the Dom0 for a Xen server (running the latest version on Debian Squeeze). Can anyone shed some light on what this error means? Should I be ordering new hardware? Edit: Also, it seems to imply it logged something somewhere - where can I find that?
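
    "Machine check events logged" generally means the CPU reported a hardware error (an MCE) and the kernel stored the raw record in /dev/mcelog; the mcelog utility decodes it. A hedged sketch for the Dom0, assuming the Debian package name:

      # install the decoder
      apt-get install mcelog
      # decode any pending events straight from /dev/mcelog
      mcelog
      # or, if the mcelog daemon/cron job is already running, read its log
      cat /var/log/mcelog

    Repeated machine checks most often point at failing RAM, an overheating or failing CPU, or power problems, so the "new hardware" question is fair, but the decoded record is what says which component.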

    Read the article

  • Apache2 return 404 for proxy requests before reaching WSGI

    - by Alejandro Mezcua
    I have a Django app running under Apache2 and mod_wsgi and, unfortunately, lots of requests trying to use the server as a proxy. The server is responding OK with 404 errors but the errors are generated by the Django (WSGI) app, which causes a high CPU usage. If I turn off the app and let Apache handle the response directly (send a 404), the CPU usage drops to almost 0 (mod_proxy is not enabled). Is there a way to configure Apache to respond directly to this kind of requests with an error before the request hits the WSGI app? I have seen that maybe mod_security would be an option, but I'd like to know if I can do it without it. EDIT. I'll explain it a bit more. In the logs I have lots of connections trying to use the server as a web proxy (e.g. connections like GET http://zzz.zzz/ HTTP/1.1 where zzz.zzz is an external domain, not mine). This requests are passed on to mod_wsgi which then return a 404 (as per my Django app). If I disable the app, as mod_proxy is disabled, Apache returns the error directly. What I'd finally like to do is prevent Apache from passing the request to the WSGI for invalid domains, that is, if the request is a proxy request, directly return the error and not execute the WSGI app. EDIT2. Here is the apache2 config, using VirtualHosts files in sites-enabled (i have removed email addresses and changed IPs to xxx, change the server alias to sample.sample.xxx). What I'd like is for Apache to reject any request that doesn't go to sample.sample.xxx with and error, that is, accept only relative requests to the server or fully qualified only to the actual ServerAlias. default: <VirtualHost *:80> ServerAdmin [email protected] ServerName X.X.X.X ServerAlias X.X.X.X DocumentRoot /var/www/default <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options FollowSymLinks AllowOverride None Order allow,deny allow from all </Directory> ErrorDocument 404 "404" ErrorDocument 403 "403" ErrorDocument 500 "500" ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> actual host: <VirtualHost *:80> ErrorDocument 404 "404" ErrorDocument 403 "403" ErrorDocument 500 "500" WSGIScriptAlias / /var/www/sample.sample.xxx/django.wsgi ServerAdmin [email protected] ServerAlias sample.sample.xxx ServerName sample.sample.xxx CustomLog /var/www/sample.sample.xxx/log/sample.sample.xxx-access.log combined Alias /robots.txt /var/www/sample.sample.xxx/static/robots.txt Alias /favicon.ico /var/www/sample.sample.xxx/static/favicon.ico AliasMatch ^/([^/]*\.css) /var/www/sample.sample.xxx/static/$1 Alias /static/ /var/www/sample.sample.xxx/static/ Alias /media/ /var/www/sample.sample.xxx/media/ <Directory /var/www/sample.sample.xxx/static/> Order deny,allow Allow from all </Directory> <Directory /var/www/sample.sample.xxx/media/> Order deny,allow Allow from all </Directory> </VirtualHost>
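
    A hedged sketch of one way to do this with mod_rewrite instead of mod_security (untested against this setup; the hostname comes from the vhost above): inside the sample.sample.xxx vhost, reject any request whose Host does not match the site before WSGIScriptAlias gets a chance to route it to Django:

      # inside <VirtualHost *:80> for sample.sample.xxx, before WSGIScriptAlias applies
      RewriteEngine On
      # anything not addressed to this site (e.g. absolute-URI proxy probes) gets a 404
      RewriteCond %{HTTP_HOST} !^sample\.sample\.xxx$ [NC]
      RewriteRule ^ - [R=404,L]

    Whether mod_rewrite fires before mod_wsgi's alias is worth verifying on the target Apache version; if it does not, the same condition can live in the default vhost, which is where name-based matching should send requests for foreign hostnames anyway.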

    Read the article

  • Using OpenVPN, yet netflix.com blocks access

    - by user837848
    I have set up an OpenVPN server on a VPS in the USA and configured it to route all clients traffic through it. Everything seems to work fine regarding the VPN connection in gerneral. All ip lookup sites show me the us server's ip address and even hulu.com works(it won't work if you are not in the usa). But for some reason netflix.com says "Sorry, Netflix is not available in your country yet.". So I thought that netflix probably uses some more sophisticated ways to determine your location beyond just your ip address. But I could not find a way to get it to work until I dropped the idea of using a VPN and instead connected to the server via a simple socks tunnel with ssh by running: ssh -D 9999 user@serverip All I had to do was changing the key network.proxy.socks_remote_dns in Firefox from false to true to prevent DNS leaks and setting up the socks proxy. Then I could finally watch netflix.com. As a result I concluded that there is nothing in the browser(or something like system timezone) that tells netflix the location, so it has to have something to do with the OpenVPN config. After that I used tcpdump to log all the traffic on the server's network interface venet0 (OpenVZ VPS), visited netflix.com on the client while first connected to the VPN and then connected via socks tunnel and afterwards compared both outputs. The only thing that caught my eye was that while using the socks tunnel the server mainly used ipv6 to connect to netflix whereas it only used ipv4 when the client was connected to the OpenVPN server. But I don't get how that could make such a difference. So what am I missing? Is there a way to configure OpenVPN to also use ipv6 to connect to a website although there is only an ipv4 connection between the VPS and the client? Here is the server.conf of the OpenVPN server (OpenVZ VPS) local serverip port 443 proto tcp dev tun ca ./easy-rsa2/keys/ca.crt cert ./easy-rsa2/keys/vps1.crt key ./easy-rsa2/keys/vps1.key # This file should be kept secret dh ./easy-rsa2/keys/dh1024.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 8.8.8.8" push "dhcp-option DNS 8.8.4.4" client-to-client keepalive 10 120 tls-auth ta.key 0 # This file is secret cipher AES-256-CBC comp-lzo max-clients 4 user nobody group nogroup persist-key persist-tun status openvpn-status.log log-append openvpn.log verb 3 iptables forwarding iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j SNAT --to-source serverip (enabled ipv4 forwarding) I have tried everything always on a Win7 and a Debian client with only ipv4 connections and always made sure that they use the correct DNS server (tested with ipleak.net and tcpdump / wireshark). client.conf: client dev tun proto tcp remote serverip 443 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert client.crt key client.key ns-cert-type server tls-auth ta.key 1 cipher AES-256-CBC comb-lzo verb 3

    Read the article

  • How can I prevent cron from filling up my syslog?

    - by user7321
    I have a script which needs to be executed every minute. The problem is that cron logs to /var/log/syslog each time it executes, so I end up seeing something like this repeated over and over in /var/log/syslog: Jun 25 00:56:01 myhostname /USR/SBIN/CRON[1144]: (root) CMD (php /path/to/script.php /dev/null) By the way, I'm using Debian. My question is: is there any way I can tell cron not to write this information to syslog every time?
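
    On Debian the usual approach (a sketch, assuming rsyslog is the syslog daemon in use) is to drop the cron facility from the main syslog file and, if you still want the messages somewhere, give them their own file:

      # /etc/rsyslog.conf
      # keep cron chatter out of /var/log/syslog ...
      *.*;auth,authpriv.none,cron.none          -/var/log/syslog
      # ... and optionally send it to its own file instead
      cron.*                                    /var/log/cron.log

    Then restart the daemon with service rsyslog restart (or /etc/init.d/rsyslog restart).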

    Read the article

  • Deploying Django App with Nginx, Apache, mod_wsgi

    - by JCWong
    I have a django app which can run locally using the standard development environment. I want to now move this to EC2 for production. The django documentation suggests running with apache and mod_wsgi, and using nginx for loading static files. I am running Ubuntu 12.04 on an Ec2 box. My Django app, "ddt", contains a subdirectory "apache" with ddt.wsgi import os, sys apache_configuration= os.path.dirname(__file__) project = os.path.dirname(apache_configuration) workspace = os.path.dirname(project) sys.path.append(workspace) sys.path.append('/usr/lib/python2.7/site-packages/django/') sys.path.append('/home/jeffrey/www/ddt/') os.environ['DJANGO_SETTINGS_MODULE'] = 'ddt.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() I have mod_wsgi installed from apt. My apache/httpd.conf contains NameVirtualHost *:8080 WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi WSGIPythonPath /home/jeffrey/www/ddt <Directory /home/jeffrey/www/ddt/apache/> <Files ddt.wsgi> Order deny,allow Allow from all </Files> </Directory> Under apache2/sites-enabled <VirtualHost *:8080> ServerName www.mysite.com ServerAlias mysite.com <Directory /home/jeffrey/www/ddt/apache/> Order deny,allow Allow from all </Directory> LogLevel warn ErrorLog /home/jeffrey/www/ddt/logs/apache_error.log CustomLog /home/jeffrey/www/ddt/logs/apache_access.log combined WSGIDaemonProcess datadriventrading.com user=www-data group=www-data threads=25 WSGIProcessGroup datadriventrading.com WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi </VirtualHost> If I am correct, these 3 files above should correctly allow my django app to run on port 8080. I have the following nginx/proxy.conf file proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; Under nginx/sites-enabled server { listen 80; server_name www.mysite.com mysite.com; access_log /home/jeffrey/www/ddt/logs/nginx_access.log; error_log /home/jeffrey/www/ddt/logs/nginx_error.log; location / { proxy_pass http://127.0.0.1:8080; include /etc/nginx/proxy.conf; } location /media/ { root /home/jeffrey/www/ddt/; } } If I am correct these two files should setup nginx to take requests on the HTTP port 80, but then direct requests to apache which is running the django app on port 8080. If i go to mysite.com, all I see is Welcome to Nginx! Any advice for how to debug this?

    Read the article

  • Apache/JBoss issue - is this a connection timeout?

    - by user115391
    We have an application. The architecture is as below 1 load balancer (apache), which redirects to 2 app servers (jboss). The site is working fine and I am able to access it fine. But sometimes, randomly the homepage takes a while (like 30-40 secs) to load. I tried checking the logs but could not figure out why. I used the httptraffic analyzer, fiddler to see the traffic, but it just says the request/response took 30 secs or so. I checked the apache access logs, mod_jk.log. My configurations are below mod-jk.conf LoadModule jk_module modules/mod_jk.so JkWorkersFile conf/workers.properties JkLogFile logs/mod_jk.log #JkLogLevel info #JkLogLevel debug JkLogLevel error # Select the log format JkLogStampFormat "[%a %b %d %H:%M:%S %Y]" JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories JkRequestLogFormat "%w %V %T %P %{tid}P %D" JkMount /__application__/* loadbalancer JkUnMount /__application__/images/* loadbalancer <VirtualHost *:8080 > JkMountFile conf/uriworkermap.properties </VirtualHost> JkShmFile run/jk.shm <Location /jkstatus> JkMount status Order deny,allow Deny from all Allow from 127.0.0.1 </Location> ----------------------------- uriworkermap.properties Simple worker configuration file # Mount the Servlet context to the ajp13 worker /=loadbalancer /*=loadbalancer ----------------------------- workers.properties worker.list=loadbalancer,status worker.template.port=8009 worker.template.type=ajp13 worker.template.lbfactor=1 worker.template.prepost_timeout=10000 worker.template.connect_timeout=10000 worker.template.ping_mode=A worker.worker1.reference=worker.template worker.worker1.host=hostname1 worker.worker2.reference=worker.template worker.worker2.host=hostname2 worker.loadbalancer.type=lb worker.loadbalancer.balance_workers=worker1,worker2 worker.status.type=status ----------------------------- my jboss server.xml - $JBOSS_HOME/server/default/deploy/jbossweb.sar/server.xml --------------------------------- The logs from access log is below The issue where it took time - look at the seconds column [23/Mar/2012:12:10:38 -0400] "GET / HTTP/1.1" 200 138 x.x.x.x - - [23/Mar/2012:12:10:49 -0400] "GET /index.jsp HTTP/1.1" 302 - x.x.x.x - - [23/Mar/2012:12:11:10 -0400] "GET /home.jsp HTTP/1.1" 200 936 x.x.x.x - - [23/Mar/2012:12:11:31 -0400] "POST /login/ HTTP/1.1" 200 8895 x.x.x.x - - [23/Mar/2012:12:11:52 -0400] "GET /login/includes/login-style.css HTTP/1.1" 304 - The one after the issue x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET / HTTP/1.1" 200 138 x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /index.jsp HTTP/1.1" 302 - x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /home.jsp HTTP/1.1" 200 936 x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "POST /login/ HTTP/1.1" 200 8895 x.x.x.x - - [23/Mar/2012:12:12:18 -0400] "GET /login/includes/login-style.css HTTP/1.1" 304 - Would it be a cache or timeout issue? Any help is appreciated. Thanks.
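
    One common cause of intermittent 30-40 second stalls with mod_jk is the pool reusing AJP connections that the backend or a firewall has silently dropped; the worker properties below (illustrative values, not a confirmed fix for this setup) make mod_jk detect and recycle such connections, and they work best when connection_pool_timeout matches the connectionTimeout on the JBoss AJP connector:

      # workers.properties
      worker.template.socket_keepalive=true
      worker.template.socket_connect_timeout=5000     # ms to establish the AJP connection
      worker.template.reply_timeout=30000             # ms to wait for a reply from JBoss
      worker.template.connection_pool_timeout=60      # seconds before idle pooled connections are closed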

    Read the article

  • Can't connect to samba

    - by Rick
    Windows 7, connecting to Samba shares I have a follow up question from the link above. I am running Samba 3.0.23d on FreeBSD is release 7.1 I changed the policies as described above but still cannot connect to the samba server with the windows 7 or a server 2008. I feel it is a problem with recognizing the new machines on the network. the windows machines can see the samba server, but cannot connect to it or view any of the files. After changing the security policies the samba server asked for network id and password but would not allow the machine to connect, said they were unknown username or bad password. Here is my current config file. there is no sign of encryption anywhere, should I just add the line? not sure what that would do elsewhere. Workgroup = WWOFFSET server string = WWO File Server (%v) security = server username map = /usr/local/etc/smb.users hosts allow = 10. 127. # If you want to automatically load your printer list rather # than setting them up individually then you'll need this ; load printers = yes # you may wish to override the location of the printcap file ; printcap name = /etc/printcap # on SystemV system setting printcap name to lpstat should allow # you to automatically obtain a printer list from the SystemV spool # system ; printcap name = lpstat # It should not be necessary to specify the print system type unless # it is non-standard. Currently supported print systems include: # bsd, cups, sysv, plp, lprng, aix, hpux, qnx ; printing = cups # Uncomment this if you want a guest account, you must add this to /etc/passwd # otherwise the user "nobody" is used ; guest account = pcguest # this tells Samba to use a separate log file for each machine # that connects log file = /var/log/samba/log.%m # Put a capping on the size of the log files (in Kb). max log size = 50 # Use password server option only with security = server # The argument list may include: # password server = My_PDC_Name [My_BDC_Name] [My_Next_BDC_Name] # or to auto-locate the domain controller/s # password server = * ; password server = <NT-Server-Name> password server = SERVER0 # Use the realm option only with security = ads # Specifies the Active Directory realm the host is part of ; realm = MY_REALM # Backend to store user information in. New installations should # use either tdbsam or ldapsam. smbpasswd is available for backwards # compatibility. tdbsam requires no further configuration. ; passdb backend = tdbsam ; passdb backend = smbpasswd # Using the following line enables you to customise your configuration # on a per machine basis. The %m gets replaced with the netbios name # of the machine that is connecting. # Note: Consider carefully the location in the configuration file of # this line. The included file is read at that point. ; include = /usr/local/etc/smb.conf.%m # Most people will find that this option gives better performance. # See the chapter 'Samba performance issues' in the Samba HOWTO Collection # and the manual pages for details. # You may want to add the following on a Linux system: # SO_RCVBUF=8192 SO_SNDBUF=8192 socket options = TCP_NODELAY # Configure Samba to use multiple interfaces # If you have multiple network interfaces then you must list them # here. See the man page for details. ; interfaces = 192.168.12.2/24 192.168.13.2/24 # Browser Control Options: # set local master to no if you don't want Samba to become a master # browser on your network. 
Otherwise the normal election rules apply ; local master = no # OS Level determines the precedence of this server in master browser # elections. The default value should be reasonable ; os level = 33 # Domain Master specifies Samba to be the Domain Master Browser. This # allows Samba to collate browse lists between subnets. Don't use this # if you already have a Windows NT domain controller doing this job ; domain master = yes # Preferred Master causes Samba to force a local browser election on startup # and gives it a slightly higher chance of winning the election ; preferred master = yes # Enable this if you want Samba to be a domain logon server for # Windows95 workstations. ; domain logons = yes # if you enable domain logons then you may want a per-machine or # per user logon script # run a specific logon batch file per workstation (machine) ; logon script = %m.bat # run a specific logon batch file per username ; logon script = %U.bat # Where to store roving profiles (only for Win95 and WinNT) # %L substitutes for this servers netbios name, %U is username # You must uncomment the [Profiles] share below ; logon path = \\%L\Profiles\%U # Windows Internet Name Serving Support Section: # WINS Support - Tells the NMBD component of Samba to enable it's WINS Server ; wins support = yes # WINS Server - Tells the NMBD components of Samba to be a WINS Client # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both ; wins server = w.x.y.z # WINS Proxy - Tells Samba to answer name resolution queries on # behalf of a non WINS capable client, for this to work there must be # at least one WINS Server on the network. The default is NO. ; wins proxy = yes # DNS Proxy - tells Samba whether or not to try to resolve NetBIOS names # via DNS nslookups. The default is NO. dns proxy = no # charset settings ; display charset = ASCII ; unix charset = ASCII ; dos charset = ASCII # These scripts are used on a domain controller or stand-alone # machine to add or delete corresponding unix accounts ; add user script = /usr/sbin/useradd %u ; add group script = /usr/sbin/groupadd %g ; add machine script = /usr/sbin/adduser -n -g machines -c Machine -d /dev/null -s /bin/false %u ; delete user script = /usr/sbin/userdel %u ; delete user from group script = /usr/sbin/deluser %u %g ; delete group script = /usr/sbin/groupdel %g unix extensions = no

    Read the article

  • Windows 7 Forbid connecting to a specific wireless network

    - by Elliot
    Does anyone know how to tell Windows 7 to bar a wireless network? It keeps logging into a random open network of my neighbors' that has no bandwidth, instead of the good one we have here. I keep unchecking "automatically log in if available" and it keeps re-checking itself. I want it to NEVER log into this network without manual intervention, no matter what. I do not want to disable auto-connecting entirely, just tell it "do not ever connect to this one without my express permission".
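
    Windows 7 can blacklist an SSID outright from an elevated command prompt (a sketch; replace BadNetwork with the neighbors' network name):

      rem never connect to this SSID again until the filter is removed
      netsh wlan add filter permission=block ssid="BadNetwork" networktype=infrastructure
      rem if a saved profile for it exists, also stop it auto-connecting
      netsh wlan set profileparameter name="BadNetwork" connectionmode=manual
      rem confirm the filter is in place
      netsh wlan show filters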

    Read the article

  • Permission denied while reading upstream

    - by user68613
    We have deployed our Rails application on Nginx and Passenger. Intermittently, pages of the application load only partially. There is no error in the application log, but the Nginx error log shows the following: 2011/02/14 05:49:34 [crit] 25389#0: *645 open() "/opt/nginx/proxy_temp/2/02/0000000022" failed (13: Permission denied) while reading upstream, client: x.x.x.x, server: y.y.y.y, request: "GET /signup/procedures?count=0 HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "y.y.y.y", referrer: "http://y.y.y.y/signup/procedures"
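
    Error 13 on /opt/nginx/proxy_temp means the worker user cannot write to the buffering directory, which also explains why only some pages are cut short: responses larger than the in-memory proxy buffers get spooled to that directory, so only the bigger ones fail. A hedged sketch of the check and fix (paths and the nobody user are assumptions for a source-built nginx under /opt/nginx):

      # which user do the workers run as, and who owns the temp dir?
      grep -E '^\s*user' /opt/nginx/conf/nginx.conf
      ls -ld /opt/nginx/proxy_temp
      # hand the directory to the worker user (substitute the real user/group)
      chown -R nobody:nobody /opt/nginx/proxy_temp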

    Read the article

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based off of the largest UID in the system. The larger the maximum UID, the larger this file is. Thankfully it's a sparse file so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk). On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though--once an AD user logs in lastlog shows up as 280GB. Its real size is still small, thankfully. This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog. Where is this 256GB limit coming from, and how can I increase it? As a simple test for creating large sparse files I've been doing: dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks so that doesn't seem to be it--not to mention this is happening in a tmpfs mounted ON TOP of the hard drive so the ext3 partition shouldn't be a factor. This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL5.4 systems it is no go.

    Read the article

  • Logging in as the same user multiple times in Remote Desktop Windows Server 2008

    - by Little_Johnn
    Quick question: I have a situation where I need to let multiple people on different PCs log into one Server 2008 machine as administrator simultaneously over Remote Desktop. I have the CALs for it; it's just not set up correctly. When one user tries to log in, it boots the other out. What I need is to present each of them with a separate session, each logged in as admin. Sorry for the slightly rambling post, I'm new here. Thanks!
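
    Server 2008 by default restricts each user account to a single Remote Desktop session, which is what bumps the first admin off; turning that restriction off lets the same account hold several sessions. A sketch of the registry route (the same switch also lives in Remote Desktop Session Host Configuration, tsconfig.msc):

      rem run in an elevated prompt; takes effect for new logons
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fSingleSessionPerUser /t REG_DWORD /d 0 /f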

    Read the article

  • Using Windows XP Mode Virtual Machine on a Domain

    - by DavidStein
    I've followed the instructions and installed and configured the Windows Virtual PC XP Mode. I've added it as a machine on the network and can log into it and use network resources. In the process I removed the saved credentials, which are just a default local login to the VM. I need to know how to set this so that the VM auto logs in with the domain account credentials used to log into the physical computer. Google hasn't helped me find the answer.

    Read the article
