Search Results

Search found 2455 results on 99 pages for 'dbcontrol certificate expire'.

Page 74/99 | < Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >

  • Cannot log into Oracle Enterprise Manager 11g: ORA-28001

    - by Álvaro G. Vicario
    I can no longer log into Oracle Enterprise Manager 11g. I get this error message: ORA-28001: the password has expired (DBD ERROR: OCISessionBegin) I could log into the server using SQL*Plus. It warned me that the password was going to expire in 7 days (which is not the same as being already expired). Following advice from several documents, I ran these commands from SQL*Plus: ALTER USER sys IDENTIFIED BY new_password; ALTER USER system IDENTIFIED BY new_password; SQL*Plus no longer warns about passwords, but I still cannot use Enterprise Manager. Then I followed this to remove password expiration: ALTER PROFILE default LIMIT password_life_time UNLIMITED And I've also restarted the Oracle services. In case it was using cached credentials, I've tried to connect from several browsers on several computers. No way: I still get ORA-28001 in Enterprise Manager. What am I missing? Update: Some more info SQL> select username,ACCOUNT_STATUS,EXPIRY_DATE from dba_users; USERNAME ACCOUNT_STATUS EXPIRY_D ------------------------------ -------------------------------- -------- MGMT_VIEW OPEN SYS OPEN SYSTEM OPEN
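
    In case it helps anyone else hitting ORA-28001 with DB Control: a minimal sketch (run with the usual ORACLE_HOME/ORACLE_SID environment; the SYSMAN/DBSNMP resets and the new_password placeholder are assumptions) of how one might check which accounts are actually expired. 11g DB Control normally logs in as SYSMAN, so that account's expiry can block the console even when SYS and SYSTEM are fine:

        # list expired/locked accounts, then reset the ones Enterprise Manager itself uses
        printf '%s\n' \
          "SELECT username, account_status, expiry_date FROM dba_users WHERE account_status LIKE '%EXPIRED%' OR account_status LIKE '%LOCKED%';" \
          "ALTER USER sysman IDENTIFIED BY new_password ACCOUNT UNLOCK;" \
          "ALTER USER dbsnmp IDENTIFIED BY new_password ACCOUNT UNLOCK;" \
          | sqlplus -S / as sysdba
        # DB Control keeps its own copy of the SYSMAN password, so consult the
        # DB Control documentation for updating it there before restarting the console
        emctl stop dbconsole
        emctl start dbconsole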

    Read the article

  • Why do I get "Permission denied (publickey)" when trying to SSH from local Ubuntu to an Amazon EC2 server?

    - by Vorleak Chy
    I have an instance of an application running in the cloud on an Amazon EC2 instance, and I need to connect to it from my local Ubuntu. It works fine from one local Ubuntu machine and also from my laptop, but I get the message "Permission denied (publickey)" when trying to SSH to EC2 from another local Ubuntu machine. It's strange to me. I'm thinking there is some sort of problem with the security settings on the Amazon EC2 instance (which may limit which IPs can access it), or the certificate may need to be regenerated. Does anyone know a solution?
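
    Not an answer as such, but a sketch of the client-side checks that usually resolve this on a new machine (the key file name is a placeholder, and the "ubuntu" login name is an assumption - older Amazon AMIs use "ec2-user" or "root"):

        chmod 400 ~/.ssh/my-ec2-keypair.pem    # ssh refuses private keys readable by group/others
        ssh -vvv -i ~/.ssh/my-ec2-keypair.pem ubuntu@ec2-203-0-113-10.compute-1.amazonaws.com
        # -vvv shows which identities the client offers and why the server refuses them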

    Read the article

  • How to setup and manage a shared hosting server on Windows Server 2008 R2 Web Edition?

    - by Motivated Student
    Background: I am a newbie with Windows Server 2008 R2 Web Edition (and other editions as well). I have a static IP, a very fast internet connection, a server (PRIMERGY TX100 S1) and Windows Server 2008 R2 Web Edition (trial version). The objective is to set up the server as a shared hosting server such that each of my friends has a private account to manage his/her domain, to upload his/her web content to the server using encrypted FTP, to manage database administration, to manage certificates, etc. Questions: Is there a good reference for learning "how to set up and manage a shared hosting server on Windows Server 2008 R2"? What are the rough steps I have to take to accomplish my objective?

    Read the article

  • Could not establish a secure connection to server with safari

    - by pharno
    Safari tells me that it couldn't open the page because it couldn't establish a secure connection to the server. However, other browsers (Opera, Firefox) can open the page. Also, there's nothing in the Apache error log. The certificate is self-signed and uses standard values. (seen here: http://www.knaupes.net/tutorial-ssl-zertifikat-selbst-erstellen-und-signieren/ ) ssl config: SSLEngine on #SSLInsecureRenegotiation on SSLCertificateFile /home/gemeinde/certs/selfsigned/gemeinde.crt SSLCertificateKeyFile /home/gemeinde/certs/selfsigned/gemeinde.key #SSLCACertificateFile /home/gemeinde/certs/Platinum_G2.pem #SSLOptions +StdEnvVars <Location "/"> SSLOptions +StdEnvVars +OptRenegotiate SSLVerifyClient optional SSLVerifyDepth 10 </Location>
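
    One way to see what the server actually negotiates, independently of any browser, is OpenSSL's test client (hostname and port below are placeholders for the affected site):

        openssl s_client -connect www.example.org:443 -servername www.example.org
        # prints the certificate chain and the negotiated protocol/cipher; a handshake
        # failure here points at the server-side SSL config rather than at Safari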

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call will start a background process and it waits for it to complete so it can display the final result. $.ajax({ type: 'GET', async: true, url: $(this).data('route'), data: $('input[name=data]').val(), dataType: 'json', success: function (data) { /* do stuff */} error: function (data) { /* handle errors */} }); The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background: checkStatusInterval = setInterval(function () { $.ajax({ type: 'GET', async: false, cache: false, url: '/process-status?process=' + currentElement.attr('id'), dataType: 'json', success: function (data) { /* update progress bar and status message */ } }); }, 1000); Unfortunately, when this script is run from nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to TRUE in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; # configure temporary paths # nginx is started with param -p, setting nginx path to serverpack installdir fastcgi_temp_path temp/fastcgi; uwsgi_temp_path temp/uwsgi; scgi_temp_path temp/scgi; client_body_temp_path temp/client-body 1 2; proxy_temp_path temp/proxy; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; # Sendfile copies data between one FD and other from within the kernel. # More efficient than read() + write(), since the requires transferring data to and from the user space. sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time. tcp_nodelay on; types_hash_max_size 2048; # Timeout for keep-alive connections. Server will close connections after this time. keepalive_timeout 90; # Number of requests a client can make over the keep-alive connection. This is set high for testing. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_header_timeout 20; client_body_timeout 60; # If the client stops reading data, free up the stale client connection after this much time. Default 60. 
send_timeout 60; # Size Limits client_body_buffer_size 64k; client_header_buffer_size 4k; client_max_body_size 8M; # FastCGI fastcgi_connect_timeout 60; fastcgi_send_timeout 120; fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; # Caches information about open FDs, freqently accessed files. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Turn on gzip output compression to save bandwidth. # http://wiki.nginx.org/HttpGzipModule gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_http_version 1.1; gzip_vary on; gzip_proxied any; #gzip_proxied expired no-cache no-store private auth; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # show all files and folders autoindex on; server { # access from localhost only listen 127.0.0.1:80; server_name localhost; root www; # the following default "catch-all" configuration, allows access to the server from outside. # please ensure your firewall allows access to tcp/port 80. check your "skype" config. # listen 80; # server_name _; log_not_found off; charset utf-8; access_log logs/access.log main; # handle files in the root path /www location / { index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 # location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # add expire headers location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ { expires 30d; } # deny access to .htaccess files (if Apache's document root concurs with nginx's one) # deny access to git & svn repositories location ~ /(\.ht|\.git|\.svn) { deny all; } } # include config files of "enabled" domains include domains-enabled/*.conf; } Here is the enabled domain conf file: access_log off; access_log C:/server/www/test.dev/logs/access.log; error_log C:/server/www/test.dev/logs/error.log; # HTTP Server server { listen 127.0.0.1:80; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; } # HTTPS server server { listen 443 ssl; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; include domains-common/ssl.conf; } Contents of ssl.conf: # OpenSSL for HTTPS connections. 
ssl on; ssl_certificate C:/server/bin/openssl/certs/cert.pem; ssl_certificate_key C:/server/bin/openssl/certs/cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri =404; fastcgi_param HTTPS on; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Contents of location.conf: # Remove trailing slash to please Laravel routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } location / { try_files $uri $uri/ /index.php?$query_string; } # We don't need .ht files with nginx. location ~ /(\.ht|\.git|\.svn) { deny all; } # Added cache headers for images. location ~* \.(png|jpg|jpeg|gif)$ { expires 30d; log_not_found off; } # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks. location ~* \.(js|css)$ { expires 3h; log_not_found off; } # Add expire headers. location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ { expires 30d; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri /index.php =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9100; } Any ideas where this is going wrong?
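
    One way to rule the browser out is to reproduce the pattern with curl from a shell - if these requests also stall until the long one finishes, the serialization is happening server-side (for example a single PHP backend worker, or PHP session locking), not in the AJAX layer. The URLs below are placeholders for the routes in the question:

        # start the long-running request in the background...
        curl -s 'http://test.dev/start-long-process' -o /dev/null &
        # ...and poll the status endpoint a few times while it runs
        for i in 1 2 3 4 5; do
          time curl -s 'http://test.dev/process-status?process=test' -o /dev/null
          sleep 1
        done
        wait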

    Read the article

  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all, I wasn't really sure if this should go here or on Stack Overflow - admins, please move it if I'm mistaken (and sorry). Today we have our web layer exposed to the world. We would like to add Varnish in front of our web layer to accelerate the site and reduce calls to the backend. However, we have some concerns and I was wondering how most people approach them: A/B Testing - How do you test two "versions" of each page and compare? I mean, how does Varnish know which page to serve up? If and how do you save separate versions of each page? Feature rollout - how would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic, and then later increase that to 20%. How do you handle code deployments? Do you purge your entire Varnish cache on every deployment? (We have deployments on a daily basis.) Or do you just let it slowly expire (using TTL)? Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.
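
    For the deployment question specifically, a common pattern is to invalidate explicitly rather than wait for TTLs; a rough sketch using the management CLI (Varnish 3.x syntax - 2.x uses purge instead of ban, and the expressions below are only examples):

        # drop every cached object for the site right after a deploy
        varnishadm "ban req.http.host ~ example\.com && req.url ~ ."
        # or invalidate only the sections the release actually touched
        varnishadm "ban req.url ~ ^/checkout"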

    Read the article

  • Copy selected row to another worksheet

    - by ???? ???????
    There are, for example, 10 rows in one worksheet. When the user clicks on one row it should be presented on another worksheet. Is that possible? Any help on how to do it? EDIT: To clarify: one sheet presents, for example, student exam marks for the first year: John 10 8 10 7 Nick 8 9 8 9 Maria 7 8 8 7 On the 2nd sheet there is student information for the second year: John 9 9 10 8 Nick 8 8 9 7 Maria 7 6 8 8 I want to give some kind of final certificate for each student, so summary information should be presented on a third sheet. It doesn't need to be on click. There could be a drop-down list on the third sheet.

    Read the article

  • Why does httpd handle requests for wrong hostnames in SSL mode?

    - by Manuel
    I have an SSL-enabled virtual host for my sites at example.com:10443 Listen 10443 <VirtualHost _default_:10443> ServerName example.com:10443 ServerAdmin [email protected] ErrorLog "/var/log/httpd/error_log" TransferLog "/var/log/httpd/access_log" SSLEngine on SSLProtocol all -SSLv2 SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5 SSLCertificateFile "/etc/ssl/private/example.com.crt" SSLCertificateKeyFile "/etc/ssl/private/example.com.key" SSLCertificateChainFile "/etc/ssl/private/sub.class1.server.ca.pem" SSLCACertificateFile "/etc/ssl/private/StartCom.pem" </VirtualHost> Browsing to https://example.com:10443/ works as expected. However, also browsing to https://subdomain.example.com:10443/ (with DNS set) shows me the same pages (after SSL certificate warning). I would have expected the directive ServerName example.com:10443 to reject all connection attempts to other server names. How can I tell the virtual host not to serve requests for URLs other than the top-level one?
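
    A sketch of how one might confirm what Apache is doing and work around it (the usual behaviour is that the only - or first - vhost on an address:port pair acts as the catch-all, so a separate reject-everything default vhost is added in front of the real one; names and paths below are assumptions):

        apachectl -S    # shows which VirtualHost is the default for *:10443
        # a minimal catch-all vhost, loaded before the real one, might look like:
        #   <VirtualHost _default_:10443>
        #       ServerName catchall.invalid
        #       SSLEngine on
        #       SSLCertificateFile    /etc/ssl/private/example.com.crt
        #       SSLCertificateKeyFile /etc/ssl/private/example.com.key
        #       Redirect 404 /
        #   </VirtualHost>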

    Read the article

  • Shibboleth: found encrypted assertions, but no CredentialResolver was available

    - by HorusKol
    I've gotten a Shibboleth Service Provider (SP) up and running, and I'm using the TestShib Identity Provider (IdP) for testing. The configuration appears to be all correct, and when I requested my secured directory I was sent to the IdP, where I logged in, and then was sent back to https://example.org/Shibboleth.sso/SAML2/POST where I am getting a generic error message. Checking the logs, I am told: found encrypted assertions, but no CredentialResolver was available I have rechecked the configuration, and there I have: <CredentialResolver type="File" key="/etc/shibboleth/sp-key.pem" certificate="/etc/shibboleth/sp-cert.pem"/> Both of these files are present at those locations. I've restarted Apache and retried, but still get the same error. I don't know if it makes a difference, but only a subdirectory of the site has been secured - the document root is publicly available.
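
    Two things worth ruling out, since the element itself looks fine: whether the <CredentialResolver> sits inside the element it is used from in shibboleth2.xml (in SP 2.x it normally lives under <ApplicationDefaults>), and whether the shibd daemon - which runs as its own user on most distros and caches its configuration - can actually read the key. A quick check (the _shibd user name is an assumption; it varies by distribution):

        ls -l /etc/shibboleth/sp-key.pem /etc/shibboleth/sp-cert.pem
        sudo -u _shibd cat /etc/shibboleth/sp-key.pem > /dev/null && echo "key readable by shibd"
        sudo service shibd restart    # restarting Apache alone does not reload the SP config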

    Read the article

  • ServerRoot in my lighttpd.conf

    - by michael
    Hi, I have use the following example lighttpd.conf to launch my lighttpd. Can you please tell me where is my 'ServerRoot'? # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) server.port = 9090 ## bind to localhost (default: all interfaces) server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => 1026, "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", #"docroot" => "/" # remote server may use # it's own docroot )) ) ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.socket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => 
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you.

    Read the article

  • Plesk 10 Postfix with multiple IP addresses and SSL certificates

    - by JulianB
    We are currently running a root server with Debian 6 and Plesk 10.4.4. We have some virtual hosts using one IP address (shared) - e.g. example1.com - and another virtual host using a dedicated IP address (example2.com). Is there a way to configure Postfix to do the following: Always use the IP address of the virtual host to which the e-mail account belongs (so that an e-mail from [email protected] will originate from the shared IP address and an e-mail from [email protected] will originate from the dedicated IP)? Use different certificates for TLS for example1.com and example2.com? If the latter is not possible: could any problems arise when using the example1.com certificate for example2.com users? Of course, example2.com users would have to configure their clients to use example1.com as the SMTP server name to avoid annoying security warnings. But if we were still able to get the effect of the first point, that would still be acceptable.
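
    For the first point, Postfix can choose the outgoing transport (and therefore the source IP) per sender domain; a rough sketch of the idea, to be checked against the Postfix version Plesk ships (the map file, transport names and the 2.7+ parameter are all assumptions):

        printf '%s\n' '@example1.com smtp-shared:' '@example2.com smtp-dedicated:' > /etc/postfix/sender_transport
        postmap /etc/postfix/sender_transport
        postconf -e 'sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport'
        # each named transport then gets its own "-o smtp_bind_address=..." entry in master.cf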

    Read the article

  • Firefox https problem with localhost

    - by vnuk
    I administer half a dozen servers with (among other things) Webmin. I connect to Webmin via an ssh tunnel to port 10000. All of my Webmin instances run in https mode. Firefox from version 3.6.6 refuses to load my https://localhost:10000 pages, claiming "SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long)". Why is this a problem now? It was working fine (annoying with certificate errors, but working) but now it is not working at all. I have to keep Google Chrome installed just so I can connect to Webmin.
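
    ssl_error_rx_record_too_long usually means the port answered in plain HTTP rather than TLS, so it is worth checking what the tunnelled port actually speaks (port 10000 as in the question, with the ssh tunnel already up):

        openssl s_client -connect localhost:10000 < /dev/null    # should complete a TLS handshake
        curl -vk https://localhost:10000/ -o /dev/null           # -k ignores the self-signed cert
        # if either shows an HTML error page instead of a handshake, the far end of the
        # tunnel is answering in plain HTTP or the tunnel points at the wrong port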

    Read the article

  • DNS zone file SPF configuration to support sending mail from multiple servers and gmail

    - by Tauren
    I want to configure SPF on a domain to allow mail to be sent from: the x.com website server (x.com and www.x.com - both at same IP) it's MX servers (smtp.x.com, mx.x.com, mail.x.com) another server that isn't listed as an MX server (somehost.x.com) via gmail using an account that has authenticated use of [email protected] Will this zone file work? If not, what are the problems with it? $ttl 38400 @ IN SOA ns1.x.com. hostmaster.x.com. ( 201003092 ; serial 8H ; refresh 15M ; retry 1W ; expire 1H ) ; minimum @ NS ns1.x.com. @ NS ns2.x.com. @ MX 10 mx.x.com. @ MX 20 smtp.x.com. @ MX 30 mailhost.x.com. ; SPF records @ IN TXT "v=spf1 a mx a:somehost.x.com include:_spf.google.com ~all" mx IN TXT "v=spf1 a -all" smtp IN TXT "v=spf1 a -all" mailhost IN TXT "v=spf1 a -all" Questions: Is _spf.google.com the right thing to include for gmail.com, or is it only for Google Hosted Apps? If only for Google Apps, what should I include to send from gmail.com? If mail shouldn't be sent from anywhere else, is it safe to use -all instead of ~all? Does it make sense to add specific SPF records for each of the mail servers? Any other problems with the zone file? I want to confirm these things before making changes to my zone file. The file has SPF configured basically the same now, just without google.com and somehost, but I want to make sure I won't break things when I change it.
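
    Once the zone is published, the records can be sanity-checked from a shell before relying on them (dig queries the live data, so this also confirms that the include target resolves; swap in the real domain for x.com):

        dig +short TXT x.com              # should print the v=spf1 string above
        dig +short TXT _spf.google.com    # confirms the include: target exists
        dig +short TXT mx.x.com           # per-host records, repeat for smtp/mailhost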

    Read the article

  • My IIS server won't serve SSL sites to some browsers

    - by sbleon
    (Update: This is now cross-posted at http://stackoverflow.com/questions/3355000. This is the more appropriate forum, but StackOverflow gets a lot more traffic.) I've got an IIS 6.0 server that won't serve pages over SSL to some browsers. In Webkit-based browsers on OS X 10.6, I can't load pages at all. In MSIE 8 on Windows XP SP3, I can load pages, but it will sometimes hang downloading images or sending POSTs. Working: Firefox 3.6 (OS X + Windows) Chrome (Windows) Partially Working: MSIE 8 (works sometimes, but hangs up, especially on POSTs) Not Working: Chrome 5 (OS X) Safari 5 (OS X) Mobile Safari (iOS 4) On OS X (the easiest platform for me to test on), Chrome and Firefox both negotiate the same TLS Cipher, but Chrome hangs on or after the post-negotiation handshake. Chrome packet capture (via ssldump): 1 1 0.0485 (0.0485) C>S Handshake ClientHello Version 3.1 cipher suites Unknown value 0xc00a Unknown value 0xc009 Unknown value 0xc007 Unknown value 0xc008 Unknown value 0xc013 Unknown value 0xc014 Unknown value 0xc011 Unknown value 0xc012 Unknown value 0xc004 Unknown value 0xc005 Unknown value 0xc002 Unknown value 0xc003 Unknown value 0xc00e Unknown value 0xc00f Unknown value 0xc00c Unknown value 0xc00d Unknown value 0x2f TLS_RSA_WITH_RC4_128_SHA TLS_RSA_WITH_RC4_128_MD5 Unknown value 0x35 TLS_RSA_WITH_3DES_EDE_CBC_SHA Unknown value 0x32 Unknown value 0x33 Unknown value 0x38 Unknown value 0x39 TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA compression methods NULL 1 2 0.3106 (0.2620) S>C Handshake ServerHello Version 3.1 session_id[32]= bb 0e 00 00 7a 7e 07 50 5e 78 48 cf 43 5a f7 4d d2 ed 72 8f ff 1d 9e 74 66 74 03 b3 bb 92 8d eb cipherSuite TLS_RSA_WITH_RC4_128_MD5 compressionMethod NULL Certificate ServerHelloDone 1 3 0.3196 (0.0090) C>S Handshake ClientKeyExchange 1 4 0.3197 (0.0000) C>S ChangeCipherSpec 1 5 0.3197 (0.0000) C>S Handshake [hang, no more data transmitted] Firefox packet capture: 1 1 0.0485 (0.0485) C>S Handshake ClientHello Version 3.1 resume [32]= 14 03 00 00 4e 28 de aa da 7a 25 87 25 32 f3 a7 ae 4c 2d a0 e4 57 cc dd d7 0e d7 82 19 f7 8f b9 cipher suites Unknown value 0xff Unknown value 0xc00a Unknown value 0xc014 Unknown value 0x88 Unknown value 0x87 Unknown value 0x39 Unknown value 0x38 Unknown value 0xc00f Unknown value 0xc005 Unknown value 0x84 Unknown value 0x35 Unknown value 0xc007 Unknown value 0xc009 Unknown value 0xc011 Unknown value 0xc013 Unknown value 0x45 Unknown value 0x44 Unknown value 0x33 Unknown value 0x32 Unknown value 0xc00c Unknown value 0xc00e Unknown value 0xc002 Unknown value 0xc004 Unknown value 0x96 Unknown value 0x41 TLS_RSA_WITH_RC4_128_MD5 TLS_RSA_WITH_RC4_128_SHA Unknown value 0x2f Unknown value 0xc008 Unknown value 0xc012 TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA Unknown value 0xc00d Unknown value 0xc003 Unknown value 0xfeff TLS_RSA_WITH_3DES_EDE_CBC_SHA compression methods NULL 1 2 0.0983 (0.0497) S>C Handshake ServerHello Version 3.1 session_id[32]= 14 03 00 00 4e 28 de aa da 7a 25 87 25 32 f3 a7 ae 4c 2d a0 e4 57 cc dd d7 0e d7 82 19 f7 8f b9 cipherSuite TLS_RSA_WITH_RC4_128_MD5 compressionMethod NULL 1 3 0.0983 (0.0000) S>C ChangeCipherSpec 1 4 0.0983 (0.0000) S>C Handshake 1 5 0.1019 (0.0035) C>S ChangeCipherSpec 1 6 0.1019 (0.0000) C>S Handshake 1 7 0.1019 (0.0000) C>S application_data 1 8 0.2460 (0.1440) S>C application_data 1 9 0.3108 (0.0648) S>C application_data 1 10 0.3650 (0.0542) S>C application_data 1 11 0.4188 (0.0537) S>C application_data 1 12 0.4580 (0.0392) S>C 
application_data 1 13 0.4831 (0.0251) S>C application_data [etc] Update: Here's a Wireshark capture from the server end. What's going on with those two much-delayed RST packets? Is that just IIS terminating what it perceives as a non-responsive connection? 19 10.129450 67.249.xxx.xxx 10.100.xxx.xx TCP 50653 > https [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=3 TSV=699250189 TSER=0 20 10.129517 10.100.xxx.xx 67.249.xxx.xxx TCP https > 50653 [SYN, ACK] Seq=0 Ack=1 Win=16384 Len=0 MSS=1460 WS=0 TSV=0 TSER=0 21 10.168596 67.249.xxx.xxx 10.100.xxx.xx TCP 50653 > https [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSV=699250189 TSER=0 22 10.172950 67.249.xxx.xxx 10.100.xxx.xx TLSv1 Client Hello 23 10.173267 10.100.xxx.xx 67.249.xxx.xxx TCP [TCP segment of a reassembled PDU] 24 10.173297 10.100.xxx.xx 67.249.xxx.xxx TCP [TCP segment of a reassembled PDU] 25 10.385180 67.249.xxx.xxx 10.100.xxx.xx TCP 50653 > https [ACK] Seq=148 Ack=2897 Win=524280 Len=0 TSV=699250191 TSER=163006 26 10.385235 10.100.xxx.xx 67.249.xxx.xxx TLSv1 Server Hello, Certificate, Server Hello Done 27 10.424682 67.249.xxx.xxx 10.100.xxx.xx TCP 50653 > https [ACK] Seq=148 Ack=4215 Win=524280 Len=0 TSV=699250192 TSER=163008 28 10.435245 67.249.xxx.xxx 10.100.xxx.xx TLSv1 Client Key Exchange 29 10.438522 67.249.xxx.xxx 10.100.xxx.xx TLSv1 Change Cipher Spec 30 10.438553 10.100.xxx.xx 67.249.xxx.xxx TCP https > 50653 [ACK] Seq=4215 Ack=421 Win=65115 Len=0 TSV=163008 TSER=699250192 31 10.449036 67.249.xxx.xxx 10.100.xxx.xx TLSv1 Encrypted Handshake Message 32 10.580652 10.100.xxx.xx 67.249.xxx.xxx TCP https > 50653 [ACK] Seq=4215 Ack=458 Win=65078 Len=0 TSV=163010 TSER=699250192 7312 57.315338 10.100.xxx.xx 67.249.xxx.xxx TCP https > 50644 [RST, ACK] Seq=1 Ack=1 Win=0 Len=0 19531 142.316425 10.100.xxx.xx 67.249.xxx.xxx TCP https > 50653 [RST, ACK] Seq=4215 Ack=458 Win=0 Len=0

    Read the article

  • Windows product key is valid but won't activate

    - by pnongrata
    Last month, I needed to install Windows XP (Pro Version 2002 SP3) from a Reinstallation CD a co-worker gave me, and with a product key the IT team told me to use. Everything installed successfully and I have been using the XP machine for the last 30 days without any problems; however it kept reminding me to activate Windows, and of course, I never did (laziness). It now has me locked out of my machine and won't let me log in until I activate it. So I proceed to the Activation Screen which asks me: Do you want to activate Windows now? I choose "Yes, let's activate Windows over the Internet now.", and click the Next button. It now asks me: Do you want to register while you are activating Windows? I choose "No, I don't want to register now; let's just activate Windows.", and click the Next button. I now see the following screen: Notice how the title reads "Unauthorized product key", and how there are only 3 buttons: Telephone Remind me later Retry Please note that the Retry button is disabled until I enter the full product key that IT gave me, then it enables. However, at no point in time do I see a Next button, indicating that the product key was valid/successful. So instead, I just click the Retry button, and the screen refreshes, this time with a different title Incorrect product key Could something be wrong with the Windows XP reinstallation CD (do they "expire" after a certain amount of time, etc.)? Or is this the normal/typical workflow for what happens when you just have a bad product key? I ask because, after this happened I emailed IT and they supplied me whether several other product keys to try. But every time its the same result, same thing happening over again and again. So I guess it's possible that IT has given me several bad keys, but it's more likely something else is going on here. Any thoughts or ways to troubleshoot? Thanks in advance!

    Read the article

  • LDAP over SSL/TLS working for everything but login on Ubuntu

    - by Oliver Nelson
    I have gotten OpenLDAP with SSL working on a test box with a signed certificate. I can use an LDAP tool on a Windows box to view the LDAP over SSL (port 636). But when I run dpkg-reconfigure ldap-auth-config to set up my local login to use ldaps, logging in with a username from the directory doesn't work. If I change the config to use just plain ldap (port 389) it works just fine (I can log in with a username from the directory). When it's set up for ldaps, auth.log shows: Sep 5 13:48:27 boromir sshd[13453]: pam_ldap: ldap_simple_bind Can't contact LDAP server Sep 5 13:48:27 boromir sshd[13453]: pam_ldap: reconnecting to LDAP server... Sep 5 13:48:27 boromir sshd[13453]: pam_ldap: ldap_simple_bind Can't contact LDAP server I will provide whatever details are needed. I'm not sure what else to include. Thanks for any insights... OLIVER
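
    Since pam_ldap contacts the server itself, it helps to confirm that an ldaps query works from the same box with the CA certificate that pam/nss will use (the base DN and CA path are placeholders):

        LDAPTLS_CACERT=/etc/ssl/certs/my-ldap-ca.pem \
          ldapsearch -H ldaps://ldap.example.com -x -b "dc=example,dc=com" "(uid=testuser)" dn
        # if this works, check that the pam/nss config (/etc/ldap.conf on this Ubuntu setup)
        # points at the same CA file, since it does not share the command-line client's settings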

    Read the article

  • bind9 dns proxy

    - by Zulakis
    We are offering multiple SSL-enabled services in our local network. To avoid certificate-warnings we bought certificates for server.ourdomain.tld and firewall.ourdomain.tld. We now created a zone in our local DNS-server in which we pointed the hosts to the corresponding private-ips. Now, each time another record for ourdomain.tld, like for example www.ourdomain.tld or alike are changed, we need to update it on both our public-dns-server AND the local dns-server. I would like our local bind-dns to serve all the information from our public-dns but serve different information for these 2 hosts. I know I could possibly have our private-ips in our public-dns but I don't want that for security reasons. The internet dns-server is being managed by a third party, while we have full control of the intranet one. Because of this I am looking for a solution which lets the intranet retrieve the records from the internet one.

    Read the article

  • Should Windows services be created with custom users, or should I use one of LocalSystem/LocalService/NetworkService?

    - by Justin Dearing
    I'm asking the question in general for the average custom-developed NT service or unix OSS daemon ported to Windows with SCM support. However, at the moment my immediate concern is mongodb. From my experience with UNIX I like all my services to run as different unprivileged users. The way this has translated to Windows is as follows: Create a local (or domain, if it has to talk to SQL Server) Windows user with a long random password (lately an ASCII85-encoded GUID generated on a different machine). Set its password to never expire and forbid it from changing its password. Remove that user from the "Users" group. Grant that user the "Log on as a service" right. Give it read permission to the folder where the app resides, and write permission to the logs and data files the application uses. Assign the user to the service. Troubleshoot until the service starts. My feeling is that these unprivileged users are less powerful than the 3 special service accounts. I also feel that by isolating which users run which services, I would limit collateral damage if a way to compromise one service were found.

    Read the article

  • racoon-tool doesn't generate full racoon.conf file in /var/lib/racoon/racoon.conf

    - by robthewolf
    I am using ipsec-tools/racoon to create my VPN. I am using racoon-tool to configure racoon.conf but when I run racoon-tool reload it only generates the first section - Global items. When I run racoon-tool I get: # racoon-tool reload Loading SAD and SPD... SAD and SPD loaded. Configuring racoon...done. This is the entire file /var/lib/racoon/racoon.conf # # Racoon configuration for Samuel # Generated on Wed Jan 5 21:31:49 2011 by racoon-tool # # # Global items # path pre_shared_key "/etc/racoon/psk.txt"; path certificate "/etc/racoon/certs"; log debug; I cannot find anywhere a solution as to why this is happening. Please help

    Read the article

  • Options for EC2 ec2-create-snapshot and family

    - by shabda
    I am trying to use the various tools provided by ec2-ami-tools. E.g., ec2-create-snapshot -h .... -K, --private-key KEY Specify KEY as the private key to use. Defaults to the value of the EC2_PRIVATE_KEY environment variable (if set). Overrides the default. -C, --cert CERT Specify CERT as the X509 certificate to use. Defaults to the value of the EC2_CERT environment variable (if set). Overrides the default. -K and -C are two required values, and I can't understand what values these are expecting. If I create a keypair from Elasticfox, I only get one file to download and a fingerprint. So which of these goes where?
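
    For what it's worth, -K and -C expect the account's X.509 credentials (the pk-*.pem / cert-*.pem pair generated under the AWS account's security credentials), not the SSH keypair Elasticfox creates. A sketch with placeholder file names and volume id:

        export EC2_PRIVATE_KEY=~/.ec2/pk-XXXXXXXXXXXX.pem
        export EC2_CERT=~/.ec2/cert-XXXXXXXXXXXX.pem
        ec2-create-snapshot vol-12345678 -d "nightly backup"
        # equivalent without the environment variables:
        ec2-create-snapshot -K "$EC2_PRIVATE_KEY" -C "$EC2_CERT" vol-12345678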

    Read the article

  • Where did this incorrect cached DNS lookup come from?

    - by Stephen Jennings
    Somehow, I've been having a chronic issue where my computer will get an invalid DNS lookup in its cache for either of the two Exchange servers I use from Mail.app. My workplace runs one of the Exchange servers and I run the other (they are totally unrelated, hosted by different companies, etc.). The problem manifests as a certificate domain error. When it happens, I can run nslookup mail.mydomain.com and I see the incorrect IP address (usually owned by either Apple or Akamai), but if I run nslookup mail.mydomain.com 8.8.8.8, I get the correct address. My real quest is to find out why this keeps happening, and to do that, I'd like to know which server is supplying me this bad DNS entry. Is there a way to check my DNS cache to see where this bad lookup came from?
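
    On OS X the resolver configuration and cache can be inspected directly, which helps narrow down which server handed out the bad answer (10.6-era commands; newer releases restart mDNSResponder instead of flushing):

        scutil --dns | grep -A 3 'resolver #'            # which nameservers each resolver scope uses
        dscacheutil -q host -a name mail.mydomain.com    # what the local cache currently returns
        sudo dscacheutil -flushcache                     # clear the cache and retest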

    Read the article

  • Two Tomcat SSL Providers & One FreeBSD

    - by mosg
    Hello everyone. Question: on FreeBSD 8 I need to have two HTTPS ports open (443 and 444, for example). In other words, I need two providers working simultaneously: an ordinary SSL certificate signed by Thawte on port 443, and a special Russian security provider (DIGTProvider, based on CryptoPro CSP software) on port 444. I also have to mention that the main provider is the 2nd one. Here are some of the DIGTProvider options: add to ${JRE_HOME}/lib/security/java.security this line security.provider.N=com.digt.trusted.jce.provider.DIGTProvider ssl.SocketFactory.provider=com.digt.trusted.jsse.provider.DigtSocketFactory uncomment and edit in conf/server.xml the HTTPS section: sslProtocol="GostTLS" (added) edit bin/catalina.sh and add: export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/opt/cprocsp/lib/ia32" export JAVA_OPTS="${JAVA_OPTS} -Dcom.digt.trusted.jsse.server.certFile=/home//server-gost.cer -Dcom.digt.trusted.jsse.server.keyPasswd=11111111" As far as I know, if I just define two SSL connectors in Tomcat's server.xml configuration file, Tomcat will not start, because in the JRE you can use only one JSSE provider. Thanks for your help.

    Read the article

  • IIS 6.0 subdomains with host headers and non existent subdomains

    - by Mustafakidd
    Hey everyone - We have a wildcard A record pointing to our IP, a number of sites running on IIS 6 with host headers, and a wildcard SSL certificate for the domain so that each site can run under SSL. For example: https://A.foo.com https://B.foo.com https://C.foo.com Everything is working well, but I noticed that when we type a non-existent subdomain, say D.foo.com, it redirects to A.foo.com. Any idea why that is or how I can change it? I think we may have set up the A.foo.com site before we applied the wildcard A record with our domain provider and before we had set up the SSL cert. Thanks.

    Read the article

  • CNAME rule being ignored

    - by Ben
    On a server with Plesk installed I have added a CNAME record pointing from one of the site's subdomains to an external website. I have checked the named configuration for that domain name and it shows the CNAME; however, the subdomain just points to the default server page and ignores the CNAME record. Named has been restarted and I've also run the rvmng reconfigure-vhost command. I tested this on another server running cPanel, and it works fine. The conf file for the domain: ; *** This file is automatically generated by Plesk *** $TTL 86400 @ IN SOA ns.example.com. cf.example1.com. ( 1292946742 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 10800 ) ; Minimum example.com. IN NS ns.example.com. ns.example.com. IN A xx.xxx.xxx.xx example.com. IN A xx.xxx.xxx.xx webmail.example.com. IN A xx.xxx.xxx.xx mail.example.com. IN A xx.xxx.xxx.xx beta.example.com. IN A xx.xxx.xxx.xx ftp.example.com. IN CNAME example.com. www.example.com. IN CNAME example.com. login.example.com. IN CNAME socialize.gigya.com. example.com. IN MX 10 webmail.example.com. You can see the CNAME record in the file, but it just gets ignored. Thanks in advance for any help.
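
    It can help to query the authoritative server directly, bypassing any resolver cache, to confirm whether BIND is really serving what Plesk wrote (names follow the zone file above):

        dig @ns.example.com login.example.com CNAME +norecurse    # what this server publishes
        dig login.example.com CNAME                               # what your usual resolver returns
        # if the first answer shows the CNAME but browsers still land on the default page,
        # the problem is resolver caching or the target host, not the zone file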

    Read the article

  • Dummy/default page for apache

    - by Ency
    I'm trying to set up default page for my apache2, for following cases: User is accessing http://IP_Address instead of hostname Requested protocol (HTTP/HTTPS) is not available (eg. only http*s*://domain.com exists) Currently I've got something like that <VirtualHost eserver:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/local/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> </VirtualHost> I think, it works well, i'm trying to do similar thing for HTTPS, but it does not work. <VirtualHost eserver:443> SSLCertificateKeyFile /etc/apache2/ssl/dummy.key SSLCertificateFile /etc/apache2/ssl/dummy.crt SSLProtocol all SSLCipherSuite HIGH:MEDIUM ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn ServerSignature Off </VirtualHost> My default is places in sites-enabled as a first one 000-default I do not care about not certificate validity during accessing default page, my goal is not show different HTTPS page if user one of points is applied

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >