Search Results

Search found 60311 results on 2413 pages for 'john webb(at)oracle com'.


  • Enabling Samba Shares Across Subnets

    - by John
    I was curious how I could go about setting up Samba so that shares could be seen and used across different subnets. We have some Linux devices that are bound to Active Directory, and we would like them to serve Samba shares to clients that reside in a different subnet from the one the servers are in. Is there any way to do this without needing to set up a WINS server or use legacy NetBIOS methods, since the majority of our clients are Windows 7, Windows Server 2003, Windows Server 2008, and Mac OS X (10.6 or newer)? EDIT: Right now, only clients in the same subnet as the Samba server can see the shares. Clients outside of that subnet (i.e. the client subnet) cannot see or connect to the share. The error returned is: The specified network name is no longer available. It does not seem to matter whether I use the IP, FQDN, or NetBIOS name to connect to the share. We have a common Cisco router handling the inter-subnet routing. Everything else seems to work correctly with this network setup, and the device can be pinged from multiple subnets. I also do not believe it to be a firewall issue, since the rules for this segment are rather lax.
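    A hedged sketch of the NetBIOS-free approach (not from the question itself, and the option names assume Samba 3.x): with NetBIOS disabled and Samba listening only on port 445, clients can connect by DNS name or IP across subnets, though the shares will not be browsable in Network Neighborhood without WINS.
        # smb.conf [global] section - hedged sketch
        disable netbios = yes
        smb ports = 445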

    Read the article

  • Issue resolving names on Hyper-V guest with Routing and Remote Access

    - by John Sheehan
    I've got a Win2k8 standard server running Hyper-V with a Server 2003 web guest instance running. The host is publicly available on the internet. I've created an Internal Private network in the Hyper-V Virtual Network manager. I've set the host IP for that virtual adapter to 192.168.0.1. I've set the IP on the guest to 192.168.0.2. They can ping each other and share files. I can't browse the web on the guest though. NSLOOKUPs are working. I've tried setting the DNS server setting on the guest to 192.168.0.1 and something external like Google's 8.8.8.8 server to no avail. Windows firewall is disabled on the internal virtual network. I've tried it with both DNS installed on the host and without it. I'm not sure which RRAS/NAT settings are relevant to pass on so ask if you need me to clarify anything. How do I get outbound internet working on the guest VM?
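    One detail worth double-checking (an assumption, not stated in the question): with RRAS doing NAT, the guest's default gateway has to point at the host's internal address or nothing gets routed out. A hedged sketch, run inside the Server 2003 guest; the adapter name is a placeholder:
        netsh interface ip set address "Local Area Connection" static 192.168.0.2 255.255.255.0 192.168.0.1 1
        netsh interface ip set dns "Local Area Connection" static 192.168.0.1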

    Read the article

  • Batch file reads in text file and replaces quotes with variables into new file?

    - by John
    I have customer pizza lists that I have saved into 5 separate txt files from my database in this format:
    Filename = 25Percent.TXT
    "555-1211"
    "555-1212"
    "555-1223"
    ... etc.
    Each list is a phone number in quotes, and each list varies in length. Part 1: I have two sets of variables that I would like to substitute for the opening and closing quotes on each line in the 5 text files. The two variables would be:
    Var A = < Discount Pizza price for phone number is "
    Var B = " 25 %
    So I would like to run a batch file that reads each line in the text file and writes the following into another text file, replacing the quotes with the variables:
    New Filename = 25Percentfinished.TXT
    < Discount Pizza price for phone number is "555-1211" 25 %
    < Discount Pizza price for phone number is "555-1212" 25 %
    < Discount Pizza price for phone number is "555-1223" 25 %
    Then I would repeat for 30percent.txt, 35percent.txt, 40percent.txt, and finally 50percent.txt. Part II: I would also like to append the 5 new files together, with the append command? I am assuming that the SET command would also be used? Not sure what to do here.
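    A hedged sketch of one way to do both parts in a single batch file (file names and the discount text are taken from the question; the loop values 25/30/35/40/50 are an assumption):
        @echo off
        setlocal
        rem Part 1: wrap every quoted number with the prefix/suffix text.
        for %%P in (25 30 35 40 50) do (
            if exist "%%PPercent.TXT" (
                (for /f "usebackq delims=" %%L in ("%%PPercent.TXT") do (
                    echo ^< Discount Pizza price for phone number is %%L %%P %%
                )) > "%%PPercentfinished.TXT"
            )
        )
        rem Part II: append the five finished files into one combined file.
        copy /b 25Percentfinished.TXT+30Percentfinished.TXT+35Percentfinished.TXT+40Percentfinished.TXT+50Percentfinished.TXT AllPercentfinished.TXT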

    Read the article

  • How to fix subfolders IIS7 functionality?

    - by Amr ElGarhy
    I have a problem with my shared hosting: all websites in subfolders get URLs that appear like this: http://amrelgarhy.com/amrelgarhy/ I wrote to GoDaddy, and they replied that it's because of IIS7 and that they can't solve it. Can anyone tell me how to fix this? Here is what I sent to GoDaddy and their reply: "As I saw before on this page http://www.godaddy.com/gdshop/hosting/shared.asp?ci=9009 (compare Windows plans), "Multiple Web sites: unlimited", so I have the right to run more than one website inside my hosting. But what I am facing now is that I can't make more than one website a primary website. I have igurr.com as the primary website; I want to make the others primary because I am facing a problem where all home pages for the other websites (which are physically in subfolders) look like "http://amrelgarhy.com/amrelgarhy/" - the URL plus the folder name - and that is what I don't want." GODADDY: "Thank you for contacting Hosting Support. The behavior you are describing is standard for IIS 7.0 accounts. All alias domains in this environment will append the folder name they are located in, i.e. an alias domain www.coolexample.com pointed to the '/example' directory will display in a browser as "www.coolexample.com/example". This is due to the way IIS 7.0 handles virtual directories. Unfortunately we do not have any direct workaround for this. We apologize for any inconvenience this may cause."
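    A hedged workaround sketch (it requires the IIS URL Rewrite module, which may or may not be enabled on this hosting plan; the domain and folder names are taken from the question). Placed in the web.config at the account root under <system.webServer>, it maps the alias domain onto its subfolder without the folder name appearing in the address bar:
        <rewrite>
          <rules>
            <rule name="Map alias domain to its subfolder" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^(www\.)?amrelgarhy\.com$" />
                <add input="{PATH_INFO}" pattern="^/amrelgarhy/" negate="true" />
              </conditions>
              <action type="Rewrite" url="/amrelgarhy/{R:0}" />
            </rule>
          </rules>
        </rewrite>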

    Read the article

  • Directly reading a LTO tape drive

    - by John
    On our server (Windows Server 2003), is it possible to directly read our LTO-4 tape drive and copy the entire ntbackup-created .bkf file on it to an external hard disk? (Is the backup even stored on the tape as a .bkf file? I'm going off how things worked when we only used external USB hard drives.)

    Read the article

  • OS X can't copy files from Windows Home Server...over wifi.

    - by John Clayton
    I'm a brand spanking new user of OS X, coming from a lifetime of Windows use. I've been setting up my new MacBook Pro and have run into a very unusual problem. Over Wi-Fi, I am unable to copy files to or from my Windows Home Server. The problem seems to exist only over Wi-Fi, and only to WHS. Here are the details of my setup: 2010 MacBook Pro (Core i7), OS X 10.6.3; Windows Home Server PP3 (virtualized in XenServer 5.5); Windows 7 Ultimate x64 desktop; Windows 7 Ultimate x64 in Boot Camp; D-Link DIR-655 wireless N router. Here is what I've done to narrow down the problem: Files copy fine from WHS to OS X when using gigabit ethernet. Files copy fine from desktop to OS X when using gigabit ethernet. Files fail to copy from WHS to OS X when using Wi-Fi (error -51). Files copy fine from desktop to OS X when using Wi-Fi. Files copy fine from WHS to Boot Camp when using Wi-Fi. Files copy fine from desktop to Boot Camp when using Wi-Fi. From what I can tell, it seems to be some sort of issue between OS X and WHS, but I can't for the life of me see what would be different between shares on WHS and my desktop. They are both connected using smb://ADDRESS (I've tried both by IP and by name). I can browse the shares on the WHS, but copying to OS X fails. I originally found the issue while installing VS2010 off an ISO from WHS, mounted to a Windows 7 VM using VMware Fusion. During the installation the VM was unusable - even the clock got behind the host by about 8 minutes. Once I plugged in the ethernet and disabled the Wi-Fi, things picked up and finished quickly. The Fusion 3.1 RC is the only thing I can think of that I installed that may have messed with the Wi-Fi driver. I've also tried resetting the wireless router, and have changed it from G & N to N-only. Under Boot Camp I get similar speeds to my wife's N laptop. Any ideas? Thanks!

    Read the article

  • Nginx .zip files return 404

    - by Kenley Tomlin
    I have set up Nginx as a reverse proxy for Node and to serve my static files and user uploaded images. Everything is working beautifully except that I can't understand why Nginx can't find my .zip files. Here is my nginx.conf. user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; proxy_cache_path /var/www/web_cache levels=1:2 keys_zone=ooparoopaweb_cache:8m max_size=1000m inactive=600m; sendfile on; upstream *******_node { server 172.27.198.66:8888 max_fails=3 fail_timeout=20s; #fair weight_mode=idle no_rr } upstream ******_json_node { server 172.27.176.57:3300 max_fails=3 fail_timeout=20s; } server { #REDIRECT ALL HTTP REQUESTS FOR FRONT-END SITE TO HTTPS listen 80; server_name *******.com www.******.com; return 301 https://$host$request_uri; } server { #MOBILE APPLICATION PROXY TO NODE JSON listen 3300 ssl; ssl_certificate /*****/*******/json_ssl/server.crt; ssl_certificate_key /*****/******/json_ssl/server.key; server_name json.*******.com; location / { proxy_pass http://******_json_node; proxy_redirect off; proxy_set_header Host $host ; proxy_set_header X-Real-IP $remote_addr ; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for ; proxy_set_header X-Forwarded-Proto https; client_max_body_size 20m; client_body_buffer_size 128k; proxy_connect_timeout 90s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffers 32 4k; } } server { #******.COM FRONT-END SITE PROXY TO NODE WEB SERVER listen 443 ssl; ssl_certificate /***/***/web_ssl/********.crt; ssl_certificate_key /****/*****/web_ssl/myserver.key; server_name mydomain.com www.mydomain.com; add_header Strict-Transport-Security max-age=500; location / { gzip on; gzip_types text/html text/css application/json application/x-javascript; proxy_pass http://mydomain_node; proxy_redirect off; proxy_set_header Host $host ; proxy_set_header X-Real-IP $remote_addr ; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for ; proxy_set_header X-Forwarded-Proto https; client_max_body_size 20m; client_body_buffer_size 128k; proxy_connect_timeout 90s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffers 32 4k; } } server { #ADMIN SITE PROXY TO NODE BACK-END listen 80; server_name admin.mydomain.com; location / { proxy_pass http://mydomain_node; proxy_redirect off; proxy_set_header Host $host ; proxy_set_header X-Real-IP $remote_addr ; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for ; client_max_body_size 20m; client_body_buffer_size 128k; proxy_connect_timeout 90s; proxy_send_timeout 90s; proxy_read_timeout 90s; proxy_buffers 32 4k; } } server { # SERVES STATIC FILES listen 80; listen 443 ssl; ssl_certificate /**/*****/server.crt; ssl_certificate_key /****/******/server.key; server_name static.domain.com; access_log static.domain.access.log; root /var/www/mystatic/; location ~*\.(jpeg|jpg|png|ico)$ { gzip on; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype image/png image/jpeg application/zip; expires 10d; add_header Cache-Control public; } location ~*\.zip { #internal; add_header Content-Type "application/zip"; add_header Content-Disposition "attachment; filename=gamezip.zip"; } } } include tcp.conf; Tcp.conf contains settings that allow Nginx to proxy websockets. I don't believe anything contained within it is relevant to this question. 
I also want to add that I want the zip files to be a forced download.
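    One syntax detail worth ruling out (a hedged guess, possibly unrelated to the actual cause): nginx regex locations are normally written with a space between the ~* modifier and the pattern; without it, the argument can be read as a literal prefix match that never matches a real .zip request. A minimal sketch of such a block, with the forced download handled via Content-Disposition (the root path is taken from the config above):
        location ~* \.zip$ {
            root /var/www/mystatic/;
            default_type application/zip;
            add_header Content-Disposition "attachment";
        }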

    Read the article

  • Google Website Optimizer not tracking conversions any more.

    - by Pickledegg
    In a nutshell my split tests aren't tracking conversions at all. My A/B pages are on http://www.mydomain.com, and my conversion page is the last stage of my shopping cart on https://secure.mydomain.com. I thought the most concise way of explaining this would be to post my page source code: http://pastebin.com/ru7dCDqD To summarize, the pages are being displayed correctly in my test report, but no conversions are being tracked.

    Read the article

  • Googlebot repeatedly looks for files that aren't on my server

    - by John at CashCommons
    I'm hosting a site for a volunteer organization. I've moved the site to WordPress, but it wasn't always that way. I suspect at one point it was hacked badly. My Apache error log file has grown to 122 kB in just the past 18 hours. The large majority of the errors logged are of this form - it's repeated hundreds of times today alone in my log files: [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/calendar.php [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/404.shtml (I verified that xx.xxx.xx.xxx was a Google server.) I suspect there was a security hole somewhere before, likely in calendar.php, that was exploited. The files don't exist anymore, but there may be many backlinks that still reference them, which is why Googlebot is so interested in crawling them. How do I fix this gracefully? I still would like Google to index the site. I just want to tell it somehow not to look for these files anymore.
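    A hedged sketch of one graceful option (not from the question): answer those URLs with 410 Gone from the vhost or .htaccess, so crawlers treat them as permanently removed rather than retrying a 404 indefinitely:
        Redirect gone /calendar.php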

    Read the article

  • Cannot get git working

    - by Devin Dixon
    I'm trying to install my own git server with these instructions: http://cisight.com/how-to-setup-git-server-using-gitolite-in-ubuntu-11-10-oneiric/ But I get stuck at this point. git clone --verbose [email protected]:testing.git Cloning into 'testing'... Permission denied (publickey). fatal: The remote end hung up unexpectedly And I think it has something to do with this: gitolite@ip-xxxx:~$ gl-setup tmp/john.pub key_read: uudecode Aklkdfgkldkgldkgldkgfdlkgldkgdlfkgldkgldkgdlkgkfdnknbkdnbkdnbkdnbkfnbkdfnbkdnfbkdfnbdknbkdnbkfnbkdbnkdbnkdfnbkd john@example.com failed fprint failed I always get that failure and I think it's preventing me from cloning the repo. The repo is there along with the gitolite-admin.git repo. The permissions are this: drwxr-x--- 8 gitolite gitolite 4096 Jun 6 16:29 gitolite-admin.git drwxr-x--- 7 gitolite gitolite 4096 Jun 6 16:29 testing.git So my question is: what am I missing here?
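    A hedged reading of the error: key_read/uudecode failing during gl-setup usually means the .pub file is not a valid single-line OpenSSH public key (for example, it lost its "ssh-rsa " prefix or got line-wrapped when copied). A sketch of re-copying the key intact; the server name is a placeholder:
        # on the client
        ssh-keygen -t rsa -f ~/.ssh/john          # only if a fresh keypair is wanted
        scp ~/.ssh/john.pub gitolite@yourserver:tmp/john.pub
        # on the server, as the gitolite user; the file must be one unwrapped line starting with "ssh-rsa AAAA..."
        gl-setup tmp/john.pub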

    Read the article

  • How to control admission policy in vmware HA?

    - by John
    Simple question: I have 3 hosts running vSphere 4.1 Essentials Plus with VMware HA. I tried to create several virtual machines that filled 90% of each server's memory capacity. I know that VMware has really sophisticated memory management within virtual machines, but I do not understand why vCenter allows me to even power on virtual machines that exceed the memory level at which a host failover could still be handled. Is it because the virtual machines are not actually using the memory, so it is still considered free and the virtual machines can be powered on? But what would happen if all VMs were really using their RAM before the host failure - they could not be migrated to the other hosts after the failure. The default behaviour in XenServer is that it automatically calculates the maximum memory level that can be used within the cluster so that a host failure is still protected against. Does VMware do the same thing? The admission policy is enabled. VMware HA is enabled.

    Read the article

  • How to upgrade the hard drive in a MacBook Pro?

    - by John McC
    I have a MacBook Pro with Snow Leopard that I want to upgrade to a new, larger hard drive. I also have a current Time Machine backup on an external USB drive and an external SATA case that I can put a 2.5" drive in. What's the best procedure for transferring the existing installation to the new drive?

    Read the article

  • iRedMail home setup - use different SMTP relay for different destination domains

    - by John
    Hello helpful server folks, I'm messing with iRedMail. I've mostly been successful, I think I have an SMTP problem. I have changed RoundCube (webmail) to use BrightHouse's, my ISP's, SMTP server for outgoing. It works fine, I click send and poof, I have gmail. I can reply from gmail to my email server, and it works. It took 10 hours for the email to show up, which is a different problem, I think, but it does work. But when I send from my server TO my own server, my ISP's Postmaster account sends me a cryptic blurb. I just got off the phone with them, and they say it "should work", and that they can't reach my pop3 server. (pop3, pop3s, imap, and imaps are all open on my router and forwarded to the server, I'm not sure what I need, I'm just covering my bases...) pop3 and/or imap as external interfaces are just formalities, I really just want webmail to work. Roundcube only takes one SMTP server in its configs. How can I configure Postfix to relay / forward emails to my ISP's SMTP, while taking messages bound for my own domain and processing them? Since my ISP won't let me "bounce" my emails off of it. Maybe I'm vastly misunderstanding how e-mail works in general: To receive mail, I should only need port 25, SMTP, open to the internet, correct? Should I be concerned about some authentication failure from the outside to my relay? (My relay requires user/pass to use, my ISP's requires none.)
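    A hedged sketch of destination-based routing in Postfix (Postfix being the MTA iRedMail ships; the ISP hostname and domain are placeholders): mail for the local domain is delivered locally, and everything else is handed to the ISP relay.
        # /etc/postfix/main.cf
        relayhost = [smtp-server.brighthouse.example]
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport  (run "postmap /etc/postfix/transport" and "postfix reload" afterwards)
        yourdomain.example    local:
        *                     smtp:[smtp-server.brighthouse.example]
    And on the last part of the question: yes, receiving mail from the outside only requires port 25/SMTP to be reachable; POP3/IMAP are only for clients picking mail up afterwards.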

    Read the article

  • openldap search acl

    - by Patrick
    I'm trying to write an access control for OpenLDAP to allow a user to search with a certain base DN, but only get results back from certain sub-DNs. I've played with lots of different rules but can't get it to work. I'm not sure it's even possible. For example: I have the user with the DN uid=testuser,ou=people,dc=example,dc=com. I want this user to be able to search with a base of dc=example,dc=com and get back entries in ou=people,dc=example,dc=com. There are lots of other sub-OUs under dc=example,dc=com, but only entries in ou=people should be returned (for bonus, I'd only like certain attributes to be returned as well). Can this be done?
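    A hedged sketch in slapd.conf-style ACL syntax (placement relative to existing ACLs matters, and this assumes no earlier rule already grants broader access): give testuser read access only under ou=people, and let the base entry be searched so a subtree search from dc=example,dc=com can descend. An attrs= clause on the first rule would cover the bonus attribute restriction.
        access to dn.subtree="ou=people,dc=example,dc=com"
                by dn.exact="uid=testuser,ou=people,dc=example,dc=com" read
                by * break
        access to dn.base="dc=example,dc=com"
                by dn.exact="uid=testuser,ou=people,dc=example,dc=com" search
                by * break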

    Read the article

  • Freenas 8 email setup

    - by atrueresistance
    I'm struggling with setting up email reporting in FreeNAS. My build is FreeNAS-8.0.4-RELEASE-x64 (10351). I have my IPv4 default gateway set to 192.168.2.1 (my router) and Nameserver 1 as 8.8.8.8 (Google's public DNS). Under my Email tab I have:
    From email: ***@gmail.com
    Outgoing mail server: smtp.google.com
    Port to connect to: 465
    TLS/SSL: SSL
    Use SMTP auth: checked
    Username: ***@gmail.com
    Password: ****
    I then went into accounts and changed the root email to ***@gmail.com. When I try and send a test email, I get: Your test email could not be sent: timed out So what am I doing wrong?
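    For reference (a note, not from the question): Gmail's outgoing server is smtp.gmail.com rather than smtp.google.com. A hedged sketch of the values that usually work for a form like this:
        Outgoing mail server: smtp.gmail.com
        Port: 465 with SSL (or 587 with TLS/STARTTLS)
        SMTP auth: full Gmail address and password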

    Read the article

  • "tasksel: aptitude failed (100)" installing LAMP on Ubuntu (vagrant)

    - by John Mee
    vagrant@precise64:~$ sudo tasksel install lamp-server tasksel: aptitude failed (100) I'm hoping to get a LAMP server from a clean install of Ubuntu. All the Google results suggest tasksel, which goes into a pretty ASCII GUI, downloads stuff, then dies with this message. I'm guessing I should try installing the individual packages via apt-get, but there seems a fair chance somebody else is going to search for this error message and want the right answer.
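    A hedged fallback sketch (assuming Ubuntu 12.04): installing the same task through apt-get usually surfaces a more useful error message if one package is the problem.
        sudo apt-get update
        sudo apt-get install lamp-server^        # the trailing caret selects the tasksel task
        # or the individual packages:
        sudo apt-get install apache2 mysql-server php5 libapache2-mod-php5 php5-mysql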

    Read the article

  • How to disable SSL 2.0 on IIS 7.5?

    - by John Hoge
    I've seen this KB Article which Microsoft put out covering how to remove SSL 2.0 on IIS 7.0 and earlier, but I can't find anything advising on how to do the same on IIS 7.5. The registry keys mentioned on that KB article are no longer in the registry.
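    A hedged sketch of the registry approach on Server 2008 R2 / IIS 7.5: the same SCHANNEL mechanism from the IIS 7.0 article still applies, but the protocol keys are usually not created by default, so they have to be added before setting Enabled to 0; a reboot is needed afterwards.
        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
        "Enabled"=dword:00000000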

    Read the article

  • SSLCipherSuite - disable weak encryption, cbc cipher and md5 based algorithm

    - by John
    A developer recently ran a PCI scan with TripWire against our LAMP server. They identified several issues and instructed the following to correct them: Problem: SSL Server Supports Weak Encryption for SSLv3, TLSv1. Solution: Add the following rule to httpd.conf: SSLCipherSuite ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM Problem: SSL Server Supports CBC Ciphers for SSLv3, TLSv1. Solution: Disable any cipher suites using CBC ciphers. Problem: SSL Server Supports Weak MAC Algorithm for SSLv3, TLSv1. Solution: Disable any cipher suites using MD5-based MAC algorithms. I tried searching Google for a comprehensive tutorial on how to construct an SSLCipherSuite directive to meet my requirements, but I didn't find anything I could understand. I see examples of SSLCipherSuite directives, but I need an explanation of what each component of the directive does. So even in the directive SSLCipherSuite ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM, I don't understand, for example, what the !LOW means. Can someone either a) tell me the SSLCipherSuite directive that will meet my needs or b) show me a resource that clearly explains what each segment of an SSLCipherSuite is and how to construct one?
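    A hedged sketch for httpd.conf, plus what the pieces mean: in an OpenSSL cipher string, ALL starts from every suite; !LOW permanently removes low-strength (64-bit or weaker) ciphers; !EXP removes export-grade ciphers; !aNULL and !eNULL remove unauthenticated and unencrypted suites; !MD5 removes suites whose MAC is MD5; and +HIGH/+MEDIUM only reorder suites rather than add them. Excluding every CBC suite is awkward on older OpenSSL builds and may mean listing the allowed suites explicitly; "openssl ciphers -v '<string>'" shows exactly what a given string selects.
        SSLProtocol all -SSLv2
        SSLHonorCipherOrder on
        SSLCipherSuite HIGH:MEDIUM:!aNULL:!eNULL:!EXP:!LOW:!MD5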

    Read the article

  • Apache + Codeigniter + New Server + Unexpected Errors

    - by ngl5000
    Alright here is the situation: I use to have my codeigniter site at bluehost were I did not have root access, I have since moved that site to rackspace. I have not changed any of the PHP code yet there has been some unexpected behavior. Unexpected Behavior: http://mysite.com/robots.txt Both old and new resolve to the robots file http://mysite.com/robots.txt/ The old bluehost setup resolves to my codeigniter 404 error page. The rackspace config resolves to: Not Found The requested URL /robots.txt/ was not found on this server. **This instance leads me to believe that there could be a problem with my mod rewrites or lack there of. The first one produces the error correctly through php while it seems the second senario lets the server handle this error. The next instance of this problem is even more troubling: 'http://mysite.com/search/term/9 x 1-1%2F2 white/' New site results in: Bad Request Your browser sent a request that this server could not understand. Old site results in: The actual page being loaded and the search term being unencoded. I have to assume that this has something to do with the fact that when I went to the new server I went from root level htaccess file to httpd.conf file and virtual server default and default-ssl. Here they are: Default file: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName mysite.com DocumentRoot /var/www <Directory /> Options +FollowSymLinks AllowOverride None </Directory> <Directory /var/www> Options -Indexes +FollowSymLinks -MultiViews AllowOverride None Order allow,deny allow from all RewriteEngine On RewriteBase / # force no www. (also does the IP thing) RewriteCond %{HTTPS} !=on RewriteCond %{HTTP_HOST} !^mysite\.com [NC] RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L] # index.php remove any index.php parts RewriteCond %{THE_REQUEST} /index\.(php|html) RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L] # codeigniter direct RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^.*$ index.php [L] </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Default-ssl File <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost ServerName mysite.com DocumentRoot /var/www <Directory /> Options +FollowSymLinks AllowOverride None </Directory> <Directory /var/www> Options -Indexes +FollowSymLinks -MultiViews AllowOverride None Order allow,deny allow from all RewriteEngine On RewriteBase / RewriteCond %{SERVER_PORT} !^443 RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L] # index.php remove any index.php parts RewriteCond %{THE_REQUEST} /index\.(php|html) RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L] # codeigniter direct RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^.*$ index.php [L] </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # Use our self-signed certificate by default SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. # SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem # SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. 
Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. # o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. 
a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown httpd.conf File Just a lot of stuff from html5 boiler plate, I will post it if need be Old htaccess file <IfModule mod_rewrite.c> # index.php remove any index.php parts RewriteCond %{THE_REQUEST} /index\.(php|html) RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L] RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^(.*)/$ /$1 [r=301,L] # codeigniter direct RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^(.*)$ /index.php/$1 [L] </IfModule> Any Help would be hugely appreciated!!
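    A hedged guess at the two behaviour changes (not confirmed): the old .htaccess had a rule stripping trailing slashes that the new vhost config does not, which would explain /robots.txt/ being handled differently, and a "Bad Request" on URLs containing %2F usually points at Apache rejecting encoded slashes. A sketch for the vhosts; AllowEncodedSlashes belongs at the <VirtualHost> level, not inside <Directory>, and the rewrite lines are copied from the old .htaccess shown above:
        AllowEncodedSlashes On
        # inside the <Directory /var/www> rewrite block, above the CodeIgniter catch-all:
        RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
        RewriteRule ^(.*)/$ /$1 [R=301,L]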

    Read the article

  • mod_rewrite for selectors with .html

    - by user1720607
    We have a website where the URL looks something like www.example.com/about.smart.html ("smart" being a selector added on the app server, based on the user agent, if it's a smartphone device). We need to redirect the page to 404 if the URL is changed by the user, as below: (1) www.example.com/about.abc.xyz.smart.html (2) www.example.com/about.smart.abc.html I tried the rule below, but it redirects to 404 only for (1) and not for (2): RewriteCond %{REQUEST_URI} !^(.*)(-)\.html$ RewriteRule (.*)\.(.*).smart.html$ - [R=404,L] Any pointers on this would be of great help.
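    A hedged alternative sketch (it assumes only single-segment page names like "about" are valid): return 404 for any request that contains ".smart." but is not exactly /name.smart.html.
        RewriteCond %{REQUEST_URI} \.smart\.
        RewriteCond %{REQUEST_URI} !^/[^.]+\.smart\.html$
        RewriteRule ^ - [R=404,L]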

    Read the article

  • Qmail does not forward mail to a specific domain

    - by jahufar
    Hi, I have a dedicated hosting account with GoDaddy.com. I've pointed my domain's email to work with Google Apps. The server has qmail running, and it forwards email to all domains just fine except for MY domain (mydomain.com) - it says 550 User xxx not found in mydomain.com. I believe it thinks I've hosted email on the server itself (not Gmail) and it's trying to validate whether xxx@mydomain.com exists on my server (which is not the case, since it's all handled by Google Apps). How do I make it forward mail to all domains? Thank you :) EDIT: I would only need it to forward emails if the connection originates from 127.0.0.1 - which I believe is the default way it's configured. So to clarify: I just need a purely forwarding configuration, so my PHP scripts have the ability to send email.
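    A hedged sketch of where to look (a standard qmail layout is assumed): if mydomain.com is listed in qmail's locals or virtualdomains control files, qmail looks for local mailboxes instead of following the domain's MX records to Google Apps. Removing those entries and reloading qmail-send makes it relay instead.
        grep 'mydomain\.com' /var/qmail/control/locals /var/qmail/control/virtualdomains
        # remove the matching lines, then make qmail-send reread its control files, e.g. under daemontools:
        svc -h /service/qmail-send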

    Read the article

  • Gateway GT5220 Boot/POST Failure

    - by John Rudy
    I have a Gateway GT5220 I'm troubleshooting. It is, in fact, the machine I just gave my father for his birthday a couple months ago. (Prior to that, it was my home PC. My home PC is now the MacBook on which I'm writing this.) Before going any further, I suspect that the answer will be, "It's worse than that, it's dead, Jim, it's dead, Jim, it's dead, Jim." At least, mobo and/or CPU. The initial symptoms were as follows: Turn on power All fans fire up (thus making it so I can't hear if the hard drive is spinning or not, nor are my hands sensitive enough anymore to feel it) No LEDs remained lit on the front panel. (Initially, the hard drive indicator flashed briefly.) No beep, no video, no nothing. Following some advice I found here, I tried to "drain the stored power." After following those steps, the new symptoms were: Turn on power All fans fire up The front panel LEDs remained lit! After about 20, maybe 30 seconds, we had video! Sort of. We got to the Gateway splash/POST screen, which appeared thoroughly corrupted. How corrupted? Well, I imagine it's what a POST screen would look like after reading the wrong passage out of the Necronomicon: It stayed there. I gave it at least 5, maybe 6 minutes, and it didn't move. So I shut her down, started her up again, and now (this is where we currently stand, symptomatically) we have this: Turn on power All fans fire up The front panel LEDs remain lit No video, no beep, no nothing. I'm a software guy; haven't done real hardware troubleshooting in years. My gut tells me that the mobo and/or CPU is fried, and unfortunately my gut didn't get to be as big as it is being wrong all the time. :( In addition to the link above, I have read all of the following (trying to save you some LMGTFY trouble): Gateway Support POST Error Messages and Handling About a zillion (useless) POST beep code sites A kioskea.net post indicating that most likely we're at what I consider "total loss" (mobo and/or CPU) My questions: Are there any conditions other than mobo/CPU that could cause symptoms like these? Is it worth my time to try the next hardware troubleshooting step?(IE, remove all non-critical hardware from the machine, try to boot, systematically replace one by one until we find the failing component) Which mobos will fit in the Gateway GT5220 case (with rear ports correctly aligned)? (Why this is not a dupe: I wouldn't have posted this question if it hadn't been for the funkadelic possessed video display on the one occasion we got video out. I think that justified this not being an exact dupe. Of course, if the community overrules, I will understand.)

    Read the article

  • How can I get WAMP and a domain name to work on a non-standard port?

    - by David Murdoch
    I have read countless articles on setting up a domain on WAMP to listen on a port other than 80; none of them are working. I've got Windows Server 2008 (Standard) with IIS 7 installed and running on port 80 (and 443). I've got WAMP installed with the following configuration. Listen 81 ServerName sub.example.com:81 DocumentRoot "C:/Path/To/www" <Directory "C:/Path/To/www"> Options All MultiViews AllowOverride All # onlineoffline tag - don't remove Order Allow,Deny Allow from all </Directory> localhost:81 works with the above configuration but sub.example.com:81 does not. Just to make sure my firewall wasn't getting in the way I have disabled it completely. My sub.example.com domain is already pointing to my server and works on IIS on port 80. Also, if I disable IIS and change the Apache port from 81 to 80 it works. Yes, I am restarting Apache after each httpd.conf change. :-) I don't need any other domain (or sub domains [I don't even care about localhost]) configured which is why I'm not using a VirtualHost. Any ideas what is going on here? What could I be doing wrong? Update Changing Listen to 80 but keeping ServerName as sub.example.com:81 causes navigation to sub.example.com:80 to work; this just doesn't seem right to me. Could ServerName be ignoring the :port part somehow? netstat -a -n | find "TCP": >netstat -a -n | find "TCP" TCP 0.0.0.0:81 0.0.0.0:0 LISTENING TCP 0.0.0.0:135 0.0.0.0:0 LISTENING TCP 0.0.0.0:445 0.0.0.0:0 LISTENING TCP 0.0.0.0:912 0.0.0.0:0 LISTENING ... TCP 127.0.0.1:81 127.0.0.1:49709 TIME_WAIT ...
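    One thing that may be worth trying (hedged; the paths and names are taken from the question): moving the site into an explicit name-based vhost on port 81. The :81 suffix in ServerName mainly affects self-referential redirects; it does not itself bind the port - Listen does.
        Listen 81
        NameVirtualHost *:81
        <VirtualHost *:81>
            ServerName sub.example.com
            DocumentRoot "C:/Path/To/www"
        </VirtualHost>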

    Read the article

  • help with sendmail configuration to send mail through my gmail account?

    - by pradeepa
    This is the sendmail.ini file - what do I need to change now? # Example for a user configuration file # Set default values for all following accounts. defaults logfile "\xampp\sendmail\sendmail.log" # Mercury account Mercury host localhost from postmaster@localhost auth off # A freemail service example account gmail tls on tls_certcheck off host smtp.gmail.com from ****@gmail.com auth on user ****@gmail.com password ******* # Set a default account account default : Mercury
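    A hedged sketch, assuming the file keeps the account-style syntax shown above: the likely change is pointing the default account at gmail instead of Mercury (and, if plain SSL fails, trying the submission port 587). The address and password are placeholders.
        account gmail
        tls on
        tls_certcheck off
        host smtp.gmail.com
        port 587
        auth on
        user yourname@gmail.com
        password yourpassword
        from yourname@gmail.com

        # make gmail the default account
        account default : gmail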

    Read the article
