Search Results

Search found 12836 results on 514 pages for 'isp org'.


  • How do I check a reverse PTR record?

    - by Daisetsu
     I need to check a reverse PTR record to make sure that a script I have is sending emails which will actually be received by my users and not incorrectly marked as spam. I understand that the ISP which owns the IP range has to set up the PTR record, but how do I check whether it is already set up?
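
     A quick way to check is a reverse lookup with dig or host; this is only a sketch, and 203.0.113.25 stands in for whatever IP the script sends from:

         dig -x 203.0.113.25 +short      # prints the PTR name, or nothing if none is set
         host 203.0.113.25               # same check, with a friendlier error message

     If the answer is empty (or host reports NXDOMAIN), no PTR record has been published for that address yet.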

    Read the article

  • Guacamole on KVM

    - by Siem Hermans
     I currently run a deployment where I provide virtual machines to roughly 25 users through ESXi over the VMware WSX web portal. This works, no doubt about it; it is fast, stable and reliable enough for the end users. However, I stumbled across the Guacamole project (http://guac-dev.org/) and the KVM project (http://www.linux-kvm.org/). I am no genius when it comes to Linux, but I am interested in replacing the ESXi and WSX combination with a Guacamole and KVM deployment. I have seen people across the internet use ESXi in combination with Guacamole (mostly prior to the release of WSX), but I have yet to see someone use it in conjunction with KVM. Considering my limited knowledge of Linux in general, I would like to ask: is it possible at all to combine the two?
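
     The two do fit together in principle: Guacamole is essentially an HTML5 gateway for VNC/RDP/SSH, and a KVM guest can expose a plain VNC console that Guacamole then proxies. A rough sketch only (the memory size, image path and port here are made up for illustration); a guest started like this would be reachable by pointing a Guacamole VNC connection at the KVM host on port 5901:

         qemu-system-x86_64 -enable-kvm -m 2048 \
             -drive file=/var/lib/libvirt/images/guest1.img \
             -vnc 0.0.0.0:1      # VNC display :1 = TCP port 5901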

    Read the article

  • Correctly generating a self-signed certificate in Zimbra

    - by rkmax
     I have a single mail server with Zimbra 8.0.0. To generate the certificate I am following these steps:

         ORG=MyOrganization
         CN=mail.mydomain.com
         COUNTRY=myCountry
         CITY=myCity
         /opt/zimbra/bin/zmcertmgr createcrt -new -days 365 -subject "/C=$COUNTRY/ST=N/A/L=$CITY/O=$ORG/OU=ZCS/CN=$CN"
         /opt/zimbra/bin/zmcertmgr deploycrt self -allserver
         su - zimbra "zmcontrol restart"

     I verify with /opt/zimbra/bin/zmcertmgr viewdeployedcrt and can see the new cert. In Chrome I go to https://mail.mydomain.com and export the .cer. To test it on a Windows client:

         certutil.exe -addstore root \path\to\exported.cert

     which reports:

         root "Root Certification Authorities trusted"
         You can add a root certificate to the root store
         CertUtil: -addstore command error: 0x8007000d (WIN32: 13)
         CertUtil: Invalid data.

     I have also tried to add the cert from Chrome itself, without success. Can anyone help me with this problem?
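
     One way to double-check what the server actually presents, and what ended up in the exported file, is openssl; this is only a diagnostic sketch, with mail.mydomain.com standing in for the real host:

         openssl s_client -connect mail.mydomain.com:443 -showcerts </dev/null   # dump the deployed certificate chain
         openssl x509 -inform der -in exported.cer -noout -subject -issuer       # inspect the exported file (try -inform pem if this fails)

     If the second command cannot parse the file in either format, the export itself is the problem rather than certutil.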

    Read the article

  • Setting up Virtual Hosts with Apache on Windows 2008 server for multiple sites. Complicated setup,

    - by Roeland
     Hey guys! I am setting up Apache on my Windows 2008 server at home. It will serve two functions: Subversion hosting, to allow me and some others to manage company documents with version control, and local website hosting for web development. I will need to run several websites, since I generally work on more than one site at a time. Here's what I have done so far. I set up Subversion and Apache 2.2 using some walkthroughs. I changed the default port to 1337 (I'm a nerd). Using dyndns.com I created a domain to forward to my home IP, which is dynamic (company.gotdns.org). I then went into my DNS for company.com and added a record to point repo.company.com to company.gotdns.org. At this point people who need access to my file repository can get to it by going to repo.company.com/repo, which is good so far. My question comes with the next step: setting up virtual hosts with Apache. Ideally I would like my local websites to be viewable by some others in the company from their homes. So, say I am working on site1, I would like them to be able to view it by going to site1.roeland.bythepixel.com. At the same time, I would like site10.wouter.bythepixel.com to go to his local setup for site10. What I have done for this: I went into my DNS for company.com and added a record to point roeland.company.com to company.gotdns.org (which translates to my IP); I added code to my httpd-vhosts.conf (listed at the bottom); I added code to my hosts file (listed at the bottom). Hah, so of course this doesn't work as expected: going to site1.roeland.bythepixel.com doesn't bring up my test1 site. Could anyone point me to where I may be going wrong? Thanks!

     hosts:

         127.0.0.1 localhost
         127.0.0.1 sensenich.roeland.bythepixel.com
         ::1 localhost

     httpd-vhosts.conf:

         <VirtualHost *:80>
             ServerAdmin [email protected]
             DocumentRoot "F:/Current Projects/sensenich.com"
             ServerName sensenich.roeland.bythepixel.com
             ErrorLog "logs/sensenich.roeland.bythepixel.com-error.log"
             CustomLog "logs/sensenich.roeland.bythepixel.com-access.log" common
         </VirtualHost>
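
     For name-based virtual hosts on Apache 2.2, the usual missing pieces are a NameVirtualHost directive plus one <VirtualHost> block per site name. A minimal sketch, assuming the site1 files live in "F:/Current Projects/site1" (that path is made up here):

         NameVirtualHost *:80

         <VirtualHost *:80>
             ServerName site1.roeland.bythepixel.com
             DocumentRoot "F:/Current Projects/site1"
         </VirtualHost>

     The ServerName has to match the hostname typed into the browser exactly, and that hostname also needs a public DNS entry (for example a per-site CNAME or a wildcard pointing at company.gotdns.org); an entry in the server's own hosts file only affects lookups made on the server itself, not on the visitors' machines.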

    Read the article

  • Exception while bamboo is checking for changes in svn

    - by tangens
     While configuring a new test plan in Atlassian Bamboo, I get the following exception in the log:

         Recording error: Unable to detect changes : <projectname>
         Caused by: org.tmatesoft.svn.core.SVNException:
         svn: Unrecognized SSL message, plaintext connection?
         svn: OPTIONS request failed on <repo-path>

     I tried to access the SVN repository by hand and it worked (the URL looks like this: https://myrepo.org/path/to/repo). The SVN repository sits behind an Apache HTTP server. Both the Bamboo server and the SVN server are running Debian 4.0. Has anybody seen this error before and can give me a hint on how to fix it?
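
     "Unrecognized SSL message, plaintext connection?" usually means the client spoke TLS to a port that answered in plain HTTP, for example an https:// URL that ends up on port 80 via a proxy or redirect. A quick check from the Bamboo host, sketched with the hostname from the question:

         openssl s_client -connect myrepo.org:443 </dev/null   # should print a certificate, not an HTTP error page
         curl -v https://myrepo.org/path/to/repo               # watch for redirects to a plain-http URL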

    Read the article

  • Anyone else being hit by traffic on TCP port 11370?

    - by Jakub
     I've been watching my logs (Ubuntu 9.10 server) and, I don't know about any of you, but I am getting a ton of traffic from sources like Russia, Romania, etc. on port 11370 (my iptables rules log and drop it; I was just curious). Some googling revealed this: http://www.keysigning.org/sks/, which seems to use ports 11370 and 11371. Could that be the service they are scanning for (I don't run it)? ISC shows this: https://isc.incidents.org/port.html?port=11370. Just curious what you guys think and whether anyone has seen this before. If need be I can post my log here, but it's just a drop log of TCP port 11370 from various IPs. I thought it was strange as that's the ONLY port I seem to be repeatedly hit on (from the logs). I'm running on a Linode (VPS) if that matters to anyone.
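
     For reference, the kind of log-and-drop rule pair the question refers to looks roughly like this (a sketch only; rule order and any existing chain layout would change the details):

         iptables -A INPUT -p tcp --dport 11370 -j LOG --log-prefix "DROP 11370: " --log-level 4
         iptables -A INPUT -p tcp --dport 11370 -j DROP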

    Read the article

  • X-Window Modeline for high resolution modes on NEC LCD 22WV monitor

    - by Jakub Narebski
     What should an X Window System (X.Org) modeline look like, to be put in xorg.conf, for high-resolution modes (including the recommended 1680x1050 @ 50Hz) on a 22" NEC LCD 22WV monitor? X.Org correctly autodetects only 800x600 and lower-resolution SVGA modes, unfortunately. How can I generate a proper "Modeline" line for xorg.conf? Is the information included in NEC_Datasheet_LCD22WV-english.pdf enough (found on the NEC LCD 22WV product info page)? What tools are available to generate a proper modeline for an LCD monitor on Linux? MS Windows (MS Windows XP Home) correctly detects and uses the 1680x1050 resolution; can I somehow get the modeline information from MS Windows?
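
     Two standard command-line tools ship with X.Org for exactly this: cvt (CVT timings, usually the right choice for LCD panels) and gtf (GTF timings). A sketch, using 60 Hz since that is the typical refresh rate for a panel like this:

         cvt 1680 1050 60    # prints a Modeline line to paste into the Monitor section of xorg.conf
         gtf 1680 1050 60    # alternative timing formula; some monitors prefer one over the other

     Whichever line the tool prints goes into the Monitor section, and the mode name is then referenced from the Display subsection of the Screen section.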

    Read the article

  • Issue with user having a gmail account and Google Apps account using same email/username

    - by Joshua
     Greetings! We have a user in our organization who had been using her email address at our domain as her username for gmail.com. We recently moved our folks onto Google Apps, and have just moved our email server over to Google (IMAP/SMTP). I'm having all kinds of trouble, only with this user's account, with her sending and receiving email via the new Google mail server, and I'm wondering if it's because of her existing Gmail account. Her email address with us is xxx@domain.org, which is her login/user ID with us on our Google Apps site. She still has her Gmail identity tied to that same xxx@domain.org. She's OK with deleting her old Gmail account... however, if I do so, will it goof things up for her going forward with us on the Google Apps site? Will she be unable to receive email? Thanks! Joshua

    Read the article

  • Nginx access log shows authenticated user "admin"

    - by bearcat
     I came across a line in my Nginx access log:

         218.201.121.99 - admin [12/Dec/2012:18:33:18 +0800] "GET /manager/html HTTP/1.1" 444 0 "-" "-"

     Let me stress that there is only one record with this IP. Notice the authenticated user admin. After some googling, I was only able to find out that this field is the authenticated user (http://wiki.nginx.org/HttpCoreModule#.24remote_user), as authenticated by the Auth Basic module (http://wiki.nginx.org/HttpAuthBasicModule). However, nowhere in my site configuration do I use HTTP basic authentication. What is going on? How did it get there? Was the user authenticated?
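
     For context, that field comes from the default combined log format, which simply records whatever username the client supplied in its Authorization header; nginx logs it whether or not basic auth is actually configured, so nothing was authenticated. The stock definition is approximately:

         log_format combined '$remote_addr - $remote_user [$time_local] '
                             '"$request" $status $body_bytes_sent '
                             '"$http_referer" "$http_user_agent"';

     A scanner blindly sending admin credentials at /manager/html (a Tomcat path) therefore shows up as user admin, and the 444 status shows the request was simply dropped.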

    Read the article

  • sendmail: how can I restrict access to clients that only have a valid certificate?

    - by lxg
     I want to reject all connections that don't present a valid SSL/TLS certificate. First of all, is the access db file the correct one to be changing? I have already tried using the basic rule given in the documentation:

         CertIssuer:/C=US/ST=California/O=endmail.org/OU=private/CN=Darth+20Mail+20+28Cert+29/Email=darth+2Bcert@endmail.org RELAY

     This will obviously need a rule afterwards to filter and reject everything that doesn't present the cert. Does anyone have any pointers as to what syntax I should use? Wildcards? lxg
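
     Whatever the final access-map syntax ends up being, the issuer/subject strings have to match the client certificate exactly; one way to pull them out of a certificate for comparison (just a sketch, with client.pem standing in for whatever certificate the client presents):

         openssl x509 -in client.pem -noout -issuer -subject

     Spaces and parentheses are then encoded the same way as in the documentation example above (+20, +28, +29) before going into the map.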

    Read the article

  • Mikrotik and NAT/Routing issue

    - by arul
     I have a basic NAT/routing problem with a Mikrotik RB750 that I've been unable to solve over the past days. From our ISP we have 26 IP addresses: 10.10.10.192/27, with 10.10.10.193 being the gateway and 10.10.10.194 the first available IP. What I need is that everything connected to ether2 gets a public IP from the DHCP server, and everything connected to ether3 gets a local IP from another DHCP server (192.168.100.0/24). All clients should have internet access (I'll figure out bandwidth throttling later) and, optimally, just 'see' each other (all boxes are Win7; I guess this can ultimately be handled with VPN). Here is my setup: ether1 (10.10.10.194) is connected directly to the ISP; 20 clients are connected to ether2 (10.10.10.195), and another 20 to ether3 (10.10.10.196), both through the same 24-port switches. This setup doesn't work: all 20 clients on ether2 can access the internet, though all communication seems to come from 10.10.10.194 (is this due to the masquerade on ether1?), and ether3 can't access the internet at all. I think that I need to masquerade ether3, and SNAT/DNAT or NETMAP ether2, but that doesn't work either; I guess that I need to somehow 'wire' both ether2+3 to ether1.

     Address list:

         #   ADDRESS            NETWORK          INTERFACE
         0   ;;; public
             10.10.10.194/32    10.10.10.192     ether1-gateway
         1   ;;; inner DHCP
             192.168.100.0/24   192.168.100.0    ether3-private
         2   ;;; public
             10.10.10.195/32    10.10.10.192     ether2-pub
         3   ;;; public
             10.10.10.196/32    10.10.10.192     ether3-private

     NAT:

         0   ;;; ether3 nat
             chain=srcnat action=src-nat to-addresses=10.10.10.196 src-address=192.168.100.0/24 out-interface=ether3-private
         1   ;;; ether3 nat
             chain=dstnat action=dst-nat to-addresses=192.168.100.0/24 in-interface=ether3-private
         2   ;;; ether1 masquerade
             chain=srcnat action=masquerade to-addresses=10.10.10.194 out-interface=ether1-gateway

     Routes:

         #     DST-ADDRESS        PREF-SRC         GATEWAY          DISTANCE
         0 A S 0.0.0.0/0                           ether1-gateway   1
         2 A S 10.10.10.192/27    10.10.10.195     ether2-pub       1
         3 ADC 10.10.10.192/32    10.10.10.195     ether2-pub       0
                                                   ether1-gateway
                                                   ether3-private
         4 ADC 192.168.100.0/24   192.168.100.0    ether3-private   0

     IP pools:

         #   NAME           RANGES
         0   public-pool    10.10.10.201-10.10.10.220
         1   private-pool   192.168.100.2-192.168.100.254

     DHCP configs:

         #   NAME           INTERFACE        RELAY   ADDRESS-POOL    LEASE-TIME   ADD-ARP
         0   public-dhcp    ether2-pub               public-pool     3d
         1   private-dhcp   ether3-private           private-pool    3d

     Thanks!
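
     For what it's worth, the usual pattern for the private side is to masquerade its source range out of the uplink interface rather than NAT-ing on ether3 itself; a sketch in RouterOS syntax, assuming the interface names from the question:

         /ip firewall nat add chain=srcnat src-address=192.168.100.0/24 out-interface=ether1-gateway action=masquerade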

    Read the article

  • PHP-FPM for nginx on debian

    - by Jelko
     What is the preferred/recommended way of installing php-fpm on Debian for use with nginx? I read about a "php5-fpm" package everywhere, but it's not available in the official Debian repos any more. The PHP-FPM website (http://php-fpm.org/download/) says that FPM is now included in the PHP core. Is it enough to install "php5-common" then? Where are the config files, though? Other people recommend installing the current versions of PHP and php-fpm from dotdeb.org; the versions provided there are generally more up to date. But is it secure? Is this a good repo to use in a production environment? I would appreciate any advice.
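
     For reference, the dotdeb route mentioned in the question usually amounts to something like the following (a sketch only; the exact suite name and key location depend on the Debian release in use):

         echo 'deb http://packages.dotdeb.org stable all' >> /etc/apt/sources.list
         wget http://www.dotdeb.org/dotdeb.gpg -O - | apt-key add -
         apt-get update && apt-get install php5-fpm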

    Read the article

  • Microsoft Remote Desktop Bandwidth Usage

    - by Salman A
     I am wondering how much bandwidth, in terms of bytes sent/received, is consumed by a typical Remote Desktop session. I need to know this because our ISP enforces a cap on monthly bandwidth usage (i.e. the total amount of data in GB that can be sent or received in a month). So I'm just wondering how many KB or MB are transferred per hour in an average RDP session.

    Read the article

  • How to install ImageMagick on Windows 7

    - by learner
     How do I install ImageMagick on Windows 7? I followed these instructions for installing Imagick on Windows XP (PHP 5.2.x):

     1.) Download and install ImageMagick-6.5.8-7-Q16-windows-dll.exe from imagemagick.org/download/binaries/
     2.) Download php_imagick_dyn-Q16.dll from valokuva.org/outside-blog-content/imagick-windows-builds/080709/, copy the dll to the [PHP]/extension dir and rename it to php_imagick.dll
     3.) Edit your php.ini file and add the new extension: "extension=php_imagick.dll"
     4.) Save the ini file and restart the Apache server (if necessary, restart Windows)
     5.) phpinfo() should show imagick enabled.

     After that I ran a sample script, but it's not working; it shows an "Imagick class missing" error. Please help me to install Imagick. :-(
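
     A quick way to see whether PHP actually loaded the extension (rather than just ImageMagick being installed) is a small check like this, run through the same Apache/PHP that serves the failing script:

         <?php
         // true only if php_imagick.dll was loaded by this PHP
         var_dump(extension_loaded('imagick'));
         var_dump(class_exists('Imagick'));

     If both come back false even though php.ini was edited, the edited php.ini is probably not the one Apache loads (phpinfo() shows the "Loaded Configuration File"), or the dll was built for a different PHP version or thread-safety setting than the one installed.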

    Read the article

  • Spam issues while using Postfix as a two-way relay

    - by BenGC
     I want to use a Postfix box to do two things: relay mail from any host on the internet addressed to one of my domains to my Zimbra server, and relay mail from my Zimbra server to any address on the internet. To try to accomplish this I have configured Postfix thusly:

         mynetworks = 127.0.0.0/8, zimbra_ip/32
         myorigin = zimbra_server
         mydestination = localhost, zimbra_server
         relay_domains = example.com example.org
         transport_maps = hash:/etc/postfix/transport_map
         local_transport = error:no mailboxes on this host

     transport_map looks like this:

         example.com smtp:[zimbra_server]
         example.org smtp:[zimbra_server]

     Now, this works and passes the open-relay tests. However, I am seeing in the maillog that the server is relaying spam that has a From: address of <> to domains that are not mine. How do I stop this behavior?
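
     As a point of reference, the knob that normally decides who may relay to foreign domains is smtpd_recipient_restrictions; a minimal sketch (not a drop-in fix for this particular setup) looks like:

         smtpd_recipient_restrictions =
             permit_mynetworks,
             reject_unauth_destination

     With that in place, only hosts in mynetworks (here, localhost and the Zimbra box) may send to outside domains; everyone else is limited to the domains listed in relay_domains/mydestination.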

    Read the article

  • Managing MS SQL Server 2005/8 with third-party tools

    - by Craig
     I am trying to access a SQL Server database housed at an ISP. Normally one would simply install an Express edition of SQL Server and use the Management Studio that comes with it. Yeah, not me! Are there any third-party tools that will allow me to manage my database? Lightweight ones would be best, but I'm not that picky. :)

    Read the article

  • IPv6 tunnel broker setup

    - by fred basset
     I'm working on a solution to allow remote Linux nodes that are behind firewalls to be reachable for SSH and a web server. Can anyone suggest an IPv6 tunnel scheme that would work through NAT firewalls? And what software would be needed on the remote nodes and the central server? Also, I do not believe the ISP on either side does native IPv6. A solution where we could have static IPv6 addresses on the remote Linux nodes would be ideal. Thank you, Fred
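
     To illustrate the shape of a tunnel-broker setup on a node, this is roughly what a 6in4 tunnel configured with iproute2 looks like; all addresses here are placeholders, and plain 6in4 only works behind NAT if the gateway forwards protocol 41, which is why brokers also offer NAT-friendly transports such as AYIYA:

         ip tunnel add broker6 mode sit remote 203.0.113.1 local 192.168.1.50 ttl 255
         ip link set broker6 up
         ip addr add 2001:db8:1234::2/64 dev broker6
         ip route add ::/0 dev broker6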

    Read the article

  • Why does my allow_url_include not work?

    - by autthapone
     Server information: CentOS 5.7 (Final), PHP 5.2.6, Apache/2.2.3. I edited /etc/php.ini and changed it to allow_url_include = On, then restarted Apache. I check the configuration in a phpinfo() page, but allow_url_include has not changed; it is still Off. Help me, please. My setting: http://postimage.org/image/aliuyb9a3/ My phpinfo: http://postimage.org/image/tlsu18b1h/ I can't find another php.ini file. upload_max_filesize also does not change :-( but max_execution_time and memory_limit do change. Everyone, if this issue can't be solved, please guide me on repairing/reinstalling PHP on CentOS.
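
     One thing worth checking is which ini files PHP is actually reading; a quick sketch from the command line (mod_php inside Apache reports the same paths in the phpinfo() output under "Loaded Configuration File" and "Additional .ini files parsed"):

         php --ini                                    # list the loaded php.ini and any extra .ini files
         php -i | grep -E 'allow_url_include|Loaded Configuration'

     If a file in /etc/php.d/ sets the directive again, or the Apache config overrides it with php_admin_flag, that value wins over the edit in /etc/php.ini.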

    Read the article

  • jboss 4: enable UsersRolesLoginModule, where must users.properties files be placed?

    - by golemwashere
     I have an application (CQ5) that requires enabling unauthenticatedIdentity in jbossdir/conf/login-config.xml. I used:

         <authentication>
             <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
                 <module-option name="unauthenticatedIdentity">nobody</module-option>
             </login-module>
         </authentication>

     Then I tried to copy jbossdir/conf/props/jmx-console-users.properties and jmx-console-roles.properties to users.properties and roles.properties (same dir). I still get this error:

         ERROR [org.jboss.security.auth.spi.UsersRolesLoginModule] Failed to load users/passwords/role files
         java.io.IOException: No properties file: users.properties or defaults: defaultUsers.properties found

     Where should I put those files?
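
     For reference, UsersRolesLoginModule looks the properties files up on the classpath unless told otherwise; it accepts usersProperties and rolesProperties options, so an explicit path can be sketched like this (the props/cq5-* file names are made up for illustration):

         <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
             <module-option name="unauthenticatedIdentity">nobody</module-option>
             <module-option name="usersProperties">props/cq5-users.properties</module-option>
             <module-option name="rolesProperties">props/cq5-roles.properties</module-option>
         </login-module>

     In JBoss 4 the server's conf/ directory itself is on the classpath, so plain users.properties and roles.properties placed directly in jbossdir/conf (rather than conf/props) should also be found.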

    Read the article

  • Choosing my first Domain Registrar?

    - by user36914
     This will be the first domain I've ever registered, so I'm at a loss as to what to look for. I definitely don't want to go with GoDaddy. Here are my requirements:

     - Must have unlimited email forwards for my domain
     - Easy to transfer away if I choose
     - Must not be one of those shady registrars that will try to auction your domain at the end
     - Ability to create subdomains
     - Private domain registration
     - Lets me use the dynamic IP from my ISP (cable) if I want to, so hopefully they have some kind of client program that detects IP changes and updates accordingly

     So I've looked at a variety of registrars so far. The three left were really NameCheap, DreamHost, and DomainMonster. I have heard good things about DreamHost, but I think it's off the list because they don't give you any information about the features you get when you register your domain with them. They have a "What's included" button on the page, but it mainly lists the features that come with hosting, not registration. DomainMonster looks pretty cool, but I don't see anything about subdomains. Also, I would assume they don't have a system for dynamic IP address updating, so you would have to constantly check whether your ISP-assigned IP has changed and update it manually. NameCheap also looks nice. There are two things I really like about them: right on their feature page they list "Free Dynamic DNS With Client", which is pretty cool, and they also have a free SSL certificate for the first year. I haven't messed around much with certificates, but this would definitely be something I would use. The only minus I can see is that you only get free private WHOIS for the first year; after that it's $2.99, which isn't that big of a deal. I'm leaning towards NameCheap now. Is this a good choice? Is there anything else I should be looking at?

    Read the article

  • Access Home Network Server via External Address (DSL vs Cable)

    - by Dominic Barnes
     For the last few months, I've been using a server on my home network for basic backups and hosting some small websites. Up until this past week, I was using Comcast (cable) as my ISP; now that I've moved into an apartment, I'm using AT&T (DSL). I've set up dynamic DNS and I can verify it works externally. However, I can't seem to access the public address from within the local network. Is there something DSL does differently from cable that makes this frustration possible?
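
     If the symptom is only that the public name fails from inside the LAN, the usual culprit is that the DSL gateway does not do NAT loopback (hairpinning), which many cable-provided routers do. A common workaround, sketched with made-up values, is to point the dynamic DNS name at the server's LAN address on the internal machines:

         # /etc/hosts on machines inside the LAN (192.168.1.10 = the server's LAN IP; the name is illustrative)
         192.168.1.10    myserver.dyndns.org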

    Read the article

  • Connecting to VNC Server behind Router

    - by waiwai933
     Right now, I have a Mac behind a Time Capsule (equivalent to an AirPort Extreme), which is in turn behind a router installed by my ISP. In System Preferences, I've enabled Remote Management, but the IP address it gives me is a local IP address. How can I set up my VNC server (my Mac) or my VNC client so that they can connect to each other? N.B. my Mac is running Leopard, if that makes a difference.
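
     One way around double NAT, if some machine outside the network is reachable over SSH, is to tunnel VNC out rather than forward ports on both routers; a sketch only, with the hostname and account made up:

         # run on the Mac: publish its screen-sharing port (5900) on a reachable host
         ssh -N -R 5901:localhost:5900 user@some-reachable-host.example.org
         # then connect a VNC viewer on that host (or via a further -L tunnel) to localhost:5901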

    Read the article

  • Multiple PPPoE connections to multiple computers via 1 modem?

    - by Koong
     I have a TP-Link wireless router (TP-WR542G) and a Zyxel Prestige 650M-61 modem. I have two separate PPPoE accounts from my ISP, and I'm thinking of using both accounts simultaneously through my one modem: I want one account connected to the wireless router and the other connected to my computer. Is that possible? How? TQ

    Read the article

  • Set up linux box for secure local hosting a-z

    - by microchasm
     I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

     The details:

         CentOS 5.5 x86_64
         httpd: Apache/2.2.3
         mysql: 5.0.77 (to be upgraded)
         php: 5.1 (to be upgraded)

     The requirements:

         SECURITY!!
         Secure file transfer
         Secure client access (SSL certs and CA)
         Secure data storage
         Virtualhosts/multiple subdomains
         Local email would be nice, but not critical

     The steps:

     Download the latest CentOS DVD iso (a torrent worked great for me).

     Install CentOS: While going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

     Basic config: Set up users, networking/IP address etc. Yum update/upgrade.

     Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add the IUS repository to our package manager:

         cd /tmp
         wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
         rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
         wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
         rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
         yum list | grep -w \.ius\.    # list all the packages in the IUS repository; use this to find the PHP/MySQL versions and libraries you want to install

     Remove the old version of PHP and install the newer version from IUS:

         rpm -qa | grep php                                   # list all of the installed php packages we want to remove
         yum shell                                            # open an interactive yum shell
         remove php-common php-mysql php-cli                  # remove installed PHP components
         install php53 php53-mysql php53-cli php53-common     # add the packages you want
         transaction solve                                    # important!! checks for dependencies
         transaction run                                      # important!! does the actual installation of packages
         [control+d]                                          # exit yum shell
         php -v
         PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

     Upgrade MySQL from the IUS repository:

         /etc/init.d/mysqld stop
         rpm -qa | grep mysql                                 # see the installed mysql packages
         yum shell
         remove mysql mysql-server                            # remove installed MySQL components
         install mysql51 mysql51-server mysql51-devel
         transaction solve                                    # important!! checks for dependencies
         transaction run                                      # important!! does the actual installation of packages
         [control+d]                                          # exit yum shell
         service mysqld start
         mysql -v
         Server version: 5.1.42-ius Distributed by The IUS Community Project

     Upgrade instructions courtesy of the IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide

     Install rssh (restricted shell) to provide scp and sftp access without allowing ssh login:

         cd /tmp
         wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm
         rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm
         useradd -m -d /home/dev -s /usr/bin/rssh dev
         passwd dev

     Edit /etc/rssh.conf to grant SFTP access to rssh users:

         vi /etc/rssh.conf

     Uncomment or add:

         allowscp
         allowsftp

     This allows me to connect to the machine via the SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html

     Set up virtual interfaces:

         ifconfig eth1:1 192.168.1.3 up               # start up the virtual interface
         cd /etc/sysconfig/network-scripts/
         cp ifcfg-eth1 ifcfg-eth1:1                   # copy the default script and match its name to our virtual interface
         vi ifcfg-eth1:1                              # modify the eth1:1 script

     Modify ifcfg-eth1:1 so it looks like this:

         DEVICE=eth1:1
         IPADDR=192.168.1.3
         NETMASK=255.255.255.0
         NETWORK=192.168.1.0
         ONBOOT=yes
         NAME=eth1:1

     Add more virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots or the network starts/restarts.

         service network restart
         Shutting down interface eth0:  [ OK ]
         Shutting down interface eth1:  [ OK ]
         Shutting down loopback interface:  [ OK ]
         Bringing up loopback interface:  [ OK ]
         Bringing up interface eth0:  [ OK ]
         Bringing up interface eth1:  [ OK ]

         ping 192.168.1.3
         64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms

     Virtualhosts: In the rssh section above I added a user to use for SFTP. In this user's home directory, I created a folder called 'https'. This is where the documents for this site will live, so I need to add a virtualhost that will point to it. I will use the above virtual interface for this site (herein called dev.site.local).

         vi /etc/httpd/conf/httpd.conf

     Add the following to the end of httpd.conf:

         <VirtualHost 192.168.1.3:80>
             ServerAdmin [email protected]
             DocumentRoot /home/dev/https
             ServerName dev.site.local
             ErrorLog /home/dev/logs/error_log
             TransferLog /home/dev/logs/access_log
         </VirtualHost>

     I put a dummy index.html file in the https directory just to check everything out. I tried browsing to it, and was met with permission denied errors. The logs only gave an obscure reference to what was going on:

         [Mon May 17 14:57:11 2010] [error] [client 192.168.1.100] (13)Permission denied: access to /index.html denied

     I tried chmod 777 et al., but to no avail. It turns out I needed to chmod +x the https directory and its parent directories:

         chmod +x /home
         chmod +x /home/dev
         chmod +x /home/dev/https

     This solved that problem.

     DNS: I'm handling DNS via our local Windows Server 2003 box. However, the CentOS documentation for BIND can be found here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-bind.html

     SSL: To get SSL working, I changed the following in httpd.conf:

         NameVirtualHost 192.168.1.3:443     # make sure this line is in httpd.conf

         <VirtualHost 192.168.1.3:443>       # change the port to 443
             ServerAdmin [email protected]
             DocumentRoot /home/dev/https
             ServerName dev.site.local
             ErrorLog /home/dev/logs/error_log
             TransferLog /home/dev/logs/access_log
         </VirtualHost>

     Unfortunately, I kept getting (Error code: ssl_error_rx_record_too_long) errors when trying to access a page with SSL. As JamesHannah gracefully pointed out below, I had not set up the locations of the certs in httpd.conf, and thus the page itself was being thrown at the browser as the cert, making the browser balk. So first, I needed to set up a CA and make certificate files. I found a great (if old) walkthrough on the process here: http://www.debian-administration.org/articles/284. Here are the relevant steps I took from that article:

         mkdir /home/CA
         cd /home/CA/
         mkdir newcerts private
         echo '01' > serial
         touch index.txt          # this and the above command are for the database that will keep track of certs

     Create an openssl.cnf file in the /home/CA/ dir and edit it per the walkthrough linked above. (For reference, my finished openssl.cnf file looked like this: http://pastebin.com/raw.php?i=hnZDij4T)

         openssl req -new -x509 -extensions v3_ca -keyout private/cakey.pem -out cacert.pem -days 3650 -config ./openssl.cnf
         # this creates the cacert.pem which gets distributed and imported to the browser(s)

     Modified openssl.cnf again per the walkthrough instructions.

         openssl req -new -nodes -out dev.req.pem -config ./openssl.cnf
         # generates the certificate request, and key.pem which I renamed dev.key.pem

     Modified openssl.cnf again per the walkthrough instructions.

         openssl ca -out dev.cert.pem -config ./openssl.cnf -infiles dev.req.pem
         # create and sign the certificate

         cp dev.cert.pem /home/dev/certs/cert.pem
         cp dev.key.pem /home/dev/certs/key.pem

     I updated httpd.conf to reflect the certs and turn SSLEngine on:

         NameVirtualHost 192.168.1.3:443

         <VirtualHost 192.168.1.3:443>
             ServerAdmin [email protected]
             DocumentRoot /home/dev/https
             SSLEngine on
             SSLCertificateFile /home/dev/certs/cert.pem
             SSLCertificateKeyFile /home/dev/certs/key.pem
             ServerName dev.site.local
             ErrorLog /home/dev/logs/error_log
             TransferLog /home/dev/logs/access_log
         </VirtualHost>

     Put the CA cert.pem in a web-accessible place, and downloaded/imported it into my browser. Now I can visit https://dev.site.local with no errors or warnings. And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure SSL email would be appreciated.

    Read the article

  • wget not converting links

    - by acrosman
     I am trying to mirror a fairly large site (20,000+ pages) prior to a major overhaul. Basically, I need a backup before cutting over to the new one in case we forgot something we need (we'll have about 1,000 pages at launch). The site runs on a CMS that I cannot easily extract usable data from, so I'm trying to make the copy with wget. My problem is that wget does not appear to actually be converting links, despite the presence of --convert-links or -k in the command. I've tried a couple of different combinations of flags, but I haven't been able to get the output I need. The most recent failed attempt was:

         nohup wget --mirror -k -l10 -PafscSnapshot --html-extension -R *calendar* -o wget.log http://www.example.org &

     I've also included --backup-converted, and --convert-links instead of -k (not that it should have mattered). I've done it with and without -P and -l, and again they shouldn't matter. The result is files that still have links like:

         http://www.example.org//ht/d/sp/i/17770
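
     Two things worth keeping in mind with this invocation (a hedged sketch, not a guaranteed fix): link conversion only runs after the entire retrieval finishes, so a killed or still-running mirror leaves every link untouched, and an unquoted -R *calendar* may be expanded by the shell before wget ever sees it:

         nohup wget --mirror --convert-links --backup-converted --html-extension \
             -l 10 -P afscSnapshot -R '*calendar*' -o wget.log http://www.example.org &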

    Read the article
