Search Results

  • How can a CentOS 6 guest running in VirtualBox be configured as a LAMP server that can be accessed from the Windows host?

    - by jtt89
    I was able to connect CentOS 6 on VirtualBox to Windows (I can ping in both directions) using a Host-only Adapter (for the connection between the two) and a NAT Adapter (so the Linux guest can reach the Internet). I want to set up httpd, mysql and vsftpd servers, and in the end easily connect to httpd from a Windows-based browser and to the FTP server with a Windows-based client as well. I would also like to have access through SSH. I have a general idea of the steps involved, but there is also some configuration I am not sure about at this point. Let's say I follow these steps:

      yum install httpd
      yum install php php-pear php-mysql
      yum install mysql-server
      mysql_secure_installation
      yum install vsftpd
      yum install mod_ssl

    Technically I have everything installed, but what are the next steps I need to take (from the networking point of view, so to speak) to get it all working? I know I need to configure at least Apache and the FTP server, but I am not sure how it is going to work; for example, where will I be uploading the sites (I know this can vary), and how will I know what address to use in a browser if I want to go to website x, y or z on that installation, etc. This sounds like I need some kind of DNS setup, and I am kind of stuck at this point. If somebody could give me a general outline of the things that need to be done, that would be great (I have been looking at a lot of websites and I know about /etc/sysconfig/network, httpd.conf - not too much about it on Apache's site - hostname, hostname -f, etc., but it is kind of hard to piece it all together at this point). I will be looking at books as well, but they do not always reflect the setup that I have (VirtualBox). Thank you.
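
    In case it helps, a minimal sketch of the networking side on a stock CentOS 6 guest (the service names are the standard ones; the eth1/host-only assignment and the addresses in the comments are assumptions): start the services, open the firewall, and point the Windows browser/FTP/SSH client at the guest's host-only IP. No DNS server is strictly required; for name-based sites an entry in the Windows hosts file is enough.

      # run as root on the CentOS 6 guest - a sketch, adjust to your setup
      service httpd start  && chkconfig httpd on
      service mysqld start && chkconfig mysqld on
      service vsftpd start && chkconfig vsftpd on

      # open HTTP, FTP and SSH in the default CentOS 6 firewall
      iptables -I INPUT -p tcp --dport 80 -j ACCEPT
      iptables -I INPUT -p tcp --dport 21 -j ACCEPT
      iptables -I INPUT -p tcp --dport 22 -j ACCEPT
      service iptables save

      # the address to use from Windows is the guest's host-only IP
      ip addr show eth1   # assuming eth1 is the host-only adapter

    Sites would live under the DocumentRoot (/var/www/html by default); to simulate several sites, hypothetical names like site1.local can be mapped to the host-only IP in C:\Windows\System32\drivers\etc\hosts on the Windows side and matched by ServerName entries in Apache name-based virtual hosts.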

    Read the article

  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
    I have a MacBook Pro i5. My understanding is that by default it should be able to serve PHP 5. I have uncommented the relevant line in /etc/apache2/httpd.conf:

      LoadModule php5_module libexec/apache2/libphp5.so

    I have restarted Apache with sudo apachectl -k restart, and when I try to access a file with a .php extension, Apache prompts me to download the file - i.e. instead of processing the PHP and sending me HTML, it thinks I want to download the file. When I look in the Apache error log I see this:

      [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations

    so it looks like PHP 5 is loading properly. I'd like to know either: how do I fix this? Or how do I reinstall Apache 2 so that it's like I just installed the OS? Thanks in advance.

    Update @Zayne - the end of my httpd.conf has Include /private/etc/apache2/other/*.conf and I have a file /etc/apache2/other/php.conf with the contents:

      <IfModule php5_module>
        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps
        <IfModule dir_module>
          DirectoryIndex index.html index.php
        </IfModule>
      </IfModule>

    @Zayne I've already copied php.ini.default to php.ini in the same folder. When I run sudo apachectl configtest I get:

      /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument
      httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName
      Syntax OK

    Furthermore, I decided to try apachectl -M, which shows all loaded modules. Most importantly, in the list of loaded modules I got:

      Loaded Modules: php5_module (shared)

    Since the module is being loaded, it seems the issue has more to do with making Apache use the PHP engine to process the .php files... so something wrong with the IfModule directive?
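
    A quick way to narrow it down, as a sketch (the info.php name and the default OS X DocumentRoot below are assumptions): confirm the module is loaded by the running server, then request a test page directly and check whether the response comes back as text/html.

      # is the php5 module loaded by the running Apache?
      apachectl -M | grep php5
      # drop a test page into the default document root (path assumed)
      echo '<?php phpinfo();' | sudo tee /Library/WebServer/Documents/info.php
      # PHP is handling the request if the Content-Type header is text/html
      curl -I http://localhost/info.php

    If the test page works but the original site still offers a download, the AddType/DirectoryIndex lines are probably being overridden for that particular directory or virtual host.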

    Read the article

  • Postfix installation error on Ubuntu

    - by kgpdeveloper
    How do I fix this error on Ubuntu 10.04?

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      postfix is already the newest version.
      The following packages were automatically installed and are no longer required:
        libaprutil1-dbd-sqlite3 libcap2 apache2.2-bin libapr1 libaprutil1-ldap libaprutil1 php5-common
      Use 'apt-get autoremove' to remove them.
      0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
      1 not fully installed or removed.
      After this operation, 0B of additional disk space will be used.
      Setting up postfix (2.7.0-1) ...
      Postfix configuration was not changed. If you need to make changes, edit /etc/postfix/main.cf (and others) as needed.
      To view Postfix configuration values, see postconf(1).
      After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.
      Running newaliases
      newaliases: warning: valid_hostname: numeric hostname: 202002
      newaliases: fatal: file /etc/postfix/main.cf: parameter myhostname: bad parameter value: 202002
      dpkg: error processing postfix (--configure):
        subprocess installed post-installation script returned error exit status 75
      Processing triggers for libc-bin ...
      ldconfig deferred processing now taking place
      Errors were encountered while processing:
        postfix
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    Even if I reboot, the same error shows up. Thanks for the help.
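
    The step that actually fails is newaliases refusing the purely numeric myhostname value (202002), so dpkg can never finish configuring the package. A minimal fix sketch, with mail.example.com standing in for whatever FQDN the box should really have:

      # give the box a non-numeric fully qualified hostname (placeholder below)
      sudo postconf -e 'myhostname = mail.example.com'
      echo 'mail.example.com' | sudo tee /etc/hostname
      sudo hostname mail.example.com
      # let dpkg finish the interrupted postfix configuration
      sudo dpkg --configure -a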

    Read the article

  • Ubuntu 12.04 Server - eth0 1Gbps NIC eth1 10Gbps NIC - all traffic using eth0?

    - by James
    Ubuntu Server 12.04.1 x64. Primary role is an NFS fileserver for Mac OSX clients.

    Hardware:

      Eth0: 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
      Eth1: 07:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC

    Config: ifconfig

      eth0  Link encap:Ethernet  HWaddr <MACADDRESS>
            inet addr:192.168.0.150  Bcast:192.168.0.255  Mask:255.255.255.0
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:460042020 errors:0 dropped:148 overruns:0 frame:0
            TX packets:231906707 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:581431978417 (581.4 GB)  TX bytes:259057368617 (259.0 GB)
            Interrupt:20 Memory:f7d00000-f7d20000

      eth1  Link encap:Ethernet  HWaddr <MACADDRESS>
            inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:6832208 errors:0 dropped:2 overruns:0 frame:0
            TX packets:376 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:513826442 (513.8 MB)  TX bytes:33688 (33.6 KB)
            Interrupt:59

      lo    Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:507 errors:0 dropped:0 overruns:0 frame:0
            TX packets:507 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:45057 (45.0 KB)  TX bytes:45057 (45.0 KB)

    nano /etc/network/interfaces

      # The loopback network interface
      auto lo
      iface lo inet loopback

      # The primary network interface
      auto eth0
      iface eth0 inet static
          address 192.168.0.150
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
          dns-nameservers 192.168.0.1 8.8.8.8

      # second network interface
      auto eth1
      iface eth1 inet static
          address 192.168.0.100
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
          dns-nameservers 192.168.0.1 8.8.8.8

    Currently I am using on the OSX clients: nfs://192.168.0.100/Volumes/Storage to mount the NFS share. My problem is why would all the data (and I have checked using various monitoring tools - bmon, iftop, glances, etc.) be going over the slower connection? Also, after configuring /etc/network/interfaces with the above setup I always get an error message at bootup, something about waiting for network configuration. Are these connected?
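
    For what it's worth, both NICs sit on the same 192.168.0.0/24 subnet with the same gateway, so the kernel is free to answer NFS traffic from whichever interface its routing table lists first (typically eth0 here), regardless of which address the clients mounted; the duplicate gateway/auto stanzas are also a common cause of the "waiting for network configuration" delay at boot. A quick way to see the routing decision, plus a hedged cleanup sketch:

      # which interface does the kernel pick for a client on this subnet?
      ip route show
      ip route get 192.168.0.55      # 192.168.0.55 is a placeholder client address

      # one common cleanup (sketch): keep a single default gateway on eth0 and move
      # the 10G link to its own subnet so storage traffic cannot fall back to eth0:
      #   iface eth1 inet static
      #       address 192.168.10.100
      #       netmask 255.255.255.0
      # clients would then mount nfs://192.168.10.100/Volumes/Storage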

    Read the article

  • What causes this logrotate behavior in Puppet?

    - by ujjain
    After running logrotate, Puppet starts writing its logs into /var/log/puppet/masterhttp.log-20130616. Why doesn't it keep logging to /var/log/puppet/masterhttp.log? The normal behavior is to rename the original log file and start writing into a fresh, empty log file, keeping the renamed file as an archive.

      [root@puppetmaster puppet]# ls -al
      total 97520
      drwxr-x---.  2 puppet puppet     4096 Jun 16 03:24 .
      drwxr-xr-x. 12 root   root       4096 Jul  1 09:11 ..
      -rw-r--r--.  1 puppet puppet        0 Jun 16 03:24 masterhttp.log
      -rw-rw----.  1 puppet puppet 99847187 Jul  1 09:19 masterhttp.log-20130616

      [root@puppetmaster init.d]# cat /etc/logrotate.d/puppet
      /var/log/puppet/*log {
          missingok
          notifempty
          create 0644 puppet puppet
          sharedscripts
          postrotate
            pkill -USR2 -u puppet -f /usr/sbin/puppetmasterd || true
            [ -e /etc/init.d/puppet ] && /etc/init.d/puppet reload > /dev/null 2>&1 || true
          endscript
      }
      [root@puppetmaster init.d]#

    How can I make Puppet log to /var/log/puppet/masterhttp.log and not to /var/log/puppet/masterhttp.log-20130616? Even restarting puppet doesn't make it log to /var/log/puppet/masterhttp.log instead of /var/log/puppet/masterhttp.log-20130616.
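
    This looks like the daemon holding on to its original file descriptor: logrotate renames the file, but the process never reopens masterhttp.log, so it keeps appending to the renamed inode (which also suggests the postrotate signal/reload is not reaching the right process). If signalling the master cannot be made to work, a hedged workaround is logrotate's copytruncate, which copies and then truncates in place so the open descriptor stays valid, at the cost of possibly losing a few lines during the copy:

      # /etc/logrotate.d/puppet - sketch using copytruncate instead of create
      /var/log/puppet/*log {
          missingok
          notifempty
          copytruncate
          sharedscripts
      }
      # force a rotation to test without waiting for cron
      logrotate -f /etc/logrotate.d/puppet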

    Read the article

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there are only 8), all identical hardware. What I'd like to do is install Windows 7 (64-bit), join the domain etc., install a bunch of other software, and then clone that drive to multiple other machines. I'd also like to be able to use it as a backup image, so the machine can be restored back to that image at some future date. I understand this involves at least sysprep, but I am confused after reading some tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago, and don't remember it being this bad. Surely things have improved in a decade? There are also some products that involve having network servers running deployment software, network boot, etc. - this is way more than I want to set up. My systems are all identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like to find some off-the-shelf product that I can run after I get the machine set up, that will spit out a bootable DVD I can run on all the other systems, which will boot up, ask for a computer name, join it to the domain, and that's it. Does such a product exist?

    Read the article

  • Wifi network undetectable on a Lenovo G470

    - by Rex
    I have a WPA2-secured network with a visible SSID at home that works perfectly fine with a Dell laptop, an HP netbook and sundry mobile phones. When I try connecting my sister's Lenovo G470, it refuses to detect the wifi network no matter what, but it does show the neighbors' networks. The Lenovo also works correctly at her office. Both laptops run Windows 7. I have already tried/checked the following:

      - Manually configure the wifi network settings (copied over from the Dell)
      - Ensure there is no MAC address filtering on the router
      - Ensure the router's DHCP server is not running out of addresses to assign (I have set it to allocate up to 10)
      - Reboot laptop, router, etc.

    Is this a known problem, and is there anything else one could try?

    Update - The problematic Lenovo uses Windows 7 Home Basic, while the Dell that works uses Home Premium and the HP netbook uses Starter edition - if that makes any difference.

    Further update - It is able to connect if I reboot into safe mode with networking. However, in 'normal' mode it shows the network sporadically, and then says there was an error connecting to it. All the network parameters, password, encryption, etc. are EXACTLY the same as they are on the Dell.

    Read the article

  • openssl client authentication error: tlsv1 alert unknown ca: ... SSL alert number 48

    - by JoJoeDad
    I've generated a certificate using openssl and placed it on the client's machine, but when I try to connect to my server using that certificate, I get the error mentioned in the subject line back from my server. Here's what I've done.

    1) I do a test connect using openssl to see what the acceptable client certificate CA names are for my server. I issue this command from my client machine to my server:

      openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -prexit

    and part of what I get back is as follows:

      Acceptable client certificate CA names
      /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/[email protected]
      /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/[email protected]

    2) Here is what is in the Apache configuration file on the server regarding SSL client authentication:

      SSLCACertificatePath /etc/apache2/certs
      SSLVerifyClient require
      SSLVerifyDepth 10

    3) I generated a self-signed client certificate called "client.pem" using mypos.pem and mypos.key, so when I run this command:

      openssl x509 -in client.pem -noout -issuer -subject -serial

    here is what is returned:

      issuer= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=myupload.mysite.net/[email protected]
      subject= /C=US/ST=Colorado/O=Inteliware/OU=Denver Office/CN=mlR::mlR/[email protected]
      serial=0E

    (please note that mypos.pem is in /etc/apache2/certs/ and mypos.key is saved in /etc/apache2/certs/private/)

    4) I put client.pem on the client machine, and on the client machine I run the following command:

      openssl s_client -connect myupload.mysite.net:443/cgi-bin/posupload.cgi -status -cert client.pem

    and I get this error:

      CONNECTED(00000003)
      OCSP response: no response sent
      depth=1 /C=US/ST=Colorado/L=England/O=Inteliware/OU=Denver Office/CN=Tim Drake/[email protected]
      verify error:num=19:self signed certificate in certificate chain
      verify return:0
      574:error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s3_pkt.c:1102:SSL alert number 48
      574:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-47/src/ssl/s23_lib.c:182:

    I'm really stumped as to what I've done wrong. I've searched quite a bit on this error and what I found is that people are saying the issuing CA of the client's certificate is not trusted by the server, yet when I look at the issuer of my client certificate, it matches one of the accepted CAs returned by my server. Can anyone help, please? Thank you in advance.
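
    One hedged thing to verify: with SSLCACertificatePath, Apache/mod_ssl locates CA certificates through hash-named symlinks in that directory, so the certificate that actually signed client.pem has to be present there and the hash links have to exist. A quick check sketch using the file names mentioned above:

      # does client.pem really chain to the CA Apache is configured to trust?
      openssl verify -CAfile /etc/apache2/certs/mypos.pem client.pem
      # (re)build the hash symlinks that SSLCACertificatePath relies on
      c_rehash /etc/apache2/certs
      ls -l /etc/apache2/certs        # expect links like 1a2b3c4d.0 -> mypos.pem
      # reload Apache so the certificate directory is re-read
      sudo /etc/init.d/apache2 reload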

    Read the article

  • Cannot run logwatch due to Date::Manip issue

    - by Quintin Par
    I tried to run logwatch as follows:

      [root@machine cron.daily]# ./0logwatch
      ERROR: Date::Manip unable to determine TimeZone.
      Execute the following command in a shell prompt:
        perldoc Date::Manip
      The section titled TIMEZONES describes valid TimeZones and where they can be defined.

    My date is as follows:

      [root@machine cron.daily]# date
      Thu Aug 23 06:25:21 GMT 2012

    Now, based on details in various forums, I tried to fix this by setting /etc/timezone to "+0800", but it didn't work. My /etc/localtime points to /usr/share/zoneinfo/GMT and is managed by puppet. How do I go about fixing this? I still want all my machines to be in the GMT timezone.

    EDIT: Sadly, both of the suggested changes are not working:

      [root@machine cron.daily]# cat /etc/TIMEZONE
      UTC

    and, per Quanta's suggestion:

      [root@machine cron.daily]# cat ~/.bash_profile
      # .bash_profile
      # Get the aliases and functions
      if [ -f ~/.bashrc ]; then
              . ~/.bashrc
      fi
      # User specific environment and startup programs
      PATH=$PATH:$HOME/bin
      export TZ=GMT
      export PATH
      [root@machine cron.daily]# source ~/.bash_profile
      [root@machine cron.daily]# ./0logwatch
      ERROR: Date::Manip unable to determine TimeZone.
      Execute the following command in a shell prompt:
        perldoc Date::Manip
      The section titled TIMEZONES describes valid TimeZones and where they can be defined.
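
    A hedged observation: older Date::Manip releases do not look at /etc/localtime and essentially want a TZ environment variable (or a setting in their own config file), and an export in ~/.bash_profile never reaches the cron.daily run because cron does not source it. Verifying interactively and then setting TZ where the cron job actually runs is usually enough - the GMT value below keeps everything in GMT as intended:

      # confirm interactively that a TZ value satisfies Date::Manip
      TZ=GMT perl -MDate::Manip -e 'print ParseDate("now"), "\n"'

      # then make TZ visible to the cron run itself, e.g. as a variable assignment
      # near the top of /etc/crontab (or /etc/anacrontab on RHEL 6), or exported
      # at the top of the 0logwatch wrapper:
      #   TZ=GMT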

    Read the article

  • Ubuntu 12.04 - Pound Reverse Proxy and Adobe Flex/Flash Auth

    - by James
    First time posting. I have a completely fresh install of Ubuntu 12.04 acting as a reverse proxy gateway to our internal network. Our setup is that we have one external IP but three domains we would like to point to various web servers on our internal network. It's not so much a load balancing or caching issue etc. - merely routing some client browsers to a port 80 web page (to adhere to some stricter corporate policies regarding placing port numbers after domain names). I have gone with Pound and everything seems to be working fine; static pages load, and everything is good - with the exception of a Flash/Flex-based web client for a Digital Asset Management program. The actual static page loads fine; it is just at the moment of entering credentials, be they correct or incorrect, and hitting login, that there is no response whatsoever - neither a rejection nor a confirmation. So the request back to the internal server can't be getting through. I have googled extensively and there might be a solution in a crossdomain.xml file? The documentation isn't very clear, and we are not the authors of the DAM app and have no control over the code on the Flash/Flex side. Questions: Is there a particular config file/solution for Pound that allows Flash/Flex auth information to be forwarded? Is there another reverse proxy program (nginx?) that allows this type of config? Am I looking at this entirely the wrong way - should Flash/Flex fundamentally not be allowed to have this access?
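
    For what it's worth, Flash/Flex clients fetch a policy file from the web root of the host they are posting back to before allowing the request, so serving a crossdomain.xml through the proxy for the DAM backend is a common first thing to try. This is a hedged guess rather than a confirmed fix; a minimal, wide-open test policy at the proxied site's web root might look like this (it would need tightening for production):

      <?xml version="1.0"?>
      <cross-domain-policy>
          <allow-access-from domain="*" secure="false"/>
      </cross-domain-policy>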

    Read the article

  • How to force certain traffic through GRE tunnel?

    - by wew
    Here's what I do.

    Server (public IP is 222.x.x.x):

      echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
      sysctl -p
      iptunnel add gre1 mode gre local 222.x.x.x remote 115.x.x.x ttl 255
      ip addr add 192.168.168.1/30 dev gre1
      ip link set gre1 up
      iptables -t nat -A POSTROUTING -s 192.168.168.0/30 -j SNAT --to-source 222.x.x.x
      iptables -t nat -A PREROUTING -d 222.x.x.x -j DNAT --to-destination 192.168.168.2

    Client (public IP is 115.x.x.x):

      iptunnel add gre1 mode gre local 115.x.x.x remote 222.x.x.x ttl 255
      ip addr add 192.168.168.2/30 dev gre1
      ip link set gre1 up
      echo '100 tunnel' >> /etc/iproute2/rt_tables
      ip rule add from 192.168.168.0/30 table tunnel
      ip route add default via 192.168.168.1 table tunnel

    Up to this point everything seems to be going right. But then, first question: how do I use the GRE tunnel as the default route? The client computer is still using the 115.x.x.x interface as its default. Second question: how do I force only ICMP traffic to go through the tunnel, and everything else through the default interface? I tried doing this on the client computer:

      ip rule add fwmark 200 table tunnel
      iptables -t mangle -A OUTPUT -p udp -j MARK --set-mark 200

    but after doing this my ping times out (if I don't run the two commands above and use ping -I gre1 <ip> instead, it works). Later I also want to do other things, like sending only UDP port 53 through the tunnel, etc. Third question: on the client computer I force one mysql program to listen on the gre1 interface, 192.168.168.2. The client computer also has one more public interface (IP 114.x.x.x)... How do I forward traffic properly using iptables and routing so mysql also responds to requests coming in on this 114.x.x.x public interface?
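
    For the ICMP-only case, a sketch under the obvious assumptions (the mark value is arbitrary, OUTPUT only marks locally generated traffic, and reverse-path filtering usually has to be loosened when replies come back over gre1):

      # client side: mark ICMP (rather than UDP) and send marked traffic via the tunnel table
      iptables -t mangle -A OUTPUT -p icmp -j MARK --set-mark 200
      ip rule add fwmark 200 table tunnel
      ip route add default via 192.168.168.1 dev gre1 table tunnel
      ip route flush cache

      # loosen reverse-path filtering so replies arriving on gre1 are not dropped
      sysctl -w net.ipv4.conf.all.rp_filter=2
      sysctl -w net.ipv4.conf.gre1.rp_filter=2

    The same pattern should extend to the DNS case by matching -p udp --dport 53 instead of -p icmp.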

    Read the article

  • Nginx with PAM authentication through pam_script

    - by Envek
    Has anyone set up such a configuration? It's not working for me. I've installed nginx-extras on Ubuntu 12.04 (it's built with the PAM module) and added this to the site config:

      location ^~ /restricted_place/ {
          auth_pam "Please specify login and password from main_site";
          auth_pam_service_name "nginx";
      }

    Afterwards, in /etc/pam.d/nginx:

      auth required pam_script.so dir=/path/to/my/auth_scripts

    and I wrote the simplest possible /path/to/my/auth_scripts/pam_script_auth (I've also tried writing more complicated scripts):

      #!/bin/sh
      exit 0 # should allow anyone

    It doesn't work. The script is launched (I wrote a fully functional script that executes successfully, checks credentials, writes to its own log, returns the correct exit code, and takes noticeably long to run). But no access is granted - only rejected. In /var/log/nginx/error.log the following record appears:

      2012/09/13 10:44:42 [alert] 1666#0: waitpid() failed (10: No child processes)

    If I specify in /etc/pam.d/nginx:

      auth required pam_unix.so

    and grant the www-data user the right to read /etc/shadow, unix authorization works fine. But script auth doesn't work. I can't understand where the trouble is: in the nginx module, or in the pam_script module?

    Read the article

  • o2cb thinks ocfs2 cluster is still online, and refuses to shut down

    - by Kendall
    I have a handful of OpenSuSE 11.2 servers that utilize OCFS2 volumes. I've noticed that o2cb can't figure out when the OCFS2 cluster is actually mounted. For example, when I try to shut down o2cb after stopping OCFS2, o2cb refuses to shut down because it thinks OCFS2 is still up! After stopping OCFS2 I try to stop o2cb...

      hamguy:/dev/disk/by-label # /etc/init.d/o2cb stop
      Stopping O2CB cluster ocfs2: Failed
      Unable to stop cluster as heartbeat region still active

    So I check the status...

      hamguy:/dev/disk/by-label # /etc/init.d/o2cb status
      Driver for "configfs": Loaded
      Filesystem "configfs": Mounted
      Stack glue driver: Loaded
      Stack plugin "o2cb": Loaded
      Driver for "ocfs2_dlmfs": Loaded
      Filesystem "ocfs2_dlmfs": Mounted
      Checking O2CB cluster ocfs2: Online
        Heartbeat dead threshold = 31
        Network idle timeout: 30000
        Network keepalive delay: 2000
        Network reconnect delay: 2000
      Checking O2CB heartbeat: Active

    And double-check OCFS2...

      hamguy:/dev/disk/by-label # /etc/init.d/ocfs2 status
      Configured OCFS2 mountpoints: /u/conf /u/logs /u/backup /u/client /u/data /u/mdata

    OCFS2 is clearly down, while o2cb clearly thinks otherwise. The versions of OCFS2 and o2cb are...

      kendall@hamguy:~> rpm -qa | grep ocfs2
      ocfs2console-1.4.1-25.6.x86_64
      ocfs2-tools-o2cb-1.4.1-25.6.x86_64
      ocfs2-tools-1.4.1-25.6.x86_64
      kendall@hamguy:~> rpm -qa | grep o2cb
      ocfs2-tools-o2cb-1.4.1-25.6.x86_64

    What causes this, and is there a way around it? If I try to reboot the machine, it will just sit there forever until you physically power cycle it. That obviously is a bit of a problem. Any insight is appreciated, thank you. Kendall

    Read the article

  • SSH over HTTPS with proxytunnel and nginx

    - by Thermionix
    I'm trying to set up an SSH-over-HTTPS connection using nginx. I haven't found any working examples, so any help would be appreciated!

      ~$ cat .ssh/config
      Host example.net
        Hostname example.net
        ProtocolKeepAlives 30
        DynamicForward 8118
        ProxyCommand /usr/bin/proxytunnel -p ssh.example.net:443 -d localhost:22 -E -v -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)"

      ~$ ssh [email protected]
      Local proxy ssh.example.net resolves to 115.xxx.xxx.xxx
      Connected to ssh.example.net:443 (local proxy)
      Tunneling to localhost:22 (destination)
      Communication with local proxy:
      -> CONNECT localhost:22 HTTP/1.0
      -> Proxy-Connection: Keep-Alive
      -> User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)
      <- <html>
      <- <head><title>400 Bad Request</title></head>
      <- <body bgcolor="white">
      <- <center><h1>400 Bad Request</h1></center>
      <- <hr><center>nginx/1.0.5</center>
      <- </body>
      <- </html>
      analyze_HTTP: readline failed: Connection closed by remote host
      ssh_exchange_identification: Connection closed by remote host

    Nginx config on the server:

      ~$ cat /etc/nginx/sites-enabled/ssh
      upstream tunnel {
          server localhost:22;
      }
      server {
          listen 443;
          server_name ssh.example.net;
          location / {
              proxy_pass http://tunnel;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_redirect off;
          }
          ssl on;
          ssl_certificate /etc/ssl/certs/server.cer;
          ssl_certificate_key /etc/ssl/private/server.key;
      }

      ~$ tail /var/log/nginx/access.log
      203.xxx.xxx.xxx - - [08/Feb/2012:15:17:39 +1100] "CONNECT localhost:22 HTTP/1.0" 400 173 "-" "-"
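
    A hedged explanation of the 400: the access log shows proxytunnel sending an HTTP CONNECT request, and nginx's proxy module does not implement CONNECT (nor can an http{} proxy_pass shuttle raw SSH bytes), so nginx of this vintage cannot terminate this kind of tunnel on its own. One workaround people used instead is sslh, which owns port 443, sniffs the first bytes of each connection and hands SSH and TLS to different backends; the flags below assume the Debian/Ubuntu sslh package:

      # sketch: give port 443 to sslh and move the real HTTPS vhost to a local port
      sudo apt-get install sslh
      sslh --user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:4443
      # the client could then drop proxytunnel entirely:
      #   ssh -p 443 user@example.net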

    Read the article

  • I must clear my cmos to be able to boot

    - by Fredou
    I have had this Asus P7P55D-E Pro for about 8 months (got it last July), and for the last 3-4 days I cannot boot without clearing my CMOS. What I have is:

      Seasonic M12D 750W
      ASUS P7P55D-E Pro
      Intel Core i5 760 Quad Core Processor Lynnfield LGA1156
      XFX GeForce® 8800 GT Alpha Dog 512MB DDR3 Standard (PV-T88P-YDF4)
      2x Corsair XMS3 CMX4GX3M2A1600C7 4GB DDR3 2X2GB DDR3-1600 CL 7-8-7-20

    I tried removing all the unnecessary stuff: HD/DVD/PCI card/USB cables etc. I tried with only one DIMM installed instead of my four, each one individually; it didn't work. I tried changing the battery (there go a few dollars to nowhere); it didn't work. If I don't reset the CMOS it sometimes gets stuck on the RAM LED, sometimes on the BOOT DEVICE LED; when that happens, it hangs at CPU speed detection. When I boot right after the reset, I MUST click the F2 option (boot with default BIOS settings). If I go into the BIOS and save/restart, I have to reset it again. Once booted, everything is rock-solid stable; I've tried memtest, CPU stress tests, etc., without issue. What should my next step be? Trying a new PSU (I need to find one)? Doing an RMA (I need this motherboard since it's my only computer)? Something else?

    Read the article

  • Understanding what needs to be in place for a server to send outgoing email from a linux box

    - by Matt
    I am attempting to configure an openSUSE 11.1 box to send outgoing email for a domain that the same server is hosting. I don't understand enough about SMTP servers and the like to know what needs to be in place and working. The system already had Postfix installed, and I confirmed it was running via:

      sudo /etc/init.d/postfix status

    I examined the Postfix config file in /etc/postfix/main.cf and configured a couple of items regarding the domain/host name and such, but left it largely at the defaults. I attempted to send an email from the command line with the following command:

      echo "test 123" | mail -s "test subject" [email protected]

    where differentdomain.com is not the same domain as the one being hosted on the server. However, the email never reaches the target account. Any suggestions?

    EDIT: In the Postfix log (/var/log/mail.info - there's nothing in .err) I see that Postfix is trying to connect to what appears to be a different SMTP server on our network, with the connection refused:

      connect to ourdomain.com.inbound15.mxlogic.net[our ip address]:25: Connection refused

    However, I can't figure out why it is 1) trying to connect to that server and 2) not just sending the messages itself... I mean, isn't Postfix an SMTP server? I did a grep -ri on ourdomain from /etc and see no configuration files anywhere telling it to do this. Why is it?
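
    A couple of hedged diagnostics: the mxlogic.net host in the log is simply whatever Postfix resolved for that recipient (the domain's MX record, or a configured relayhost/transport entry), so checking both usually explains where it is trying to go; the "Connection refused" then points at outbound port 25 being blocked or that relay not accepting this machine.

      # is a relayhost or transport map steering mail somewhere specific?
      postconf relayhost transport_maps
      # what MX does the recipient domain resolve to from this box?
      dig +short MX differentdomain.com
      # see what is queued and why each message is deferred
      mailq
      tail -f /var/log/mail.info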

    Read the article

  • ubuntu: Installed php-mcrypt but it doesn't show up in phpinfo()

    - by jules
    A web app I'm trying to install on my Ubuntu 10.04 LTS box requires mcrypt, and is generating this error: Fatal error: Call to undefined function mcrypt_module_open(). I know this is the same question as this one: Installed php-mcrypt but it doesn't show up in phpinfo(), but I tried several things, none of which worked, and I have additional questions. I would comment on the original thread but don't have enough reputation to do so; forgive me for the duplicate question. My versions of PHP and mcrypt are (both installed via apt-get):

      php: 5.3.2-1ubuntu4.10
      mcrypt: 5.3.2-0ubuntu

    Doing a php -m shows that the mcrypt module is installed. I installed mcrypt and php5-mcrypt via apt-get. Also, I'm using nginx as my web server. I have tried reinstalling mcrypt and restarting nginx, but I still can't get mcrypt to show up in phpinfo(), and calls to mcrypt are still broken. Here is some more info:

      $ php -i | grep "mcrypt"
      /etc/php5/cli/conf.d/mcrypt.ini,
      mcrypt
      mcrypt support => enabled
      mcrypt.algorithms_dir => no value => no value
      mcrypt.modes_dir => no value => no value

    I also checked that mcrypt is on in /etc/php5/cli/conf.d/mcrypt.ini and /etc/php5/cgi/conf.d/mcrypt.ini. Lastly, I'm using FastCGI with nginx. I googled around and saw suggestions to restart php5-fpm. I couldn't find php5-fpm in apt-get, and I'm not sure if I still need php5-fpm since I already have FastCGI. Is there anything else I'm missing?
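
    One hedged check: php -m and php -i only describe the CLI binary, while the FastCGI workers nginx talks to parse their own ini set (the cgi conf.d on this kind of setup) and keep it in memory until they are restarted. A phpinfo() page served through nginx, rather than the CLI, shows what those workers actually loaded:

      # put a phpinfo page under the nginx document root (path is an assumption)
      echo '<?php phpinfo();' | sudo tee /var/www/info.php
      # look for the mcrypt section and the "Additional .ini files parsed" list
      curl -s http://localhost/info.php | grep -iE 'mcrypt|additional .ini'
      # then restart however php-cgi was spawned (spawn-fcgi / init script / etc.)
      # so /etc/php5/cgi/conf.d/mcrypt.ini is actually re-read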

    Read the article

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site, or when travelling. The question is how to best synchronize the two so I can switch between them more readily. I've solved a few obvious things by using online services: Email is hosted on IMAP. Working files are in Dropbox. Source code is managed in git. However, the following are things I always miss when jumping on the laptop: Installed Applications (current versions) Installed libraries & utilities (/usr/local) Apache VirtualHosts & other configurations (/etc) Disk image files for VMs My current method is to connect the MacBook via Firewire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time consuming due to things like my virtual machine image files, cached emails, etc. How can I make this faster & easier? Can you recommend a solution for configuration management (so I can repeatably install & configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?
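
    For the "bring the MacBook up to date nightly over the network" part, a small hedged sketch - the paths, hostname and exclude list below are only examples - of an incremental rsync that skips the items that currently dominate the copy time:

      # nightly incremental sync of the home directory over the LAN (examples only)
      rsync -avz --delete \
          --exclude 'Library/Caches/' \
          --exclude 'VirtualBox VMs/' \
          --exclude '.Trash/' \
          /Users/me/ user@macbook.local:/Users/me/
      # run from launchd or cron; after the first pass only changed files transfer,
      # so excluded VM images and mail caches stop dominating the runtime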

    Read the article

  • Tips on setting up a virtual lab for self-learning networking topics

    - by Harry
    I'm trying to self-learn the following topics on Linux (preferably Fedora):

      - Network programming (using the sockets API), especially across proxies and firewalls
      - Proxies (of various kinds: transparent, http, socks...), firewalls (iptables) and 'basic' Linux security
      - SNAT, DNAT
      - Network administration power tools: nc, socat (with all its options), ssh, openssl, etc.

    Now, I know that, ideally, it would be best if I had 'enough' physical nodes and physical network equipment (routers, switches, etc.) for this self-learning exercise. But, obviously, I don't have the budget or the physical space, nor do I want to be wasteful - especially when things could perhaps be simulated/emulated in a Linux environment. I have one personal workstation, which is a single-homed Fedora desktop with 4GB memory, 200+ GB disk, and a 4-core CPU. I may be able to get 3 to 4 additional low-end Fedora workstations. But all of these - including mine - will always remain strictly behind our corporate firewall :-( Now, I know I could use VirtualBox-based virtual nodes, but I don't know if there are any better alternatives disk- and memory-footprint-wise. Would you be able to give me some tips or suggestions on how to get started setting up this little budget- and space-constrained 'virtual lab' of mine? For example, how would I create virtual routers? Has someone attempted this sort of thing before: namely, creating a virtual network lab behind a corporate firewall for learning/development/testing purposes? I hope my question is not vague or too open-ended. Basically, right now, I don't know how to best leverage the Linux environment and the various 'goodies' it comes with, and I would rather buy physical devices only when absolutely necessary.
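
    One low-footprint option worth mentioning for the "virtual routers" part (a sketch, assuming an iproute2 recent enough to have ip netns): network namespaces plus veth pairs can emulate routers and multi-homed hosts on a single Fedora box with no per-node disk image at all, and they work directly with iptables, SNAT/DNAT, nc and socat.

      # a "router" namespace with one leg into a "hostA" namespace (run as root)
      ip netns add router
      ip netns add hostA
      ip link add vethA type veth peer name vethA-r
      ip link set vethA   netns hostA
      ip link set vethA-r netns router
      ip netns exec hostA  ip addr add 10.0.1.2/24 dev vethA
      ip netns exec hostA  ip link set vethA up
      ip netns exec router ip addr add 10.0.1.1/24 dev vethA-r
      ip netns exec router ip link set vethA-r up
      ip netns exec router sysctl -w net.ipv4.ip_forward=1
      ip netns exec hostA  ip route add default via 10.0.1.1
      # repeat for a hostB on 10.0.2.0/24, then practise iptables/SNAT/DNAT inside "router"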

    Read the article

  • Change XRDP keyboard layout to en-gb Ubuntu 12.04

    - by Earl Sven
    Does anybody know how to change the keyboard layout to en-gb in an XRDP session on Ubuntu 12.04? I am using mstsc.exe to connect to an XRDP server hosting an Xvnc session, but I cannot work out how to apply the UK keyboard layout. A bit of googling has yielded these instructions, which allow me to change the keymap; however, using the keymap file I downloaded from here, I lose the ability to use the arrow keys, Home/End, etc. Comparing the file with the standard one, there are substantially more differences than I would expect considering the similarity between the layouts. I only have RDP access to the box, so I don't seem to be able to actually generate a new layout per the instructions above - maybe it's a local console thing? Also, I can't change either the RDP client used or the RDP server, as they are my only access to the system; I don't have local console access. I do have root privileges on the OS, however. Any thoughts?

    Edit: I have found http://xrdp.sourceforge.net/documents/keymap/newkeymap.html (apologies for not typing the link properly, but the antispam filter won't let me post more than 2 links) - the documentation on the XRDP SourceForge page which describes the keymap file format. It indicates the values in the keymap files are Unicode (0x64 etc.); however, the files I already have on my system seem to use a different format (0:0 or 65307:27 etc.). Does anybody know what the difference is?
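
    A hedged idea, assuming the Ubuntu xrdp package ships the xrdp-genkeymap helper it is normally built with: the tool needs a running X display, which is why the docs assume a console, but the Xvnc session reached over RDP is itself an X display, so the keymap can be regenerated from inside that session (the file name below follows xrdp's km-<layout id>.ini convention, 0809 being en-GB):

      # inside the running Xvnc/xrdp session
      setxkbmap gb                                  # switch the live session to the UK layout
      sudo xrdp-genkeymap /etc/xrdp/km-0809.ini     # regenerate the en-gb keymap from the display
      sudo service xrdp restart                     # new connections pick up the regenerated file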

    Read the article

  • Domain authentication over OPEN wireless pre-logon (Windows 7 Pro) - No logon servers avail

    - by Shadow00Caster
    I have a plethora of laptops that are joined to an AD domain. I have an enterprise wireless system set up; the users of these laptops will be using an OPEN, unsecured SSID which will ultimately have a captive portal that uses RADIUS-AD auth, plus firewall rules that allow access - before captive-portal auth - to the proper IPs/ports of the DCs etc. for authentication. I already have other laptops/users connecting to another SSID with 802.1X and SSO; everything works perfectly pre-logon. My problem is with this open network: for some reason I cannot get the machines to authenticate to AD. The laptops connect to the wireless network - I confirm this on the controller and can ping the laptop at startup. I captured traffic on the wires to the two DCs these machines authenticate against; I can see a DNS SOA update from a laptop I'm testing with, and I can ping that test laptop from both DCs. When I try to log on: "There are currently no logon servers available to service the logon request." The capture shows no incoming connections to either DC, even though the laptop is connected and pingable. Any help is greatly appreciated.

    Read the article

  • How do I install the main repositories for RHEL6

    - by eisaacson
    We've set up RHEL 6 on a new server. As far as we can tell, our subscription is all set up properly. However, when I run yum repolist, it doesn't show any repositories. /etc/yum.repos.d/redhat.repo is empty. I tried pasting in the content from another RHEL 6 server's redhat.repo, but as soon as I run yum it wipes it out again. I just need to get the basic Red Hat repositories set up so I can install packages.

    EDIT: Using the GUI, I went to System > Administration > Red Hat Subscription Manager. Under the 'Products' tab, it did not show any products.

    EDIT: When I run yum update, here's what I get:

      # yum update
      Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
      This system is receiving updates from Red Hat Subscription Management.
      Setting up Update Process
      No Packages marked for Update

    When I log in to the Red Hat customer portal, it shows that subscription as active.

    EDIT: To make sure I wasn't having a subscription issue, I re-registered and re-subscribed. I get all the same results.

      # subscription-manager register --force
      # subscription-manager subscribe --pool=*redacted*

    EDIT: contents of /etc/yum.conf:

      [main]
      cachedir=/var/cache/yum/$basearch/$releasever
      keepcache=0
      debuglevel=2
      logfile=/var/log/yum.log
      exactarch=1
      obsoletes=1
      gpgcheck=1
      plugins=1
      installonly_limit=3

    contents of /etc/yum/pluginconf.d/rhnplugin.conf:

      [main]
      enabled = 0
      gpgcheck = 1
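
    A few hedged checks: redhat.repo is regenerated by the subscription-manager yum plugin from the entitlement certificates under /etc/pki/entitlement, so an empty file usually means missing entitlement certificates or a disabled plugin rather than a yum problem.

      # is the subscription-manager yum plugin enabled?
      cat /etc/yum/pluginconf.d/subscription-manager.conf
      # does the system actually have an entitlement attached, with certs on disk?
      subscription-manager list --consumed
      ls /etc/pki/entitlement/
      # regenerate the certificates and redhat.repo, then re-check
      subscription-manager refresh
      yum repolist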

    Read the article

  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example: instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used instead. I'm using bind9 and here are my config files:

      /etc/bind/named.conf.local

      zone "company.int" {
          type master;
          file "/etc/bind/db.company.int";
      };

      /etc/bind/db.company.int

      ; $TTL 3600
      @   IN  SOA  company.int. company.localhost. (
              20120617  ; Serial
              604800    ; Refresh
              86400     ; Retry
              2419200   ; Expire
              604800 )  ; Negative Cache TTL
      ;
      @       IN  NS     company.int.
      @       IN  A      127.0.0.1
      @       IN  AAAA   ::1
      ; CNAME
      mysql   IN  CNAME  xxxx.eu-west-1.rds.amazonaws.com.

    The dig command shows that my alias is working as expected:

      $ dig mysql.company.int
      ...
      ;; ANSWER SECTION:
      mysql.company.int. 3600 IN CNAME xxxx.eu-west-1.rds.amazonaws.com.
      xxxx.eu-west-1.rds.amazonaws.com. 60 IN CNAME ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.
      ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com. 589575 IN A zzz.zz.zz.zzz
      ...

    As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation gives me a timeout:

      $ mysql -uuser -ppassword -hmysql.company.int
      ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110)

    Any ideas? Thanks in advance!
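
    Since the name already resolves, a hedged next step is to separate DNS from reachability: if a raw TCP connection to port 3306 also times out, the usual suspects are the RDS security group not allowing this EC2 instance (or its security group) and, less often, the endpoint resolving to an address the instance cannot reach.

      # the alias and the real endpoint should resolve to the same address
      dig +short mysql.company.int
      dig +short xxxx.eu-west-1.rds.amazonaws.com
      # is anything reachable on the MySQL port at all?
      nc -zv mysql.company.int 3306
      # a timeout here points at the RDS security group / subnet settings, not bind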

    Read the article

  • Ubuntu 12.04 glusterfs volume failed to mount at boot time

    - by user183394
    I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64-bit minimal server, to test out glusterfs 3.2.5 from the official Ubuntu repo. Two of them form a mirrored pair (i.e. replica 2), and five of them are clients. I am still new to this file system and would like to gain some "hands-on" experience. The setup was mostly uneventful, until I put the following into each glusterfs client's /etc/fstab:

      192.168.122.120:/testvol  /var/local/testvol  glusterfs  defaults,_netdev  0  0

    where 192.168.122.120 is the IP address of the first "glusterfs server". If I issue either a manual mountall or a

      mount.glusterfs 192.168.122.120:/testvol /var/local/testvol

    on the CLI, mount shows that the volume is successfully imported. But once a client is rebooted, after it comes back up the volume is not mounted! I searched the Internet and found this article, but since I am not running both client and server on the same node, IMHO it's not strictly applicable. So, as a kludgy workaround, I put

      sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol

    into each client node's /etc/rc.local. It seems to be able to get the volume mounted on each node, as far as I can tell. But this is quite ugly, and I would appreciate a hint as to how to resolve this glusterfs non-boot-time-mounting issue correctly. Note that I used the IP address of the first "glusterfs server" although the /etc/hosts of all nodes has been populated with their hostnames. I figured that the use of an IP address is more robust. --Zack
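
    Not a root-cause fix, but if the rc.local route has to stay for now, a retry loop that waits until the mount actually succeeds is a little less fragile than a fixed sleep (a sketch; the retry count and delay are arbitrary):

      # /etc/rc.local stopgap: retry the mount for up to roughly 60 seconds
      for i in $(seq 1 12); do
          mountpoint -q /var/local/testvol && break
          mount.glusterfs 192.168.122.120:/testvol /var/local/testvol && break
          sleep 5
      done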

    Read the article

  • Using NginX and Apache alongside for both static and dynamic files

    - by faridv
    Background: I've searched a lot and found these useful threads about using Apache or nginx for static or dynamic files. But they are old (mostly from 1 or 2 years ago), and I think both web servers, specifically nginx, have had important changes in performance and usage since then. So I think taking another look at this issue can't be that bad.

      Nginx (for static files) and Apache (for dynamic content)?
      nginx better than apache for dynamic content? [closed]
      Apache or NGINX for PHP?
      Nginx as reverse proxy to Apache with only dynamic content?

    My question: I have a PHP web application with lots of dynamic files and lots of static content (videos, images, etc.), and it has been running on a CentOS 6 server with Apache 2.2 for the last 2 months. In the past few days the number of our site visitors has grown very fast, and I just thought that if this number continues to increase at the current rate, we will need to change many things (web server, application, etc.) to prevent failures. Because of the hardware limitations we are facing, I thought it best for us to start with the web server. Should I start with something else? Should I try to increase the performance of my PHP application and forget about the web server for now (even if that's going to take a long time!)? Because of heavy usage of .htaccess files (for redirection, rewrites, etc.), I think it's going to be painful to migrate to nginx as the default web server, or maybe even only for dynamic files. Does this mean that I can't even use nginx as a reverse proxy? I'm not sure whether the latest stable versions of nginx and PHP-FPM have better performance than my current Apache, and my limitations (too many things) won't let me give it a try. Which one is doing better currently? What will I lose by migrating to nginx? To make it short, what should I do?
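
    For reference, the middle-ground setup the older threads describe - nginx in front serving the static files and proxying everything else to Apache, so the existing .htaccess rules keep working untouched - is only a few lines of nginx configuration (a sketch; the paths, backend port and extension list are assumptions):

      # /etc/nginx/conf.d/site.conf - sketch: static files from disk, the rest to Apache on :8080
      server {
          listen 80;
          server_name example.com;
          root /var/www/site;

          location ~* \.(jpg|jpeg|png|gif|css|js|ico|mp4)$ {
              expires 30d;
              access_log off;
          }

          location / {
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }
      # Apache then moves to "Listen 8080" and keeps processing .htaccess exactly as before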

    Read the article
