Search Results

Search found 33182 results on 1328 pages for 'linux port'.


  • Configure iptables with a bridge and static IPs

    - by Andrew Koester
    I have my server set up with several public IP addresses, with a network configuration as follows (with example IPs):

      eth0 \- br0 - 1.1.1.2
              |- [VM 1's eth0]
              |    |- 1.1.1.3
              |    \- 1.1.1.4
              \- [VM 2's eth0]
                   \- 1.1.1.5

    My question is, how do I set up iptables with different rules for the actual physical server as well as the VMs? I don't mind having the VMs do their own iptables, but I'd like br0 to have a different set of rules. Right now I can only let everything through, which is not the desired behavior (as br0 is exposed). Thanks!
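
    A starting point, sketched under the assumption that traffic for the VMs should pass straight through the bridge while only host-destined traffic is filtered (the SSH port and policy are illustrative, not from the question):

      # Filter only traffic addressed to the host itself on br0
      iptables -P INPUT DROP
      iptables -A INPUT -i br0 -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -i br0 -p tcp --dport 22 -j ACCEPT   # e.g. SSH to the host

      # Let bridged VM traffic pass untouched; the physdev match tags
      # packets that entered on a bridge port
      iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

    Note that bridged frames only traverse iptables at all when net.bridge.bridge-nf-call-iptables is set to 1.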

    Read the article

  • DNS propagation

    - by Paddington
    I have 1 primary DNS server (ns1.mydomain.com) running on Fedora and 2 secondary ones (ns2 and ns3). DNS changes made on my web servers first go to the primary name server and then propagate to the secondary servers. After making a DNS change for a domain on the web server, I can't see the new DNS information on my ns1 when I perform:

      dig @ns1 A blahblah.com

    I then went to the master records on the name server (which runs named) in the directory /var/named/run-root/var/named/masters, and I see the A record has been updated appropriately. Tailing the log /var/log/messages shows no errors. What could be the issue?
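
    When the zone file on disk is newer than what named serves, the usual suspects are a serial number that wasn't incremented or a zone that was never reloaded. A couple of quick checks, as a sketch (the domain is a stand-in):

      # Compare the SOA serial named is serving with the one in the zone file
      dig @ns1 blahblah.com SOA +short

      # Ask named to re-read the zone (assumes rndc is configured)
      rndc reload blahblah.com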

    Read the article

  • Is it possible to reinstall packages in Ubuntu without an internet connection?

    - by javamatt
    Hi everyone, while experiencing some massive problems with MySQL, I completely removed a package called rsyslog, and I can no longer get on the internet to use the package manager to correct my mistake. I also got rid of librdf0 (oops). I would like to download the missing packages onto a CD with another computer and manually reinstall them on my Ubuntu platform. Any ideas where to find these? (I am assuming this is the package I need. Either way, I still need to get access to the correct packages and install them.) Thank you all very much in advance. Matt
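
    A sketch of the offline route, assuming a second machine running the same Ubuntu release and architecture is available:

      # On the connected machine: fetch the .deb files
      # (on older releases without apt-get download, fetch them
      # from packages.ubuntu.com instead)
      apt-get download rsyslog librdf0

      # Burn the .deb files to CD, then on the offline machine:
      sudo dpkg -i rsyslog_*.deb librdf0_*.deb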

    Read the article

  • Add the "SAMBA File Server" role to a server running SCO Unix?

    - by I.T. Support
    We're trying to get network access from Windows servers to a hard drive on a server running SCO Unix. I believe we need to add the "SAMBA File Server" role to the server so we can mount the drive as a network share that we can access from Windows. Is it possible to add the Samba role to a SCO Unix operating system? Are there any gotchas or concerns? Thanks
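
    Samba is a package rather than a Windows-style "role", and ports have existed for SCO OpenServer and UnixWare (e.g. via SCO Skunkware). Assuming one is installable on this release, the share definition itself is ordinary smb.conf syntax; a minimal sketch, where the share name and path are assumptions:

      [global]
         workgroup = WORKGROUP
         security = user

      ; expose the drive's mount point to Windows clients
      [data]
         path = /mnt/data
         read only = no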

    Read the article

  • eAccelerator problems. Please help

    - by Hakzona
    I just recently installed eAccelerator and switched PHP to DSO. Everything was fine at first: server load was about 1-2 (4 CPUs), but after some time the server became overloaded (load climbed to 250) and stopped. Under suPHP the server was being overloaded by traffic, which is why I switched to eAccelerator, and now I am lost... Can someone explain that?

    Read the article

  • Root SSH/SFTP Always 777

    - by Fluidbyte
    I have an Ubuntu server that I'm connecting to via SFTP (and also an SSHFS mount locally). When I move a file to the server via the mount, I need it to have permissions set to 777. I've added umask 000 to the .bashrc file at the advice of a friend, and it doesn't appear to be working. Basically I'm working completely in a restricted folder and need root to always leave the permissions open, whether I'm SSH'ed in or moving files to the server.
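
    SFTP sessions typically never read .bashrc, which would explain why the umask there has no effect. Two places it can be forced instead, sketched assuming a reasonably recent OpenSSH (internal-sftp accepts -u from 5.4 on):

      # /etc/ssh/sshd_config -- force a umask for all SFTP sessions
      Subsystem sftp internal-sftp -u 0000

      # For the local SSHFS mount, let FUSE impose the mode instead
      sshfs root@server:/restricted/folder /mnt/point -o umask=0000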

    Read the article

  • Setting umask for all users

    - by Yarin
    I'm trying to set the default umask to 002 for all users including root on my CentOS box. According to this and other answers, this can be achieved by editing /etc/profile. However the comments at the top of that file say: "It's NOT a good idea to change this file unless you know what you are doing. It's much better to create a custom.sh shell script in /etc/profile.d/ to make custom changes to your environment, as this will prevent the need for merging in future updates." So I went ahead and created the file /etc/profile.d/myapp.sh with the single line: umask 002. Now, when I create a file logged in as root, the file is born with 664 permissions, the way I had hoped. But files created by my Apache wsgi application, or files created with sudo, still default to 644 permissions:

      $ touch newfile (as root):                     Result = 664 (works)
      $ sudo touch newfile:                          Result = 644 (doesn't work)
      Files created by the Apache wsgi app:          Result = 644 (doesn't work)
      Files created by Python's RotatingFileHandler: Result = 644 (doesn't work)

    Why is this happening, and how can I ensure 664 file permissions system-wide, no matter what creates the file? UPDATE: I ended up finding a cleaner solution to this on a per-directory basis using ACLs, which I describe here.
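
    The pattern fits how the umask is applied: /etc/profile.d is only sourced by login shells, and neither sudo nor daemons like Apache start one. For the per-directory ACL approach the update alludes to, a sketch (the path is an assumption):

      # New files and subdirectories under /srv/myapp inherit group rw,
      # regardless of the creating process's umask
      setfacl -R -d -m u::rwx,g::rwx,o::rx /srv/myapp
      getfacl /srv/myapp    # the "default:" lines show the inherited ACL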

    Read the article

  • How to generate new CSRs for TLS use in sendmail?

    - by Mikey B
    SendMail 8.13.8 | CentOS 5.x. Hi guys, I'm using CA-signed TLS certificates on my sendmail server and they are up for renewal soon. Our new CA doesn't like our old CSR, so I need to generate a new one. Can someone point me to the procedure for doing this (without affecting the production certs that are already in use)? I'm paranoid about overwriting the old TLS certs in the process of generating a CSR. Most of the instructions I've found are for implementing self-signed TLS certs, which isn't an option for me at this time. I'm thinking it would be something like:

      openssl req -new -nodes -out new-tls.csr -keyout new-tls-private.key

    But I wasn't sure if I was missing some options there, such as the -x509 option... -M
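
    That command is close; -x509 should be left out, since it makes openssl emit a self-signed certificate instead of a CSR. A sketch that writes only new files, so the production key and certs are never touched (key size and the subject fields are assumptions):

      # New key and CSR in fresh files
      openssl req -new -newkey rsa:2048 -nodes \
          -keyout new-tls-private.key -out new-tls.csr

      # Inspect the CSR before sending it to the CA
      openssl req -in new-tls.csr -noout -text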

    Read the article

  • Triple-Boot + 4 partition Limit

    - by dsimcha
    I just bought a new hard drive so that I could convert my XP-only machine into an XP/Ubuntu/Windows 7 triple-boot machine. Since the drive is absurdly huge (1 TB) I wouldn't mind throwing ReactOS into the mix, too. I just found out that master boot records are limited to 4 entries, meaning 4 primary partitions. I had Windows XP set up on my old drive as a boot partition, a program-files partition and a media partition. Since I really didn't want to install XP from scratch, I cloned this setup on my new drive. This leaves me one MBR partition entry for installing Windows 7, Ubuntu and ReactOS. I'd like to avoid having to reinstall XP like the plague, partly because it's supposed to be a safety net in case things go wrong with my other OSes, and because I've invested a lot of time getting it set up exactly the way I like it. Here are the options I've considered and why I don't like them:

    1. Install Windows 7 on my media partition. This would work, but I prefer to keep my media partition completely separate from any OS, so that I can reformat an OS partition without affecting it at all.
    2. Use Wubi or something similar to install Ubuntu in the same partition as something else. Again, this is brittle.
    3. Move all my media to a logical drive on an extended partition, and create another logical drive on that extended partition for Ubuntu. The problem here is that extended partitions are rather brittle: if you nuke one, it renders the rest useless.
    4. Put the old drive back in my computer and run XP off it, using the new one for the other OSes. The problem here is that the old drive is slower, uses extra power, generates extra heat, etc.

    Can anyone suggest any other possibilities that I may have overlooked?

    Read the article

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup:

      Private network vboxnet1 10.0.7.0/24
      1 host, Ubuntu desktop
      1 VM, Ubuntu server (VirtualBox)

    Addressing layout:

      HOST:             10.0.7.1
      VM:               10.0.7.101
      VM MAC NAMESPACE: 10.0.7.102

    On the VM, I ran the following commands:

      ip netns add mac                           # create a new namespace
      ip link add link eth0 mac0 type macvlan    # create a new macvlan interface
      ip link set mac0 netns mac

    In the mac namespace, inside the VM:

      ip link set lo up
      ip link set mac0 up
      ip addr add 10.0.7.102/24 dev mac0

    So that we basically end up with (like Inception?):

      +------------------------+
      | Host: 10.0.7.1         |
      |                        |
      | +--------------------+ |
      | | VM: 10.0.7.101     | |
      | |                    | |
      | | +----------------+ | |
      | | | NS: 10.0.7.102 | | |
      | | |                | | |
      | | +----------------+ | |
      | +--------------------+ |
      +------------------------+

    What works: ping between Host and VM; ping between NS and NS; dhclient from NS. What does not work: ping between NS and VM; ping between NS and Host. Where I started to go nuts: tcpdump on the host (the real machine) actually shows ARP requests AND replies; tcpdump in the NS shows ARP requests sent to the host; tcpdump on the VM makes the whole mess work (!) -- pings start to get answers when tcpdump is started on the VM?!? So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but can't figure out what exactly... Btw, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
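
    Not a confirmed diagnosis, but one property of macvlan fits the symptoms: sub-interfaces default to VEPA mode, where traffic between endpoints on the same parent NIC must hairpin through an external switch, and tcpdump's promiscuous mode can mask the problem. Bridge mode is worth trying; a sketch, using the same names as above:

      # (after removing the old interface) recreate the macvlan in bridge
      # mode so endpoints sharing the parent can reach each other locally
      ip link add link eth0 mac0 type macvlan mode bridge
      ip link set mac0 netns mac

    Two related facts: even in bridge mode a macvlan cannot talk to its parent interface's own address directly, and since each macvlan has its own MAC address, the VirtualBox adapter usually needs Promiscuous Mode set to Allow All for the extra MAC to pass.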

    Read the article

  • Httpd and LDAP Authentication not working for sub-pages

    - by DavisTasar
    I just recently installed a Nagios implementation, and I'm trying to get LDAP authentication working for httpd on Red Hat. (nagios.conf for Apache below, sanitized of course.)

      ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
      <Directory "/usr/local/nagios/sbin">
          #SSLRequireSSL
          Options ExecCGI
          AllowOverride none
          AuthType Basic
          AuthName "LDAP Authentication"
          AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
          AuthzLDAPAuthoritative off
          AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
          AuthLDAPBindPassword "myPassword"
          require valid-user
      </Directory>

      Alias /nagios "/usr/local/nagios/share"
      <Directory /usr/local/nagios/share>
          #SSLRequireSSL
          Options None
          AllowOverride none
          AuthBasicProvider ldap
          AuthType Basic
          AuthName "LDAP Authentication"
          AuthzLDAPAuthoritative off
          AuthLDAPURL "ldap://my.domain.controller:389/OU=Users,DC=my,DC=domain,DC=controller?sAMAccountName?sub?(objectClass=user)" NONE
          AuthLDAPBindDN "CN=NagiosAdmin,DC=my,DC=domain,DC=controller"
          AuthLDAPBindPassword "myPassword"
          require valid-user
      </Directory>

    Now, the initial authentication works, so when you first hit the page you can log in just fine. However, when you go anywhere else it prompts you for authentication again, fails (re-prompting), and logs this error:

      [Mon Oct 21 14:46:23 2013] [error] [client 172.28.9.30] access to /nagios/cgi-bin/statusmap.cgi failed, reason: verification of user id '<myuseraccount>' not configured, referer: http://<nagiosserver>/nagios/side.php

    I'm almost certain it's a simple flag or option, but I just can't find it, and I don't have a lot of experience working with Apache. Any assistance or help would be greatly appreciated.
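
    One detail stands out, offered as a guess rather than a confirmed fix: the cgi-bin block has no AuthBasicProvider line, while the share block does, and "verification of user id ... not configured" is what Apache 2.2's mod_auth_basic reports when no authentication provider is wired up for the request. The sketch would be to add, inside <Directory "/usr/local/nagios/sbin">:

      AuthBasicProvider ldap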

    Read the article

  • DirectAdmin CentOS 4 server has a virus

    - by Rogier21
    Hello all, I have a problem with a web server that runs CentOS 4 with DirectAdmin. For a few weeks, some websites hosted on it have not been redirecting properly from search engines: visitors are redirected to a malware site, resulting in a ban from Google. I have now run three virus scanners:

      ClamAV:      didn't find anything
      BitDefender: found 2-3 files with a JS infection; deleted them
      AVG:         finds lots of files, but doesn't have an option to clean;
                   the viruses it finds are JS/Redir and JS/Dropper

    The strange thing is: website a (www.aa.com) does not have any infected files (I have gone through all the files manually; it is a custom PHP app, nothing special) but still shows the same virus behavior. Website b (www.bb.com) was the only one with infected files. I deleted all those files and suspended the account, but no luck, still the same problem. I do get log entries on the website from the search engines, so the DNS entries have not been changed. I have gone through the httpd files but cannot find anything. Where can I start looking for this?
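
    Since the redirect fires only for search-engine visitors, a common pattern is injected, obfuscated PHP or a rogue .htaccess rule keyed on the referer. A place to start grepping, as a sketch (DirectAdmin's usual docroot layout is assumed):

      # Obfuscated injections very often use eval on base64 payloads
      grep -rl 'eval(base64_decode' /home/*/domains/*/public_html

      # Conditional redirects frequently sit in .htaccess, keyed on HTTP_REFERER
      find /home -name .htaccess -exec grep -l 'HTTP_REFERER' {} \;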

    Read the article

  • nginx virtual hosts are not working, all vhosts goes to the default one

    - by Adirael
    Hello, I just did a clean install of nginx + php-fpm on a VPS running Ubuntu 10.10. nginx is serving and PHP is working fine, but I'm not able to add vhosts to it. Well, I can add them, but only one works; the rest go to this first one. This is my first vhost, for host1:

      server {
          listen 80;
          server_name host1;
          access_log /var/log/nginx/host1.log;
          error_log /var/log/nginx/host1.error.log;

          location / {
              root /var/www/vhosts/host1/;
              index index.html index.htm index.php;
          }

          location ~ \.php$ {
              include /etc/nginx/fastcgi_params;
              #fastcgi_pass 127.0.0.1:9000;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
              fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host1/$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_script_name;
              fastcgi_index index.php;
          }
      }

    And the second one, for host2:

      server {
          listen 80;
          server_name host2;
          access_log /var/log/nginx/host2.log;
          error_log /var/log/nginx/host2.error.log;

          location / {
              root /var/www/vhosts/host2/;
              index index.html index.htm index.php;
          }

          location ~ \.php$ {
              include /etc/nginx/fastcgi_params;
              #fastcgi_pass 127.0.0.1:9000;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
              fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host2/$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_script_name;
              fastcgi_index index.php;
          }
      }

    The problem is, when I go to http://host1 everything is fine, but http://host2 just shows host1! I don't have Apache installed and everything comes from repos. Any pointers?
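
    The blocks themselves look sane, so a sketch of the usual first checks: make sure both files are actually loaded, and that host2 really resolves to this VPS and arrives with the expected Host header (paths assume the Debian/Ubuntu sites-enabled layout):

      # Both vhosts present and config syntactically OK?
      ls -l /etc/nginx/sites-enabled/
      nginx -t && service nginx reload

      # Does a request with Host: host2 hit the right server block?
      curl -s -H 'Host: host2' http://127.0.0.1/ | head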

    Read the article

  • My NTFS Partition keeps becoming "unusable" on Ubuntu, Any Ideas?

    - by gopherman
    I just purchased a new 2 TB external Seagate drive. My main system uses both Windows and Ubuntu, so I am pretty much stuck with keeping the drive as NTFS. I have done this without any problems before, but since I got this new drive I have been having issues. When I first load up Ubuntu the drive mounts and runs fine; after an unspecified amount of time I start getting input/output errors when accessing it. When I go to the Disk Utility I get a message stating the drive is "Unknown or Unused". If I disconnect and reconnect the drive, or reboot, everything is fine again. There are no errors coming up in S.M.A.R.T., and the drive seems to work fine under Windows. Any thoughts?
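
    A sketch of what to check when it next drops out: ntfs-3g aborts access when it sees an inconsistency, and USB autosuspend is another known culprit for external drives.

      # The kernel log usually names the device and reason at the moment of failure
      dmesg | tail -n 50

      # With the volume unmounted, clear the NTFS dirty flag and schedule a
      # chkdsk for the next Windows boot (assuming the drive is /dev/sdb1)
      sudo ntfsfix /dev/sdb1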

    Read the article

  • Lots of files being used by blank web page. What are they?

    - by byronyasgur
    I am trying to optimise a website and I was using the network waterfall facility in Google Chrome. When I looked at the results there were lots of files which I didn't recognise. I first thought they might be something to do with Google Chrome itself, so I put a blank HTML file on my desktop and checked, but there was nothing in the waterfall except the file itself. So I put a blank file on my server and I got the output below. What are all these files? Are they all necessary? Is this normal, and do I need to be concerned? My hosting provider has always been excellent in every regard that I'm aware of. My host is shared hosting, using cPanel, based on a LAMP server. I also note that a couple of those files have problems, but I have no idea how to fault-find that or whether it's a concern. EDIT: I have cleared the cache, so I don't think it's a browser-cache issue.

    Read the article

  • CoovaAP router does not get DHCP settings from modem

    - by Dan Sosedoff
    I'm having some serious trouble making a CoovaAP-powered router get network settings from the modem. The wireless router works in captive-portal mode (so it handles user login on the same device), but the DNS settings and the router's connection IP are not being set as they are supposed to be via DHCP on the modem. That makes installation at other locations really complicated, because you have to go and set everything manually. For now I use Google's DNS servers. What's the way to make the router configure itself via the modem's DHCP?

    Read the article

  • Create a video stream from an image

    - by skerit
    I have a network camera that is only able to give me an image, not a video stream. In my home network this image is at http://192.168.1.16/loginfree.jpg I'm able to get this image a few times per second. I would now like to be able to serve it up as a video stream so I can use it in zoneminder. Any idea as to how I do that? I've messed around with named pipes and ffmpeg's image2pipe, but I can't get it to work properly.
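
    One way to wire this up with image2pipe, as a sketch; the output target is an assumption (ZoneMinder can consume different source types depending on how the monitor is configured):

      #!/bin/sh
      # Fetch the snapshot in a loop and feed the JPEGs to ffmpeg as one stream
      while true; do
          curl -s http://192.168.1.16/loginfree.jpg
          sleep 0.2                  # roughly 5 fps
      done | ffmpeg -f image2pipe -c:v mjpeg -i - \
                    -f mpegts udp://127.0.0.1:1234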

    Read the article

  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/cpu-wise, this script is pretty lightweight). Screen continues to run, but at the end of the output of the perl script I see "Killed" and it has dropped back to a prompt. How do I go about testing what is whacking the damn thing? I've checked crontab, nothing in there that would kill random/non-random processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my mac at home, it will run well over 24 hours without a problem). The server is running Ubuntu version something or other, I can look that up if it matters.
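
    The bare "Killed" is the shell reporting SIGKILL, and the usual unattended sender on Linux is the kernel's OOM killer, which logs to the kernel ring buffer rather than to application logs. A couple of checks, as a sketch:

      # OOM kills are recorded by the kernel, not the application
      dmesg | grep -iE 'killed process|out of memory'
      grep -i oom /var/log/kern.log /var/log/syslog

      # Watch the script's memory over time; a slow leak would fit the
      # 2-8 hour survival pattern (and the Mac at home lasting longer)
      watch -n 60 'ps -o pid,vsz,rss,etime,args -C perl'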

    Read the article

  • vim: remove previous code indentation and convert to another

    - by ramgorur
    I have a C project with multiple files (more than 100). The code is written in Whitesmiths style, but I want to change it to K&R-style indentation. Is it possible to do this using vim in an automated way? For example, I have an emacs-lisp script to achieve this:

      (progn
        (find-file "{}")
        (mark-whole-buffer)
        (setq indent-tabs-mode nil)
        (untabify (point-min) (point-max))
        (indent-region (point-min) (point-max) nil)
        (save-buffer))

    I was wondering if there is a similar trick that could be done with vim.
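
    Worth noting that vim's = operator only re-indents lines; it will not move braces, so on its own it cannot turn Whitesmiths into K&R. GNU indent does restyle braces, and vim can still do a batch indentation pass in silent ex mode. Both sketched below, writing to copies so the originals survive a bad run (the src/*.c glob is an assumption):

      # GNU indent rewrites brace placement to K&R
      for f in src/*.c; do indent -kr "$f" -o "$f.kr"; done

      # vim in batch mode, re-indenting in place (indentation only)
      for f in src/*.c; do
          vim -es -c 'set cindent shiftwidth=4 expandtab' \
              -c 'normal! gg=G' -c 'wq' "$f"
      done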

    Read the article

  • rdisk value in boot.ini maps to which disk?

    - by MA1
    Hi all, following are the contents of a sample boot.ini:

      [boot loader]
      timeout=30
      default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
      [operating systems]
      multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /NOEXECUTE=OPTIN /FASTDETECT
      multi(0)disk(0)rdisk(0)partition(2)\WINNT="Windows 2000 Professional" /fastdetect
      multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="Microsoft Windows XP Home Edition" /NOEXECUTE=OPTIN /FASTDETECT

    The rdisk value gives the physical disk number. So if I have three hard disks, say /dev/sda, /dev/sdb and /dev/sdc, how do I know which disk (/dev/sda, /dev/sdb or /dev/sdc) is rdisk(0), which is rdisk(1), and so on? Regards,

    Read the article

  • Recurring Apache 2.0.52 error on CentOS 4 - 'could not create `rewrite_log_lock`'

    - by warren
    I have been seeing a recurring issue on my web server:

      [Sun May 16 03:10:19 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock
      Configuration Failed
      [Sun May 16 04:10:05 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock
      Configuration Failed
      [Sun May 16 05:10:04 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock
      Configuration Failed
      [Sun May 16 05:17:13 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock
      Configuration Failed

    So far, the only fix I have found when this happens is to reboot my server. This is non-ideal :-\ Restarting httpd does not clear the error. df indicates I have 20+ gigs free, and top and free both report 800+ megs (or 1.2 gigs) free:

      > df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/simfs             40G   18G   23G  44% /

      > free
                   total       used       free     shared    buffers     cached
      Mem:       1474560     300832    1173728          0          0          0
      -/+ buffers/cache:     300832    1173728

    Any ideas on why this would recur, and how to prevent/fix it?
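
    "No space left on device" from a lock create on a box with plenty of disk normally means the SysV semaphore pool is exhausted rather than the filesystem (/dev/simfs suggests an OpenVZ/Virtuozzo guest, where semaphore limits are often low). Leaked Apache semaphores can usually be cleared without a reboot; a sketch, assuming httpd runs as user apache:

      # List current semaphore arrays and their owners
      ipcs -s

      # Remove the ones held by apache, then restart httpd
      ipcs -s | awk '/apache/ {print $2}' | xargs -r -n1 ipcrm sem
      service httpd restart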

    Read the article

  • Logging violations of rules in limits.conf

    - by PaulDaviesC
    I am trying to log the details of programs that fail due to the limits defined in limits.conf. My initial plan was to do it using the audit system: the idea was to track failures of the system calls governed by the limits in limits.conf. However, the problem with this approach is that it cannot track violations of the CPU-time limit, since that violation does not involve a failed system call. In the CPU-time case, the program that exceeds the limit is simply delivered a SIGXCPU. So my question is: how should I go about logging programs that exceed their CPU time? Also, are there any limits.conf-specific logs available?
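
    One workable pattern, sketched as a hypothetical wrapper rather than anything limits.conf provides itself: launch the program through a script that inspects its exit status, since a process killed by SIGXCPU exits with status 128+24 by shell convention:

      #!/bin/sh
      # rlimit-log.sh (hypothetical): run a command and log SIGXCPU deaths
      "$@"
      status=$?
      if [ "$status" -eq 152 ]; then    # 128 + 24 (SIGXCPU on x86 Linux)
          logger -t rlimit-log "CPU time limit exceeded: $*"
      fi
      exit "$status"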

    Read the article
