Search Results for 'linux'

  • IPtables: DNAT not working

    - by GetFree
    In a CentOS server I have, I want to forward port 8080 to a third-party webserver, so I added this rule:

        iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination thirdparty_server_ip:80

    But it doesn't seem to work. In an effort to debug the process, I added these two LOG rules:

        iptables -t mangle -A PREROUTING -p tcp --src my_laptop_ip --dport ! 22 -j LOG --log-level warning --log-prefix "[_REQUEST_COMING_FROM_CLIENT_] "
        iptables -t nat -A POSTROUTING -p tcp --dst thirdparty_server_ip -j LOG --log-level warning --log-prefix "[_REQUEST_BEING_FORWARDED_] "

    (The --dport ! 22 part is there just to filter out SSH traffic so that my log file doesn't get flooded.) According to this page, the mangle/PREROUTING chain is the first one to process incoming packets and the nat/POSTROUTING chain is the last one to process outgoing packets. Since the nat/PREROUTING chain comes in the middle of the other two, the three rules should do this:

    1. The rule in mangle/PREROUTING logs the incoming packets.
    2. The rule in nat/PREROUTING modifies the packets (it changes the destination IP and port).
    3. The rule in nat/POSTROUTING logs the modified packets about to be forwarded.

    Although the first rule does log incoming packets coming from my laptop, the third rule doesn't log the packets that are supposed to be modified by the second rule. It does, however, log packets produced on the server itself, so I know the two LOG rules are working properly. Why are the packets not being forwarded, or at least why are they not being logged by the third rule? PS: there are no other rules besides those three. All other chains in all tables are empty and with policy ACCEPT.
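
    A DNAT rule by itself is often not enough: the rewritten packets still have to be routed onward, pass the FORWARD chain, and find a return path. A hedged sketch of a complete port-forward, assuming replies should be masqueraded back through this server (thirdparty_server_ip is the same placeholder as above):

        # let the kernel route packets between interfaces
        sysctl -w net.ipv4.ip_forward=1

        # rewrite the destination of incoming port-8080 traffic
        iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination thirdparty_server_ip:80

        # ensure the filter table does not drop the forwarded flow
        iptables -A FORWARD -p tcp -d thirdparty_server_ip --dport 80 -j ACCEPT

        # rewrite the source so replies come back via this host
        iptables -t nat -A POSTROUTING -p tcp -d thirdparty_server_ip --dport 80 -j MASQUERADE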

  • cp command force

    - by user121196
    There's already an xxx dir in /home/yyy, and I'm trying to overwrite it with:

        cp -fr ../xxx /home/yyy/

    It doesn't work; it still prompts me to overwrite the individual files. How do I fix it?
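
    On many distributions the root account aliases cp to cp -i, which re-adds the interactive prompt no matter what flags follow. A hedged sketch of two common ways around the alias (assuming that alias is indeed the cause):

        # bypass any shell alias for this one invocation
        \cp -fr ../xxx /home/yyy/

        # or call cp through `command`, which also skips aliases
        command cp -fr ../xxx /home/yyy/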

  • df shows negative values for used

    - by GriffinHeart
    Hey everyone, first question around here. I have a CentOS 5.2 server, and running df -h I get this:

        Filesystem                       Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00  672G -551M  638G   0% /
        /dev/hda1                         99M   12M   82M  13% /boot
        tmpfs                            2.0G     0  2.0G   0% /dev/shm

    That space wasn't even near 10% usage the last time it showed a correct value; I'm at a loss as to what's going on. Thanks.
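
    A negative "Used" figure usually points at miscounted block-accounting metadata in the filesystem, which a forced fsck can recount. A hedged sketch, assuming the root filesystem is ext3 and can be checked from a rescue environment (never fsck a mounted filesystem):

        # from a rescue boot, force a full check of the logical volume
        e2fsck -f /dev/mapper/VolGroup00-LogVol00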

  • Debian wheezy keyboard shortcut for both opening and closing a terminal

    - by Peter
    I recently installed tilda and I would like to open and close it with the same keyboard shortcut. I wrote a little something in bash that closes tilda if it is running and opens it when there is no such process in ps -ef. It looks like this:

        a=$(ps -ef | fgrep -i tilda | cut -d' ' -f4 | head -1); if [ $a ]; then kill $a; else tilda; fi

    It seems to work (at least partially) when I run it in a terminal, but when I assign this command to a specific keyboard shortcut (for example Alt+1) it does nothing. Any suggestions? By the way, is it possible to assign this shortcut to the '`' key, like in Quake?
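
    Shortcut daemons usually exec the command directly rather than through a shell, so pipelines and if/else won't run unless wrapped. A hedged sketch: put the toggle in a small script (pgrep and pkill are standard utilities; toggle-tilda.sh is a hypothetical filename) and bind the shortcut to the script's full path instead:

        #!/bin/sh
        # toggle-tilda.sh: kill tilda if it is running, start it otherwise
        if pgrep -x tilda > /dev/null; then
            pkill -x tilda
        else
            tilda &
        fi

    Mark it executable with chmod +x and point the Alt+1 binding at it.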

  • How to append to a file as sudo?

    - by obvio171
    I want to do:

        echo "something" >> /etc/config_file

    But since only the root user has write permission to this file, I can't. This:

        sudo echo "something" >> /etc/config_file

    also doesn't work, because the redirection is performed by my own shell, not by the sudo'd command. Is there any way to append to a file in that situation without having to open it with a sudo'd editor and append the new content by hand?
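
    The standard trick is to let a root-owned process do the writing while the unprivileged shell only feeds it a pipe; tee is the usual choice and a well-documented approach:

        # tee -a runs under sudo and appends its stdin to the target file
        echo "something" | sudo tee -a /etc/config_file > /dev/null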

  • install grub on disk image

    - by Dima
    I have a disk image with 2 partitions:

    1. Partition 1 has a cramfs file system (read only). This partition contains all the system files of the OS.
    2. Partition 2 has an ext3 file system. This partition holds only configuration files that may be changed.

    How can I install the GRUB1 boot loader into the image's MBR? I tried copying the first 446 bytes of my hard disk and copying the GRUB files to the /boot directory on the 1st (cramfs) partition. I cannot use grub-install because I have a disk image, not the disk itself. Any ideas?
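
    grub-install wants a real block device, but GRUB legacy's own shell can be pointed at an image file through a device map. One caveat: GRUB legacy cannot read cramfs, so the stage1/stage2 files may need to live on the ext3 partition instead. A hedged sketch, assuming the image is disk.img and the stage files sit in /boot/grub on the second (ext3) partition:

        # map the image file to a fake BIOS drive for the grub shell
        echo '(hd0) /path/to/disk.img' > device.map
        printf 'root (hd0,1)\nsetup (hd0)\nquit\n' | grub --device-map=device.map --batch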

  • How to speed up apache

    - by Zen_silence
    We have a server with 8 cores, 16GB of RAM and RAID 0 SAS 10K drives. Our goal is to use this to serve a fairly simple PHP application quickly. We have tested all the other components, and we think we have narrowed it down to Apache being our bottleneck. I am no Apache guru; I have done some research and tested a couple of things, but when I launch 100 concurrent connections against the server with JMeter, the first 10-20 come back quickly (30-100ms) while the rest take between 1000ms and 3000ms. Anyone have any ideas on what to change in our Apache config to make this faster? Right now it's a vanilla install of Apache.
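
    A vanilla prefork build often starts with too few spare workers, so a burst of connections queues behind process creation. A hedged sketch of the prefork knobs to experiment with (the directives are standard Apache 2.2; the values are illustrative starting points for 16GB of RAM, not measured recommendations):

        <IfModule prefork.c>
            StartServers         50
            MinSpareServers      25
            MaxSpareServers      75
            ServerLimit         256
            MaxClients          256
            MaxRequestsPerChild 4000
        </IfModule>
        # keep KeepAlive short so workers free up quickly under load
        KeepAlive On
        KeepAliveTimeout 2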

  • Clear / Flush cached memory

    - by TheDave
    I have a small VPS with 6GB RAM hosting a couple of websites. Recently I have noticed that my cached memory size is quite high - see below:

        Cpu(s):  0.1%us,  0.1%sy,  0.0%ni, 99.1%id,  0.0%wa,  0.2%hi,  0.4%si,  0.0%st
        Mem:   6113256k total,  5949620k used,   163636k free,   398584k buffers
        Swap:  1048564k total,      104k used,  1048460k free,  3586468k cached

    While investigating whether there is some way to have this flushed or cleared, I stumbled upon this command:

        sync; echo 3 > /proc/sys/vm/drop_caches

    I read it could be useful to add this as a cron job. Is this method recommended, or could it lead to potential problems? The only concern I have is that I run one Magento installation on Memcached - could this have any negative effects on it? I am certainly not a pro, so I would very much appreciate some expert advice. PS: My VPS runs CentOS 5 x64 and I have WHM + NGINX installed.
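
    For reference, the value written to drop_caches selects what gets discarded; all three variants are documented kernel behavior:

        sync                                  # flush dirty pages to disk first
        echo 1 > /proc/sys/vm/drop_caches     # free page cache only
        echo 2 > /proc/sys/vm/drop_caches     # free dentries and inodes
        echo 3 > /proc/sys/vm/drop_caches     # free both

    Note that this cache is memory the kernel hands back on demand anyway, so dropping it routinely mostly forces re-reads from disk; Memcached's own allocations are userspace memory and are untouched by it.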

  • Name of log file where boot process is logged

    - by ant2009
    Hello, CentOS 5.3. After booting up, I am wondering: what is the name of the log file that records whether all services were successfully loaded or not? For example, when the computer boots you get a list of starting services, and each can be OK or FAILED. Is there a log file where this information is kept? I had a look in /var/log/ but am not sure which file contains the information I need. Many thanks for any advice.
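
    On RHEL/CentOS 5 the init scripts' OK/FAILED lines typically land in /var/log/boot.log, with kernel-stage messages in dmesg and /var/log/messages. A hedged sketch for hunting failures, assuming those default log locations:

        grep -i failed /var/log/boot.log
        dmesg | less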

  • Why does CPU processing time matter when compared to real wall clock time?

    - by PeanutsMonkey
    I am running the command time 7zr a -mx=9 sample.7z sample.log to gauge how long it takes to compress a file larger than 1GB. The results I get are as follows:

        real    10m40.156s
        user    17m38.862s
        sys      0m5.944s

    I have a basic understanding of the difference, but I don't understand how this plays a role in the time it takes to compress the file. For example, should I be looking at real or at user + sys?
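
    The fact that user exceeds real is the clue: 7zr ran on several cores at once, so it accumulated more CPU-seconds than wall-clock seconds. real is what you wait for; user + sys is what the CPUs worked. A hedged sketch to see the effect (-mmt is a real 7z multithreading switch):

        time 7zr a -mx=9 -mmt=1 single.7z sample.log    # one thread: user stays close to real
        time 7zr a -mx=9 -mmt=2 multi.7z  sample.log    # two threads: user can exceed real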

  • Restrict SSH user to connection from one machine

    - by Jonathan
    During set-up of a home server (running Kubuntu 10.04), I created an admin user for performing administrative tasks that may require an unmounted home. This user has a home directory on the root partition of the box. The machine has an internet-facing SSH server, and I have restricted the set of users that can connect via SSH, but I would like to restrict it further by making admin accessible only from my laptop (or perhaps only from the local 192.168.1.0/24 range). I currently have only an AllowGroups ssh-users with myself and admin as members of the ssh-users group. What I want is something that works like you might expect this setup to work (but it doesn't):

        $ groups jonathan
        ... ssh-users
        $ groups admin
        ... ssh-restricted-users
        $ cat /etc/ssh/sshd_config
        ...
        AllowGroups ssh-users admin@192.168.1.*
        ...

    Is there a way to do this? I have also tried this, but it did not work (admin could still log in remotely):

        AllowUsers admin@192.168.1.* *
        AllowGroups ssh-users

    with admin a member of ssh-users. I would also be fine with only allowing admin to log in with a key and disallowing password logins, but I could find no general setting for sshd; there is a setting that requires root logins to use a key, but not one for ordinary users.
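
    The per-user key-only requirement does exist, via a Match block (supported since OpenSSH 4.4, so available on 10.04). A hedged sketch for the end of /etc/ssh/sshd_config - the negated-address form is worth verifying with sshd -t before reloading:

        # admin may never authenticate with a password
        Match User admin
            PasswordAuthentication no

        # or: refuse admin's passwords only outside the LAN
        Match User admin Address *,!192.168.1.0/24
            PasswordAuthentication no

    Criteria on one Match line are ANDed together, and Match blocks must come after all global settings.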

  • Copy files from subdirectories into one directory

    - by Derek Organ
    OK, I have a bunch of files in this file structure:

        /backup/daily/database1/database1-2011-01-01.sql
        /backup/daily/database1/database1-2011-01-02.sql
        /backup/daily/database1/database1-2011-01-03.sql
        /backup/daily/database1/database1-2011-01-04.sql
        /backup/daily/database1/database1-2011-01-05.sql
        /backup/daily/database1/database1-2011-01-06.sql
        /backup/daily/database1/database1-2011-01-07.sql
        /backup/daily/anotherdb/anotherdb-2011-01-01.sql
        /backup/daily/anotherdb/anotherdb-2011-01-02.sql
        /backup/daily/anotherdb/anotherdb-2011-01-03.sql
        /backup/daily/anotherdb/anotherdb-2011-01-04.sql
        /backup/daily/anotherdb/anotherdb-2011-01-05.sql
        /backup/daily/anotherdb/anotherdb-2011-01-06.sql
        /backup/daily/anotherdb/anotherdb-2011-01-07.sql
        /backup/daily/stuff/stuff-2011-01-01.sql
        /backup/daily/stuff/stuff-2011-01-02.sql
        /backup/daily/stuff/stuff-2011-01-03.sql
        /backup/daily/stuff/stuff-2011-01-04.sql
        /backup/daily/stuff/stuff-2011-01-05.sql
        /backup/daily/stuff/stuff-2011-01-06.sql
        /backup/daily/stuff/stuff-2011-01-07.sql

    And there are lots, lots more. Ultimately I want to import all the 2011-01-07.sql files into my MySQL database. This works for one:

        mysql -u root -ppassword < /backup/daily/database1/database1-2011-01-07.sql

    That will nicely restore that database from the backup file. I want to run a process that does this for all databases. So my plan is to first cp all 2011-01-07 sql files into a tmp dir, e.g.:

        cp /backup/daily/*/*2011-01-07*.sql /tmp/all

    Unfortunately that command isn't working; I get an error:

        cp: cannot stat ..... No such file or directory

    So can you guys help me out with this? For bonus points, if you can tell me how to do the next step - importing all the databases in one command, doing one at a time - that would be great too. I really want to do these as two separate steps because I need to delete a few sql files manually from the tmp dir before I run the restore command. So I need: 1) a command to copy all 2011-01-07 sql files to a tmp dir, and 2) a command to import all the files in that dir into MySQL. I know it's possible to do it in one, but for lots of reasons I'd really prefer to do it in two steps.
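
    find is more robust than a nested glob here, and a shell loop covers the import. A hedged sketch for both steps (assuming /tmp/all already exists and the credentials are as shown in the question):

        # step 1: collect every 2011-01-07 dump into one directory
        find /backup/daily -name '*2011-01-07*.sql' -exec cp {} /tmp/all/ \;

        # step 2: after pruning unwanted files, import the rest one at a time
        for f in /tmp/all/*.sql; do
            mysql -u root -ppassword < "$f"
        done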

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this:

        /dev/sda1 - Windows 8 (ntfs)
        /dev/sda2 - Ubuntu / (ext4)
        /dev/sda3 - Ubuntu home (ext4)
        /dev/sda5 - swap
        /dev/sda6 - Shared data partition (exfat)

    (First off, yes, I do have the exfat libraries installed on Ubuntu.) I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them (replacing the ones on the shared partition). When I boot into Windows, the files appear unchanged - exactly like they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance.

    Edit - command output follows.

    lsblk:

        NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sda      8:0    0 465.8G  0 disk
        +-sda1   8:1    0 165.1G  0 part
        +-sda2   8:2    0  21.3G  0 part /
        +-sda3   8:3    0  98.9G  0 part /home
        +-sda4   8:4    0     1K  0 part
        +-sda5   8:5    0   7.8G  0 part [SWAP]
        +-sda6   8:6    0 172.7G  0 part /mnt/shared_data

    /etc/fstab:

        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # /dev/sda2
        UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1
        # /dev/sda3
        UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2
        # /dev/sda5
        UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0
        # /dev/sda6
        UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3

    /etc/mtab:

        /dev/sda2 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sda3 /home ext4 rw 0 0
        /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0

  • Apache - The name

    - by Josh
    I am working on a migration to a newer virtualized server. The old one has Apache 2.2.4, according to the old server's phpinfo(). The new one, fully up to date, has 2.2.3. How can this be, assuming no trickery is involved? The old one is years old. A lot of the guides I reference use apache2 in folder names and many of the conventions, while the newest version of things, as I understand it, is called httpd. Did Apache change the name from what it originally was? (i.e. break the web server component into its own project called httpd - I realize the original daemon was probably still called httpd.)
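
    The version gap is usually distribution packaging rather than the project: RHEL/CentOS ship the package and daemon as httpd (their 2.2.3 was a long-lived base with fixes backported), while Debian/Ubuntu package the same server as apache2. A quick way to compare what's actually installed, sketched for both families:

        httpd -v            # RHEL/CentOS naming
        apache2 -v          # Debian/Ubuntu naming
        rpm -q httpd        # package version on RHEL/CentOS
        dpkg -l apache2     # package version on Debian/Ubuntu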

  • How to change a physical partition system to LVM?

    - by Daniel Hernández
    I have a server running Debian that has 3 physical partitions covering the whole disk: boot, root and swap. Now I want to replace those partitions with LVM partitions. I know how to install Debian with LVM from the start, but in this case I can't install the system from scratch, because the provider gave me the server with remote access only and the system already installed this way. How can I change those partitions using only an SSH connection and, possibly, another remote server where I can put some temporary data?
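
    There is no in-place converter, so the usual approach is to carve out (or temporarily borrow) space, build the LVM stack there, copy the system over, and re-point the bootloader. The LVM half of that is sketched below, under the assumption that a spare partition /dev/sda4 exists to seed the volume group; the names and sizes are illustrative:

        pvcreate /dev/sda4              # label the partition as an LVM physical volume
        vgcreate vg0 /dev/sda4          # create a volume group on it
        lvcreate -n root -L 10G vg0     # logical volume for /
        lvcreate -n swap -L 1G  vg0     # logical volume for swap
        mkfs.ext3 /dev/vg0/root
        mkswap /dev/vg0/swap

    The risky parts - copying / while running from it and rebuilding GRUB and the initrd to find the root LV - are where a rescue image or the provider's console becomes near-essential.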

  • Making libmagic/file detect .docx files

    - by Jonatan Littke
    As seen elsewhere, docx, xlsx and pptx files are ZIPs. When they are uploaded to my web application, file (via libmagic and python-magic) detects them as ZIP. I store the contents of the file as a blob in the database, but naturally I don't want to trust the user about the file type. So I would like to trust file and automatically generate a filename during download. I know one can modify /etc/magic, but the format (magic(5)) is way too complicated for me. I found a bug report on the issue at Debian bugs, but since it's from 2008 it doesn't seem like it will be fixed any time soon. I guess my only other alternative is to indeed trust the user (but still store the contents as a blob) and only check the file extension of the file name. This way I can disallow some extensions and allow others, and when the user re-downloads his file, he can have it the way he uploaded it. But this solution is insecure if the file is shared with others, since you can simply rename a file to get it past the upload check. Any ideas? Lastly, I found a list of magic numbers for docx etc., but I'm unable to convert these into the magic(5) format.
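
    Since the distinguishing content sits inside the ZIP rather than in its first bytes, one workaround is to peek at the archive listing instead of the magic number. A hedged sketch (unzip -l is standard, and the marker paths reflect how OOXML packages are laid out):

        # classify an OOXML upload by its internal directory structure
        if unzip -l "$f" | grep -q 'word/document.xml'; then
            echo docx
        elif unzip -l "$f" | grep -q 'xl/workbook.xml'; then
            echo xlsx
        elif unzip -l "$f" | grep -q 'ppt/presentation.xml'; then
            echo pptx
        fi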

  • Keepalived for more than 20 virtual addresses

    - by cvaldemar
    I have set up keepalived on two Debian machines for high availability, but I've run into the maximum number of virtual IPs I can assign to my vrrp_instance. How would I go about configuring and failing over 20+ virtual IPs? This is the, very simple, setup:

        LB01: 10.200.85.1
        LB02: 10.200.85.2
        Virtual IPs: 10.200.85.100 - 10.200.85.200

    Each machine is also running Apache (later Nginx) binding on the virtual IPs for SSL client certificate termination and proxying to backend webservers. The reason I need so many VIPs is the inability to use VirtualHost on HTTPS. This is my keepalived.conf:

        vrrp_script chk_apache2 {
            script "killall -0 apache2"
            interval 2
            weight 2
        }
        vrrp_instance VI_1 {
            interface eth0
            state MASTER
            virtual_router_id 51
            priority 101
            virtual_ipaddress {
                10.200.85.100
                .
                . all the way to
                .
                10.200.85.200
            }
        }

    An identical configuration is on the BACKUP machine, and it's working fine, but only up to the 20th IP. I have found a HOWTO discussing this problem; basically, they suggest having just one VIP and routing all traffic "via" this one IP, and "all will be well". Is this a good approach? I'm running pfSense firewalls in front of the machines. Quote from the above link:

        ip route add $VNET/N via $VIP

    or

        route add $VNET netmask w.x.y.z gw $VIP

    Thanks in advance. EDIT: @David Schwartz said it would make sense to add a route, so I tried adding a static route to the pfSense firewall, but it didn't work as I expected it would. pfSense route:

        Interface: LAN
        Destination network: 10.200.85.200/32 (virtual IP)
        Gateway: 10.200.85.100 (floating virtual IP)
        Description: Route to VIP .100

    I also made sure I had packet forwarding enabled on my hosts:

        $ cat /etc/sysctl.conf
        net.ipv4.ip_forward=1
        net.ipv4.ip_nonlocal_bind=1

    Am I doing this wrong? I also removed all the VIPs from keepalived.conf so it only fails over 10.200.85.100.
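
    The 20-address ceiling comes from how many addresses keepalived will carry in a VRRP advertisement; its virtual_ipaddress_excluded block exists for exactly this case - addresses listed there fail over with the instance but are not advertised. A hedged sketch (the block name is real keepalived syntax; the split point is arbitrary):

        vrrp_instance VI_1 {
            interface eth0
            state MASTER
            virtual_router_id 51
            priority 101
            virtual_ipaddress {
                10.200.85.100          # advertised VIPs (20 max) go here
            }
            virtual_ipaddress_excluded {
                10.200.85.101          # remaining VIPs, moved but not advertised
                10.200.85.102
                # ... up to 10.200.85.200
            }
        }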

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: the private shopping website GILT sends periodic update emails from giltgroupe.bounce.ed10.net, yet all of the mails are signed with the domain keys of giltgroupe.com:

        mailed-by    giltgroupe.bounce.ed10.net
        signed-by    giltgroupe.com

    My story: I couldn't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with postfix. If I init the dk-filter service with the following arguments:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    dk-filter signs with domain x.com (d=x.com). If I change the daemon arguments to:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows:

        *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM, and it works perfectly; however, DK doesn't seem to work. I don't have any problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience or advice on how to sign emails from multiple domains with a single chosen domain's key?

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes ends up in a funny state and I cannot mount it.

    * Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    The question is, how do I make the device active again (using mdadm, I presume)? (Other times it's all right (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

        /dev/md_d0 /opt ext4 defaults 0 0

    So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?) This is an Ubuntu 9.10 workstation. Background info about my RAID setup in this question.

    Edit: My /etc/mdadm/mdadm.conf looks like this; I've never touched this file, at least by hand:

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR <my mail address>

        # definitions of existing MD arrays

        # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200

    In /proc/partitions the last entry is md_d0, at least now after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)

    Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:

        ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]

    and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!
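
    For anyone hitting the same state, the resolution above condenses to a few standard mdadm invocations; a hedged sketch (tee -a assumes the conf file has no ARRAY line for this array yet):

        # record the array's identity so it is assembled consistently at boot
        mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf

        # reactivate without rebooting: stop the half-assembled device, then rescan
        sudo mdadm --stop /dev/md_d0
        sudo mdadm --assemble --scan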

  • Removing DS_Store files and variants?

    - by Ron Gejman
    I am running an Ubuntu 10.04.1 LTS server. Frequently I open files on the server from my Mac over AFP. Inevitably this creates .DS_Store files on the server (although, for some reason, they are named :2eDS_Store). However, it also creates variants on DS_Store files. These variants are often named similarly to other files in that directory, e.g.:

        ~$ ls
        total 60K
        -rw-r--r-- 1 tarakhovsky  16K 2010-11-30 18:28 :2eDS_Store
        drwx--S--- 4 tarakhovsky 4.0K 2010-11-08 13:58 :2eTemporaryItems/
        lrwxrwxrwx 1 tarakhovsky   15 2010-10-19 17:44 bigdisk -> /media/bigdisk//
        ...
        drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-03 18:24 Temporary Items/
        drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-30 01:34 tmp/
        ...

    I've disabled the creation of DS_Store files using:

        defaults write com.apple.desktopservices DSDontWriteNetworkStores true

    so hopefully this won't continue to occur, but I really want to get rid of all the existing variants of DS_Store files already on the server. Any ideas as to why these variants are being created and how I can get rid of them all?
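
    The :2e prefix is the netatalk convention for hex-escaping a leading dot (0x2e), so these are ordinary Mac dot-files in disguise. A hedged cleanup sketch (/srv/share is a placeholder for the shared path; run the -print pass first, and only then the -delete pass):

        # preview what would be removed
        find /srv/share -name ':2eDS_Store' -print

        # then actually remove the files
        find /srv/share -name ':2eDS_Store' -delete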

  • create video from jpg images using ffmpeg

    - by floppydisk
    I want to make a short timelapse video using ffmpeg under Ubuntu 12.04 LTS. I have a folder containing all the images, with names DSC_0000.jpg, DSC_0001.jpg and so on. I found the question "ffmpeg: create a video from images" and tried to run the same command as mentioned there:

        ffmpeg -i DSC_%d.jpg -vcodec mpeg4 timelapse.avi

    It fails with:

        DSC_%d.jpg: No such file or directory

    I've also tried:

        ffmpeg -i DSC_%04d.jpg -vcodec mpeg4 timelapse.avi

    and it fails with the same error. Also, for some reason, my ffmpeg does not understand the option -start_number; if I run:

        ffmpeg -start_number 0 -i DSC_%d.jpg -vcodec mpeg4 timelapse.avi

    I get this error:

        Unrecognized option 'start_number'
        Failed to set value '0' for option 'start_number'

    I would appreciate any help.
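
    Older ffmpeg builds (such as the one Ubuntu 12.04 ships) lack -start_number, but they can read a JPEG stream from stdin, which sidesteps the numbering entirely. A hedged sketch using the image2pipe demuxer, which those builds do include:

        cat DSC_*.jpg | ffmpeg -f image2pipe -vcodec mjpeg -i - -vcodec mpeg4 timelapse.avi

    The shell glob also tolerates gaps in the sequence, since frames arrive in sorted order regardless of their numbers.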

  • Why does cifs asks for su rights to write any data into it?

    - by Denys S.
    I'm mounting a Windows share as follows:

        sudo mount -t cifs //192.168.178.49/public -o users,username=name,dom=domain,password=pword /mnt/nas

    Then I'm trying to create a simple file with some basic text:

        touch /mnt/nas/me.txt

    and I get an error; however, the file is created (containing 0B of data, though):

        touch: cannot touch ‘me.txt’: Permission denied

    With sudo it works flawlessly. How can I allow my current user to write data to the share? Is there a mount option?
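
    When root performs the mount, the files default to root ownership on the client side; the cifs uid and gid options reassign that to an ordinary user. A hedged sketch (uid and gid are documented mount.cifs options; adjust the user as needed):

        sudo mount -t cifs //192.168.178.49/public /mnt/nas \
            -o users,username=name,dom=domain,password=pword,uid=$(id -u),gid=$(id -g)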
