Search Results

Search found 24666 results on 987 pages for 'cooperative linux'.


  • Secure copying (file transfer) between two Linux servers in the same datacenter (Linode)

    - by MountainX
    I have two Linodes in the same data center. I want to copy files from one to the other each night, or on demand, for about the next month until this project is finished, so I'm thinking about using rsync. My question is: how do I set up the two Linode servers to communicate securely via private IP addresses? Both servers are SSH-hardened; they use DenyHosts and have a fairly restrictive iptables setup. I know I need to first assign private IP addresses to each server, then configure static networking according to this guide. What is next? What SSH or iptables settings are needed to allow these two servers to communicate? What further info do I need to supply in this question? I'm looking for a basic step-by-step guide for how to do this.
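
    A minimal sketch of what the SSH and iptables side might look like, assuming hypothetical private addresses 192.168.160.10 (source) and 192.168.160.11 (destination), a hypothetical backupuser account and a dedicated key pair:

        # On the destination Linode: accept SSH only from the other Linode's private address
        iptables -A INPUT -p tcp -s 192.168.160.10 --dport 22 -j ACCEPT

        # On the source Linode: create a key, install it on the destination, then push with rsync over SSH
        ssh-keygen -t rsa -f ~/.ssh/linode_backup
        ssh-copy-id -i ~/.ssh/linode_backup.pub backupuser@192.168.160.11
        rsync -az -e "ssh -i ~/.ssh/linode_backup" /srv/data/ backupuser@192.168.160.11:/srv/data/

    A cron entry on the source server can then run the rsync line each night.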

    Read the article

  • Linux QoS: bulk data transmission during idle times

    - by syneticon-dj
    How would I do a QoS setup where a certain low-priority data stream gets up to X Mbps of bandwidth, but only if the current total bandwidth (of all streams/classes) on this interface does not exceed X? At the same time, other data streams/classes must not be limited to X. The use case is an ISP that bills traffic by averaging bandwidth over 5-minute intervals and charging for the maximum. I would like to keep the maximum usage to a minimum (i.e. quench the bulk transfer during interface busy times) but get the data through during idle/low-traffic times. Looking at the frequently used classful schedulers CBQ, HTB and HFSC, I cannot see a straightforward way to accomplish this.
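
    For reference, a rough HTB sketch of the closest standard approximation: a low-priority class with a tiny guaranteed rate and a ceiling of X that only borrows otherwise-unused bandwidth. The 100mbit line rate, 20mbit ceiling and rsync port match are hypothetical, and this is not quite the stated requirement, since HTB lets the bulk class borrow whenever spare capacity exists rather than only while total usage is below X:

        tc qdisc add dev eth0 root handle 1: htb default 10
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 95mbit ceil 100mbit prio 0  # normal traffic, effectively unrestricted
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 20mbit prio 7    # bulk stream, capped at "X"
        tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 873 0xffff flowid 1:20  # classify the bulk traffic, e.g. rsyncd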

    Read the article

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    I am searching for a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operations on the other end of the network to conserve bandwidth. Disk storage is cheap but bandwidth isn't, and the problem is that the files are often reorganized or moved into a better directory structure. When you then use rsync for the backup, rsync won't notice that a file was renamed or moved and re-transmits it over the network despite the same file already existing on the other end. So I am wondering if there is a script or tool that can record where all the files are and their names, then rescan just prior to a backup, detect moved or renamed files, and give me a list I can use to re-apply the move/rename operations on the other side. General features of the files:
    - Large, unchanging files
    - They can be renamed or moved around
    [Edit:] These are all good answers; in the end I looked at all of them and will be writing some code to deal with this. What I am thinking/working on now:
    - Using something like AIDE for the initial scan, which lets me keep checksums on the files; since they are supposed to never change, this also helps detect corruption.
    - Creating an inotify daemon that monitors these files/directories and records any renames and moves to a log file.
    - There are edge cases where inotify might fail to record that something happened to the file system, so as a final step use find to search for files with a change time later than the last backup.
    This has several benefits:
    - Checksums from AIDE make it possible to verify that the media did not get corrupted.
    - inotify keeps resource usage low, with no need to re-scan the filesystem over and over.
    - No need to patch rsync; I can patch things if I have to, but I would prefer not to, to keep the maintenance burden lower (i.e. no re-patching every time there is an update).
    I've used Unison before and it's really nice, but I could have sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow rather large?
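
    A minimal sketch of the inotify-plus-rescan idea using inotifywait from inotify-tools; the watched path, log file and marker file are hypothetical:

        # Log renames and moves as they happen
        inotifywait -m -r -e moved_from -e moved_to --format '%e %w%f' /data/archive \
            >> /var/log/file-moves.log &

        # Safety net before each backup: anything changed since the last run that inotify may have missed
        find /data/archive -type f -newer /var/backups/last-backup.marker
        touch /var/backups/last-backup.marker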

    Read the article

  • Compressed disk image on Linux

    - by Aaron Digulla
    I just got a new computer with a much bigger hard disk. I think I copied all the important files over, but just to be sure, I'd like to keep a disk image of my old disk. To save space I'd like to compress it, but I didn't find an option to mount a compressed image. My goals:
    - The result must be easy to access
    - No need to decompress the whole thing before I can access anything
    - Files should be quick to locate - no TAR/CPIO archive
    - The space needed should be less than just copying the files over
    So ideally, I'm looking for a read-only, compressed file system which I can create in a file and which grows automatically.
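
    One option that fits most of these goals is SquashFS: a read-only, compressed filesystem image that can be loop-mounted and browsed like a normal directory tree. A minimal sketch, with the source and image paths as hypothetical placeholders:

        mksquashfs /mnt/olddisk /backups/olddisk.squashfs -comp xz   # build the compressed, read-only image
        mkdir -p /mnt/olddisk-image
        mount -o loop -t squashfs /backups/olddisk.squashfs /mnt/olddisk-image

    The image is built in one pass rather than growing automatically, which is usually acceptable for an archival copy of an old disk.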

    Read the article

  • [Linux] Bind/map Character to alt+[some key]?

    - by Paul
    OS: Ubuntu. In programming and various terminal programs (Screen, Vim) the [, ], { and } characters tend to be used a lot. I'm using a Norwegian keyboard where these are placed such that I have to stretch my fingers a bit further than is comfortable. To make it easier I thought I'd try to make alt+[some key] produce one of these characters. Is there a way I can bind, say, alt+æ (Norwegian letter) to '{' system-wide? By the way, is such a thing called binding, mapping or something else? I'm getting a bit confused by the terms... :)
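
    A hedged sketch using xmodmap, which fills in the third and fourth entries of the key (reached with AltGr or a Mode_switch key, depending on the layout, rather than the left Alt); the keysym name ae for æ is the usual one, but it is worth confirming with xev first:

        xev                                                    # press the æ key and note the keysym it reports
        xmodmap -e 'keysym ae = ae AE braceleft braceright'    # AltGr+æ -> {, Shift+AltGr+æ -> }

    To make it permanent, the same keysym line can go into ~/.Xmodmap. This kind of change is usually called remapping the keyboard layout.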

    Read the article

  • Restricting access to a subdirectory on linux

    - by David
    I'm looking for a way to make a directory accessible only to its parent directories. That is, suppose you have two directories, A and B, at the same level in the file hierarchy. Now suppose that you have a directory A' which is a subdirectory of A. I'd like to enforce that A is able to access the contents of A' but B is not. My problem is that I'd like to use a library (directory A) which builds on top of a legacy version of another library (directory A'). At the same time, I want to be able to use the newest version of this legacy library (directory B). I want to make sure that people aren't somehow using library A and linking against new library B by enforcing that library A must use library A'. I could just link A against library B, but then I'm risking compatibility.
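
    Plain Unix permissions cannot express "reachable only via the parent directory", but a group-based sketch can at least restrict who may read the legacy copy; the group, user and path names here are hypothetical:

        groupadd liba-build                        # only members of this group may use the legacy library A'
        chgrp -R liba-build /opt/A/legacy-lib
        chmod -R o-rwx /opt/A/legacy-lib           # everyone else, including users of library B, is locked out
        usermod -aG liba-build builduser           # whoever builds library A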

    Read the article

  • Linux: don't use file system cache under a directory

    - by GetFree
    For a PHP website I'm monitoring, I need to see which files are being used each time the browser makes a request. I thought of using find . -type f -amin 1. With that I get all files which were read in the last minute (it's a development server, so only I am using the website). I took care of removing the noatime attribute from the mount point. However, there must be something else preventing the kernel from reading the actual files on disk, because the access time is not being updated when I read a file. I guess it must be the file-system cache serving the files from memory. Is there a way to disable file caching under a specific directory? (public_html in my case) Also, I read somewhere that there is a nobh mount attribute which apparently disables file caching under that mount point, but I'm not sure.
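
    An alternative sketch that sidesteps atime and the page cache entirely: log file accesses with inotifywait from inotify-tools. The public_html name follows the question; its exact location is an assumption:

        inotifywait -m -r -e open -e access --format '%T %w%f' --timefmt '%H:%M:%S' \
            /var/www/public_html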

    Read the article

  • Reinitialize GPU on RADEON HD 7970 under linux

    - by user1610662
    I have a Sapphire Radeon HD 7970 on Debian Squeeze. Since I often run GPU code on it, performance sometimes drops sharply, which I can see when testing with the "glxgears" tool (I get only 20 FPS in fullscreen). I would like to be able to reinitialize the GPU without rebooting the system. I know the "clinfo" tool, which displays the features of the graphics card. Is there a tool which allows me to do this reinitialization?
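
    One hedged possibility is unbinding and rebinding the kernel driver through sysfs, which forces the device to be re-probed. This generally requires stopping X first, may not be supported cleanly by every driver, and the PCI address below is a hypothetical placeholder (check yours with lspci):

        lspci | grep -i vga                                      # find the card's PCI address, e.g. 01:00.0
        echo 0000:01:00.0 > /sys/bus/pci/drivers/radeon/unbind
        echo 0000:01:00.0 > /sys/bus/pci/drivers/radeon/bind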

    Read the article

  • connect to a headless virtualbox instance in Linux?

    - by 130490868091234
    I've started a headless VirtualBox instance with this command:

        VBoxManage startvm "Ensembl67VirtualMachine" --type headless
        Waiting for VM "Ensembl67VirtualMachine" to power on...
        VM "Ensembl67VirtualMachine" has been successfully started.

    It is set up with Remote Desktop Server Port: 5555, Authentication Method: Null and Extended Features: Allow Multiple Connections, and it's now running, but I don't know how to connect to it from the same laptop where it's running. I would like to be able to have it running in a terminal. I tried the following, without success:

        rdesktop localhost:5555
        ERROR: localhost: unable to connect
        rdesktop 192.168.1.1:5555

    Any ideas?
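
    A hedged troubleshooting sketch: confirm that the VRDE server is actually enabled on the running VM and which port it listens on, then point rdesktop at that port; the VM name matches the question, everything else is an assumption. Note that on VirtualBox 4.x the VRDE server also requires the Oracle Extension Pack to be installed:

        VBoxManage showvminfo "Ensembl67VirtualMachine" | grep -i vrde   # is VRDE enabled, and on which port?
        VBoxManage controlvm "Ensembl67VirtualMachine" vrde on           # switch it on for the running VM if needed
        rdesktop -a 16 localhost:5555                                    # then connect to the reported port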

    Read the article

  • Linux Program Source Management

    - by Blackninja543
    This particular problem has little to do with Subversion repositories and more to do with the management of installed programs. My question revolves around the problem of installing a program from source. If I were to build a distro with no package management system, what possibilities would I have for keeping each program up to date? My only idea would be to keep a record of all the programs installed from source and perform a periodic check to see whether a new version is out.
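
    One common record-keeping approach (offered as a sketch, not the only answer) is to give every source build its own prefix and manage the symlinks with GNU Stow, so there is always a record of what is installed and at which version; the package name and version are hypothetical:

        ./configure --prefix=/usr/local/stow/foo-1.2
        make && make install                        # everything lands in its own versioned directory
        cd /usr/local/stow && stow foo-1.2          # symlink the build into /usr/local
        ls /usr/local/stow                          # the record of what was installed from source, and which versions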

    Read the article

  • Using terminal vs KDE in linux?

    - by Ke
    I'm used to using Nautilus within CentOS but have recently got a VPS and am quickly realising that running a desktop environment is unacceptable in this environment, although I do find things like changing folder permissions so much quicker in KDE than typing it all out in the terminal. Everyone I speak to says to use the terminal and learn that way rather than a GUI, but there are certain things I just don't get. How is it possible to make quick changes to scripts and view them in a browser without a mouse or a desktop, using only a terminal? How do people develop websites using only the terminal? And how can it be quicker to type out and view permissions in the terminal when it's instant and just a few clicks in KDE? Any thoughts are much appreciated - I would love to understand the benefits but just can't seem to see them right now. Cheers, Ke.
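
    For what it's worth, a few terminal equivalents of the file-manager tasks mentioned; the paths and the deploy:www-data owner are hypothetical:

        ls -l public_html                        # view permissions
        chmod -R 755 public_html                 # change permissions recursively
        chown -R deploy:www-data public_html     # change ownership
        nano public_html/index.php               # quick edits in a terminal editor (or use vim)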

    Read the article

  • Advanced merge directory tree with cp in Linux

    - by mtt
    I need to:
    - Copy all of a tree's folders (with all files, including hidden ones) under /sourcefolder/* to /destfolder/, preserving user privileges
    - If there is a conflict with a file (a file with the same name exists in destfolder), rename the file in destfolder by a standard rule, like adding an "old" prefix to the filename (readme.txt becomes oldreadme.txt), then copy the conflicting file from source to destination
    - Conflicts between folders should be transparent - if the same directory exists in both sourcefolder and destfolder, preserve it and recursively copy its content according to the above rules
    I also need a .txt report that describes all files/folders added to destfolder and all files that were renamed. How can I accomplish this?
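
    A hedged sketch using rsync instead of plain cp; it differs from the stated rule in one detail (conflicting files in destfolder get an .old suffix rather than an old prefix) but covers hidden files, preserved permissions and a report:

        rsync -a --backup --suffix=.old --itemize-changes /sourcefolder/ /destfolder/ \
            | tee merge-report.txt                           # -a preserves ownership and permissions; the log lists what was copied
        find /destfolder -name '*.old' >> merge-report.txt   # append the files that were renamed out of the way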

    Read the article

  • Linux Cups raspberry pi offload processing to server

    - by jaredmsaul
    I am interested in setting up a Raspberry Pi as the local end of a printing solution. In my testing the Pi chokes on acting as a complete CUPS-based print server. It seems a little underpowered for some of the Ghostscript processing and other filtering that occurs - particularly on larger or complex documents the processing time can be 5 or more minutes. My question is: can the processing be largely done elsewhere, and the prepared end product of the processing chain be fed to the Pi for output on the connected printer? So in this scenario an arbitrary document (HTML, PDF, text) is initially 'printed' on a relatively powerful machine, but the output is stored in a file. This file is then grabbed by the Pi and, with all the heavy work out of the way, easily printed using CUPS. I know files can be pushed through CUPS in raw mode, but I am fuzzy on the pros and cons and the applicability to what I describe. I have tested this with pdftops creating a PS file and then feeding that raw to CUPS, and I think it works, but it seems like there may be a cleaner solution. This scenario would ideally work for any number of printer types.
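
    A sketch of the split described above, with rendering on a workstation and raw printing on the Pi; the hostnames, queue name and paths are hypothetical, and it assumes the file is already in a language the printer accepts (PostScript here):

        # On the workstation: render once, using the heavyweight filters there
        pdftops document.pdf document.ps
        scp document.ps pi@raspberrypi:/tmp/

        # On the Pi: hand the prepared file to CUPS without further filtering
        lp -d usb_printer -o raw /tmp/document.ps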

    Read the article

  • Linux: Can't overwrite files on samba store

    - by jonescb
    I'm using CentOS 5.5 with smbclient 3.0.33-3.28-el5 (the latest version in the repo), and I can't overwrite files in my Samba store. I am not the admin of the Samba server, so there isn't anything I can do server-side, but I do have write permission to the server. The server runs Windows XP or Server 2003; I don't know which. I can delete a file and then copy the new version over, but I can't overwrite it. Using the cp command I get this error:

        [jonescb@localhost ~]$ cp foo.txt /mnt/si_storage/foo.txt
        cp: cannot create regular file `/mnt/si_storage/foo.txt': No such file or directory

    And if I edit a file on the server using vim, I can save it once, but if I save it again I get this:

        "/mnt/si_storage/foo.txt" E212: Can't open file for writing

    This is my /etc/fstab entry for the Samba server:

        //192.168.1.2/SI_STORAGE /mnt/si_storage cifs username=myuser,password=mypass 0 0

    Edit: I can overwrite files just fine from my XP machine. The CentOS box is the only one having problems.

    Read the article

  • Linux - How to completely clean up a software installation

    - by Jonathan Rioux
    Hi, I am running Debian and have recently upgraded to Squeeze. Since then I have been having a lot of problems with Webmin, so I decided to remove it using:

        apt-get remove webmin

    I then downloaded the sources of Webmin 1.530 and compiled it, but the installation process was stuck for an hour so I cancelled it. I even tried to install it using the .deb file, without success (the installation hangs for hours). Now I cannot install Webmin at all since I uninstalled it. I would like to know how I can clean up every trace of Webmin on my server; then I will retry the installation.
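
    A hedged cleanup sketch: apt-get purge removes the package's configuration as well, and the listed directories are Webmin's usual locations on Debian, but verify each path before deleting (a source install also leaves files wherever its tarball was unpacked and run from):

        apt-get purge webmin                       # unlike plain remove, this also drops the package's config
        rm -rf /etc/webmin /var/webmin /usr/share/webmin
        rm -f /etc/init.d/webmin && update-rc.d webmin remove
        dpkg -l | grep -i webmin                   # confirm nothing is left registered with dpkg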

    Read the article

  • Linux monitor logs and email alerts?

    - by Physikal
    I have a server with a faulty power button that likes to reboot itself. Usually there are warning signs, like the acpid log file in /var/log starts spamming garbage for about 10hrs or so. Is there an easy way I can have something monitor the acpid log and email me when it has new activity? I wouldn't consider myself extremely advanced so any "guides" you may have for accomplishing something like this would be very helpful and much appreciated. Thank you!
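
    A minimal sketch of a cron-driven check that mails any new acpid log lines since the last run; the e-mail address and state-file path are hypothetical, and it assumes working local mail (e.g. mailx plus a configured MTA). Ready-made tools such as logcheck do the same job more robustly:

        #!/bin/bash
        # /usr/local/bin/acpid-watch.sh - run from cron, e.g.: */10 * * * * root /usr/local/bin/acpid-watch.sh
        LOG=/var/log/acpid
        STATE=/var/tmp/acpid-watch.offset
        last=$(cat "$STATE" 2>/dev/null || echo 0)
        now=$(stat -c %s "$LOG" 2>/dev/null || echo 0)
        if [ "$now" -gt "$last" ]; then
            tail -c +$((last + 1)) "$LOG" | mail -s "acpid activity on $(hostname)" admin@example.com
        fi
        echo "$now" > "$STATE"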

    Read the article

  • Linux wall command won't broadcast strings

    - by mjb
    I read here that this should work, but it doesn't:

        //usage: wall [file]
        root@sys:~> mesg
        is y
        root@sys:~> wall "who's out there"
        wall: can't read who's out there.

    If mesg is set to y, what's preventing me from broadcasting a string? Note, I did confirm that the file option works:

        root@sys:~> wall test
        Broadcast Message from root@sys (/dev/pts/1) at 15:23 ...
        Who's out there?

    Teach me knowledge please. mjb
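
    A small workaround sketch: many wall builds treat their argument strictly as a file name and otherwise read the message from standard input, so piping the string in works either way:

        echo "who's out there" | wall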

    Read the article

  • Wired to wireless bridge in Linux

    - by adrianmcmenamin
    I am attempting to set up my Raspberry Pi as a bridge (though I think this question is not specific to the hardware), using Debian wheezy. I have this hostapd.conf (some details changed for security):

        interface=wlan0
        bridge=br0
        driver=nl80211
        auth_algs=1
        macaddr_acl=0
        ignore_broadcast_ssid=0
        logger_syslog=-1
        logger_syslog_level=0
        hw_mode=g
        ssid=MY_SSID
        channel=11
        wep_default_key=0
        wep_key0=MY_KEY
        wpa=0

    (yes, I know WEP is no good) And this in /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        iface eth0 inet dhcp

        allow-hotplug wlan0
        iface wlan0 inet manual
        wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
        iface default inet dhcp

        auto br0
        iface br0 inet dhcp
        bridge-ports eth0 wlan0

    Everything seems to come up OK, but I cannot associate with the bridged wireless connection, even though the flashing lights on the USB stick suggest packets are being exchanged. I have read somewhere that not all cards/devices will run in hostap mode - they won't pass packets in one direction: is that right? (The info was a bit old.) This is my card:

        [    3.663245] usb 1-1.3.1: new high-speed USB device number 5 using dwc_otg
        [    3.794187] usb 1-1.3.1: New USB device found, idVendor=0cf3, idProduct=9271
        [    3.804321] usb 1-1.3.1: New USB device strings: Mfr=16, Product=32, SerialNumber=48
        [    3.816994] usb 1-1.3.1: Product: USB2.0 WLAN
        [    3.823790] usb 1-1.3.1: Manufacturer: ATHEROS
        [    3.830645] usb 1-1.3.1: SerialNumber: 12345

    So, what have I got wrong here?
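
    Two things commonly pointed out with configurations like this, offered as a hedged sketch rather than a confirmed diagnosis: the wpa-roam stanza runs wpa_supplicant on wlan0 in client mode, which conflicts with hostapd driving the same interface in AP mode, and hostapd's bridge=br0 already adds wlan0 to the bridge, so it does not need to be listed among the bridge's ports. An /etc/network/interfaces variant along those lines:

        auto lo
        iface lo inet loopback

        iface eth0 inet manual          # bridge member; the bridge gets the address

        allow-hotplug wlan0
        iface wlan0 inet manual         # no wpa-roam here; hostapd owns this interface

        auto br0
        iface br0 inet dhcp
            bridge_ports eth0           # hostapd adds wlan0 itself via bridge=br0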

    Read the article

  • Configuration tools for multiple monitors for X / Linux

    - by richard
    I have Ubuntu 10.04 running GNOME and two monitors. I am wondering if I can get a better multi-monitor configuration tool. The one I have, gnome-display-properties, has too many problems. For example, when I swapped my monitors over so the narrower one is now on the left, there was a width calculation error: I ended up with a virtual monitor the width of the wide monitor spanning the narrow monitor and part of the wide monitor, and a virtual narrow monitor on the remainder of the wide monitor. I would like:
    - no bugs
    - to be able to select which monitor is primary
    - to have multiple configurations
    - configurations to be selected automatically based on which monitors are attached
    - configurations to be cycled (reliably) when the display-mode key is pressed
    - windows to migrate to the remaining monitors when a display is deactivated
    - an option not to change display resolution when mirroring, but to use side/top blanking bars to pad out the screen
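
    Not a replacement for such a tool, but for scripting individual layouts the xrandr command line covers several of these points (choosing the primary monitor, switchable per-setup configurations that can be bound to the display-mode key); the output names below are hypothetical and can be listed with plain xrandr:

        xrandr                                               # list connected outputs and their modes
        xrandr --output DVI1 --auto --primary \
               --output VGA1 --auto --right-of DVI1          # wide monitor as primary, narrow one to its right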

    Read the article

  • HD read error while booting linux

    - by sidharth sharma
    I have been dual-booting Windows 7 and Ubuntu on my laptop for the past 3 years and all was working fine until I started getting kernel logs like:

        ata1.00: status: { DRDY ERR }
        ata1.00: error: { UNC }
        ata1.00: configured for UDMA/133
        sd 0:0:0:0: [sda] Unhandled sense code
        sd 0:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        sd 0:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]

    I figured it was a hardware problem and ignored it as long as I could, until the HD crashed on me. I then got a brand new HD and installed Windows and Ubuntu afresh on it, but the problem still persists. Any help?
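
    Since the errors persist across two drives, it may help to check whether the new disk itself reports problems (pointing at the drive) or stays clean (pointing more towards the SATA cable, controller or power). A hedged sketch using smartmontools, with /dev/sda as the assumed device:

        smartctl -a /dev/sda           # overall health, reallocated/pending sector counts and the drive's error log
        smartctl -t short /dev/sda     # run a short self-test, then re-read the report with -a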

    Read the article

  • Missing characters when printing in linux

    - by jarvisschultz
    I have a PDF that I was printing recently, and on the final printout there is a single character that doesn't print. It is the Greek letter phi, and the PDF was built with pdflatex. The phi shows up in every PDF reader I have tried, and if I convert to a PS file using pdftops before printing, that solves the problem. Also, I sent the PDF to a buddy who has a very similar machine (Ubuntu 12.04 64-bit, with the same printer drivers), and he was able to print it (to the same printer) and the character showed up. Clearly I have a workaround, but I'm more curious as to where I should be looking to figure out what is causing this bug. What is the printing "toolchain", and where could it be going wrong?
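
    A couple of hedged starting points for that investigation: check whether the font containing the phi is actually embedded in the PDF, and look at which CUPS filters the job went through; the filename is a placeholder:

        pdffonts document.pdf                     # the "emb" column shows whether each font is embedded
        grep -i filter /var/log/cups/error_log    # with LogLevel debug in cupsd.conf, shows the pdftops/pstops filter chain used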

    Read the article

  • Linux - quota per directory?

    - by depesz
    I have the following scenario: a single partition mounted as /, with lots of disk space. There is a range of directories (/pg/tbs1, /pg/tbs2, /pg/tbs3 and so on), and I would like to limit the total size of these directories. One option is to make some big files, mkfs them, mount them over loopback and then set a quota, but this makes expansion a bit problematic. Is there any other way to make quotas work per directory?
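
    For reference, a concrete sketch of the loopback option mentioned above; the 10 GB size and the paths are hypothetical, and expanding later means growing the image file and running resize2fs, which is the awkwardness referred to:

        dd if=/dev/zero of=/var/pg-tbs1.img bs=1M count=0 seek=10240   # 10 GB sparse file
        mkfs.ext4 -F /var/pg-tbs1.img        # -F because the target is a regular file, not a block device
        mount -o loop /var/pg-tbs1.img /pg/tbs1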

    Read the article

  • linux/shell: change a file's modify timestamp relatively?

    - by index
    My Canon camera produces files like IMG_1234.JPG and MVI_1234.AVI, and it also timestamps those files. Unfortunately, during a trip to another timezone several cameras were used, one of which did not have the correct time zone set - a metadata mess. Now I would like to correct this (not the EXIF data, but the file's "modify" timestamp on disk). Proposed algorithm:
    1. read the file's modify date
    2. add a delta, i.e. hhmmss (preferred: change the timezone)
    3. write the new timestamp
    Unless someone knows a tool or a combination of tools that does the trick directly, maybe one could simplify the calculation using epoch time (seconds since the epoch) and whip up a shell script. Any help appreciated!
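
    A minimal sketch of such a script using GNU date and touch; the +2 hours delta and the file glob are hypothetical:

        for f in IMG_*.JPG MVI_*.AVI; do
            touch -d "$(date -R -r "$f") + 2 hours" "$f"   # read the current mtime, shift it, write it back
        done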

    Read the article
