Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

  • Mount Docker container contents in host file system

    - by dflemstr
    I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. I'm talking about mounting the contents of a container on the host, not about mounting a folder on the host inside a container. I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is however an implementation detail of Docker and it feels wrong to mount --bind these directories somewhere else. Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts?
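
    A hedged alternative that avoids the storage-driver internals entirely: docker export streams the container's filesystem out as a tar archive, which is read-only by construction. A minimal sketch, where <container> is a placeholder for a container ID or name:

        # list every file in the container without unpacking anything
        docker export <container> | tar -tvf -
        # or unpack a full read-only copy for browsing
        mkdir -p /tmp/rootfs
        docker export <container> | tar -xf - -C /tmp/rootfs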

  • Does Intel Smart Response provide any statistics on the cache usage?

    - by Tom Seddon
    I've set up my Z68-based Core i7 PC with a 60GB SSD dedicated as a Smart Response cache drive. Is there any way I can get any statistics out of it? It would be nice to have some information on how much cache space is actually being used, maybe how much of it was actually accessed recently, and how many reads in general are coming from the SSD rather than from the mechanical disk. These statistics might help to quickly provide some evidence for or against the use of Smart Response, without my having to reinstall Windows on the SSD (etc.) to find out. The Windows ReadyBoost feature has some performance counters you can access via the Windows 7 perfmon tool, for example, which is the kind of thing I'm hoping is somehow available. Smart Response provides no perfmon counters, though, and the Intel Rapid Storage Utility tells you pretty much nothing except that Smart Response is switched on.

  • DVD RW: Are they still relevant for backups?

    - by Harry
    Hello, With the availability of compact USB memory sticks with much, MUCH higher storage capacities, is there still any use case for taking periodic, incremental backups on DVD-RWs? The DVD-RW has an additional annoyance: you cannot drag and drop files to it as easily as you can on a USB memory stick. So, if I have a 4.7GB DVD-RW, I must re-burn the whole image every time I back up new stuff... with a possibly rearranged file/folder structure. Secondly, why in this day and age can you not install a filesystem (like ext3 or FAT32) on a DVD-RW... and likewise on CD-RWs... as you can on a USB memory stick? Many thanks, /HS
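
    On the second question: Linux can, in principle, put a real random-access filesystem on a DVD-RW via packet writing plus UDF, though support is notoriously fragile. A rough sketch, assuming the pktcdvd kernel module and udftools are available, that the burner is /dev/sr0, and that the disc has already been blanked/formatted:

        sudo modprobe pktcdvd                    # packet-writing block layer
        sudo pktsetup dvd /dev/sr0               # exposes /dev/pktcdvd/dvd
        sudo mkudffs /dev/pktcdvd/dvd            # create a UDF filesystem on the disc
        sudo mount -t udf -o rw /dev/pktcdvd/dvd /mnt/dvd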

  • Emulate a USB port as a USB flash drive?

    - by Wilco
    Does anyone know of any software that can emulate a USB flash drive through an available USB port in OS X? Perhaps some way to map a directory to a USB port that could then be connected to another device that supports reading USB storage devices? I'd love to connect my laptop to my car's USB port and access files as if it were a USB drive. I know about the target disk mode with firewire (not sure if this is also supported over USB), but I was hoping for something that doesn't require booting outside of the OS (I want to retain use of the machine). Any ideas?

  • ZFS, dedupe and PST files

    - by Unreason
    I am interested to know what the expected maximum dedupe ratio would be for a set of PST files. I have ~40G of PST files from ~15 users with a high level of duplication of attachments. I am running tests to see if I can get significant space savings by storing the data on ZFS with dedupe. For this purpose I have installed a test setup of Nexenta, but I was wondering if someone here has already done this and what level of deduplication I might expect (or, in other words, how sensitive are PST files to block alignment, and what parameters can influence the ratio?). Initial tests show a very low dedupe ratio, and I did find an explanation that block-level dedupe would not be efficient here and that byte-level dedupe would be much better (and that it should be performed by an application that is aware of the internal organization), so I am just double-checking here in case someone has more input. Otherwise I will probably be converting the PST files to IMAP.
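
    For reference, a minimal sketch of the kind of test being described, assuming a hypothetical pool named tank on the Nexenta box; the DEDUP column of zpool list reports the achieved ratio, and zdb -S simulates dedupe over data already in the pool:

        zfs create -o dedup=on tank/pst       # block-level dedupe for this dataset
        cp -r /path/to/pstfiles/. /tank/pst/  # load the test data
        zpool list tank                       # DEDUP column shows the achieved ratio
        zdb -S tank                           # simulated dedupe table histogram

    Note that ZFS dedupes fixed records (128K by default, per the dataset's recordsize), which is consistent with the block-alignment explanation above: identical attachments buried at different offsets inside different PSTs rarely line up on record boundaries.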

  • USB Device With Embedded Fileserver

    - by Richard Martinez
    I'm attempting to access logs from a proprietary hardware box with no reasonable hope of modifying the software. There is a process on the device to dump log files to a flash drive on the USB port after entering a code sequence. Currently, analysis of the logs requires the following: physical presence at the device, manual entry of the code sequence, removal of the USB device, and insertion of the USB device into a normal Linux box. I'm hoping there is some sort of device that can act as a USB mass storage device but simultaneously make its contents available as a network file share (wired preferred). Does such a device currently exist? A combo hardware/software solution would also work.

  • Shuttle FB51 mobo does not boot with external USB drive attached [closed]

    - by user127236
    I am repurposing an old Alienware desktop as a home media server. The PC is based on the Shuttle FB51 motherboard. The BIOS is a Phoenix Version 6.00 PG, release date 12/16/2002. I have loaded Ubuntu 12.04 LTS on the internal hard drive. I am using a Western Digital WD Elements 1.5 TB USB 2.0 Desktop External Hard Drive for media storage. When the external drive is plugged in and the PC is powered on, it freezes very early in the BIOS self-test, even before it begins the memory test. If I unplug the drive, the self-test proceeds without further problems. I can plug the USB drive back in when the self-test is complete, and Ubuntu will boot and find the external drive normally. I've tried several changes to the BIOS setup without finding a cure for the boot issue. Any assistance gratefully accepted. JGB

  • Linux RAID0 - relocating member disk

    - by qdot
    I've got an issue I would rather handle with the array online - I am using RAID0 for temporary video storage - data that is low-cost to restore, but that is used frequently. The software array looks like this: md1 : active raid0 sdb1[2] sdc1[3] sdd1[0] sde1[1] 1953487616 blocks 64k chunks I have another partition (sda1) in this system, that I want to use to replace sdc1 (The drives are of varying age, and sdc1 is definitely the slowest one, limiting the entire array's sequential read performance to only 300MB/s). Is there a way to migrate the data from sdc1 to sda1 while the array is still online?
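
    As far as I know, md of this era cannot swap a RAID0 member while the array is online; a hedged offline sketch (device names taken from the question; sda1 should be the same size as sdc1, since with a 0.90/1.0 superblock the metadata lives at the end of the member):

        umount /dev/md1
        mdadm --stop /dev/md1
        dd if=/dev/sdc1 of=/dev/sda1 bs=64M     # clone the member incl. md superblock
        mdadm --assemble /dev/md1 /dev/sdb1 /dev/sda1 /dev/sdd1 /dev/sde1
        cat /proc/mdstat                        # confirm the array came back clean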

  • nginx hashing on GET parameter

    - by Sparsh Gupta
    I have two Varnish servers and I plan to add more. I am using an nginx load balancer to divide traffic between these Varnish servers. To utilize the maximum RAM of each Varnish server, I need the same request to reach the same Varnish server. A request can be identified by one GET parameter in the request URL, say 'a'. In normal code I would do something like this (if I need to divide all traffic between 2 Varnish servers): if($arg_a % 2 == 0) { proxy_pass varnish1; } if($arg_a % 2 == 1) { proxy_pass varnish2; } This is basically an even/odd check on GET parameter a that decides which upstream pool the request goes to. My questions are: (1) What is the nginx equivalent of such code? I don't know if nginx accepts the modulus operator. (2) Is there a better/more efficient hashing function built into nginx (0.8.54) that I could use? In the future I want to add more upstream pools, so I don't want to keep changing %2 to %3, %4 and so on. (3) Is there any other way to solve this problem?
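
    For what it's worth, later nginx grew a built-in answer: the upstream hash directive (added in 1.7.2, so not available in 0.8.54 without the third-party upstream-hash module). A sketch with hypothetical upstream names; consistent hashing also covers the add-more-pools-later concern, since adding a server remaps only a fraction of the keys:

        upstream varnish_pool {
            hash $arg_a consistent;           # key on the GET parameter 'a'
            server varnish1.example.com:6081;
            server varnish2.example.com:6081;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://varnish_pool;
            }
        }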

  • How to Detect that Current (Bash) Shell is a (Vi/Vim) Subshell?

    - by Jeet
    From inside Vi/Vim, I can type :shell to drop into a shell. Is there any way to detect that I am in a Vi-spawned subshell? The environment variable SHLVL is 2, but that does not tell me explicitly that I am in a Vi/Vim-spawned subshell. On OS X, the following variables are also set: MYVIMRC, VIMRUNTIME, VIM. How universal are these? Can I count on them being set on any system, if and only if I am in a Vi/Vim subshell? If not, is there any portable, robust and hopefully efficient way to tell that I am in a Vi/Vim subshell? Thanks.
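
    A minimal bash check, assuming (as observed on OS X above) that Vim exports VIMRUNTIME into the environment of the shells it spawns; note the variable is inherited by nested shells too, so this is not a strict if-and-only-if test:

        if [ -n "$VIMRUNTIME" ]; then
            echo "probably inside a Vim :shell subshell"
        fi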

  • cat contents of one file into another file

    - by Attila O.
    I have a large (binary) file that has some corruption near the beginning. Then, I have a second, smaller file that I obtain by starting to download the same file again, but interrupt after I have enough bytes to fix the original one. My question is, how do I simply overwrite the beginning of the large file with the contents of the second, smaller file? I could use cat, tail and head, but that would create a copy of the file. There must be a more efficient way. Oh yes, and I'm looking for a linux command-line solution, if that wasn't obvious. I'm using bash, but I have other shells if that helps.
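
    dd with conv=notrunc does exactly this in place: it overwrites bytes at the start of the output file and, crucially, does not truncate what follows. A sketch with placeholder file names:

        # overwrite the head of big.bin with all of fixed-head.bin, in place
        dd if=fixed-head.bin of=big.bin bs=4096 conv=notrunc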

  • Can't install Ubuntu 12 into VirtualBox (USB not recognized, ISO would not boot)

    - by wvxvw
    I'm trying out VirtualBox 4.1.8 and wanted to install Ubuntu 12 as a test. After installing VirtualBox (on Debian squeeze/sid), creating a virtual machine for Ubuntu and pointing it, under Settings > Storage > IDE Controller, to the ISO with the proper version of Ubuntu, I checked the "Live CD" option. I tried defining the IDE device as master/slave and primary/secondary, all to no effect; trying to boot the system, I get to a screen which says: FATAL: could not read from the boot medium! System halted. I've copied the same ISO to a USB stick, and I can boot from that stick (outside VirtualBox). I've looked at a couple of tutorials/walkthroughs, and there's nothing I can see that I've done wrong. So, how do I configure the VM to boot from the desired ISO? Below is the snapshot with the current settings (sorry, I don't know how to get them as text).
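
    The same attachment can be made from the command line, which takes the master/slave guesswork out of the GUI; a sketch where the VM name, controller name and ISO path are assumptions (the controller name must match whatever Settings > Storage shows, often "IDE" or "IDE Controller"):

        VBoxManage storageattach "Ubuntu12" \
            --storagectl "IDE Controller" \
            --port 1 --device 0 \
            --type dvddrive \
            --medium /path/to/ubuntu-12.iso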

  • Motherboard: Intel S5520HCR s1366 SSI EEB

    - by Crazy_Bash
    I'm building a storage server for online video streaming. I'm thinking about adding two SSD drives for the OS. The other 15 drives (12 SATA plus 3 more SSDs) I want to combine with aufs over XFS, with 4 Gb/s of Ethernet to the network. But I'm a little confused: the S5520HCR board supports 6x SATA/300 with RAID 0, 1, 10 (Intel ICH10R). Does that mean I can use SATA III HDDs? I'm planning on buying Seagate SV35-series drives (3.5", 3 TB, 64 MB cache, SATA III-600). Also, my chassis supports up to 16 SATA drives but the motherboard only 6; what kind of SATA controller should I use? And what's better in terms of performance, socket 1366 or socket 2011? My server so far: AIC RSC-3EG-80R-SA1S-2 3U; Motherboard: Intel S5520HCR s1366 SSI EEB; Kingston DDR3 8192Mb PC3-10600 1333MHz (KVR1333D3N9/8G); Seagate 3000GB 64MB 3.5" 7200rpm SATAIII (ST3000DM001); Kingston 480GB SSD 2.5" SATAIII; Intel E1G44HTBLK; Intel Xeon E5606 2133MHz/L3-8192Kb/QPI s1366 tray; SERVER ACC CARD SAS PCIE 16P HBA 9201-16I LSI00244 SGL LSI

  • Server OS: put it on a separate drive? Yes, no, or depends on the situation?

    - by captainentropy
    Hi, I would like opinions, or preferably facts, on whether it's OK to install a server's OS on the RAID array or not. I would predict that installation on separate drives is best, but I'm interested in the performance. The server in question will have 8 cores (2.4GHz each), 24GB RAM, and ~16TB of usable space of server-class drives in RAID10. There is also a subsystem of ~equivalent size for backup. I will be running CPU/memory-intensive applications on this server in addition to it being file storage for my work (research lab). If I install the OS (haven't decided which one; probably Ubuntu or Fedora or some other good Linux distro) on separate drives, will there be any performance problems if they aren't configured in RAID10? If it is better to have the OS on separate drives, should I go for 150GB VelociRaptors in RAID1 or smallish SSD drives in RAID1? Money is unfortunately a factor, as I think I'm close to maxing my budget as it is. Thanks!

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that represents a server which receives, over TCP, an image in bytes (of size at most 500 kB) and writes it to a file. It then applies a Sobel filter to the image and sends it over a TCP socket to the client side. I ran it on my laptop and it was very fast. But when I put it on an Amazon EC2 m1.large instance, I found it is very slow - around 10 times slower. It might be inefficiency in the code's algorithm, but in fact my code does nothing but receive the image (like any byte file), run the Sobel algorithm, and send it. I have the following questions: 1. Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2. 2. Even if the code is not that efficient, the server is ultimately handling a very low load (just one client); does the "inefficient" code justify such performance? 3. My laptop is only dual core... Why would the Amazon EC2 server have worse performance than my laptop? How is this explained? Excuse me for my ignorance.

  • How frequent are network partitions on cloud services?

    - by roja
    Much is made of the CAP trade-off for data storage, where conflicts can be introduced if there is a network partition. My question: is there any evidence that this is a problem that arises with any significant frequency in modern cloud IaaS services, e.g. EC2, Azure, Rackspace? Is it a problem which, despite being a theoretical roadblock in constructing idealised distributed systems, is in fact a non-issue for all practical concerns? Has anyone experienced a network partition within one of these systems (within a single data centre)? If so, would you be willing to share any details?

  • Rsync-like Windows backup tool

    - by Halfgaar
    I need to back up some Windows machines and have been unable to find the proper tool. What I need is a tool that does efficient copying of changed files to a Windows network location, like rsync does. In turn, the server will then back that up using rdiff-backup, a tool which does very clever incremental backups. Right now I'm using Windows 7's included backup feature, but I really don't get it; explaining why would be too off-topic, but it doesn't suffice (and seems buggy as well). I looked into Amanda, but as soon as it wanted to install MySQL, I aborted. I also tried DeltaCopy, but unfortunately I don't remember what the problem with that was... Any advice for an rsync-like tool that just does daily syncs to a network location?
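
    One built-in candidate worth naming: robocopy, which ships with Windows 7, does efficient changed-files-only mirroring to a network path. A sketch with hypothetical paths (note that /MIR also deletes destination files that have vanished from the source):

        REM /MIR mirrors the tree, /Z uses restartable copies,
        REM /R and /W bound the retry behaviour on locked files
        robocopy C:\data \\backupserver\share /MIR /Z /R:2 /W:5 /LOG:C:\robocopy.log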

  • Realtime file-level mirroring from local NTFS to network drive

    - by hurfdurf
    We have some data collection machines running WinXP. After a new file is written, we would like to immediately copy the new file to network storage (a NetApp CIFS share) automagically. We need realtime or near realtime copies generated (copy upon filehandle close would be fine -- these are not long-running system logs). Two commercial applications I've found so far are MirrorFile and IBM's Tivoli CDP. Are there any reliable open source programs or simple ways to get Shadow Copy to do something similar? Bonus points if it runs as a service.
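
    If a commercial agent turns out to be overkill, robocopy's monitor mode is a hedged near-realtime option; it rescans on change rather than hooking file-close, so it is "near" realtime at best, and it would need a service wrapper to run as a service. Paths are hypothetical:

        REM /MON:1 re-runs after at least 1 change, /MOT:1 at most once per minute
        robocopy D:\collected \\netapp\cifs-share /E /MON:1 /MOT:1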

  • Statsd, Graphite and graphs

    - by w00t
    I've set up Graphite and statsd and both are running well. I'm using the example-client.py from graphite/examples to measure load values and that works fine. I started doing tests with statsd, and at first it seemed OK because it generated some graphs, but now it doesn't look quite right. First, this is my storage-schemas.conf: pattern = .* retentions = 10:2160,60:10080,600:262974 I'm using this command to send data to statsd: echo 'ssh.invalid_users:1|c' | nc -w 1 -u localhost 8126 It executes; I click Update Graph in the Graphite web interface, it draws a line, I hit Update again and the line disappears. If I execute the previous command 5 times, the graph line will reach 2 and it will actually save it. Again, running the same command two times, the graph line reaches 2 and disappears. I can't find what I have misconfigured. The intended use is this: tail -n 0 -f /var/log/auth.log | grep --line-buffered "Invalid user" | while read line; do echo "ssh.invalid_users:1|c" | nc -w 1 -u localhost 8126; done
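
    One assumption-laden thing to double-check first: a stock statsd listens for metrics on UDP 8125, while 8126 is its admin/management interface, so if this install follows the defaults, counters sent to 8126 would never reach a flush. It is also worth confirming that the first retention bucket (10 seconds here) matches statsd's flushInterval (10000 ms by default):

        # same counter, sent to the default metrics port rather than the admin port
        echo "ssh.invalid_users:1|c" | nc -w 1 -u localhost 8125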

  • How to set default permissions for automounted FAT drives in Ubuntu 9.10?

    - by piman
    I've got many FAT32 drives that I'd like to mount in Ubuntu such that they have permission mode 700 for directories and 600 for all other files. By default, they get 755 for all files, which is not particularly useful, since almost no non-directories should be executable, and it screws up version-control repos hosted on the drives. "Back in the day" I would have had the drives listed in /etc/fstab with the umask/dmask I want, and there was no such thing as a default. These days, drives automount under their volume names, which is great, except now I have no idea how to set the default. I have tried changing the /system/storage/default_options/vfat/mount_options gconf key with no apparent effect. It was 077 initially, but the mounted drive reflected a default of 022; changing it and re-inserting the drives resulted in the files still having permission bits of 755.
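
    If the gconf route stays dead, pinning the drives in /etc/fstab sidesteps the automounter entirely; a sketch where the label, mountpoint and uid are assumptions (dmask=077 gives directories 700, fmask=177 gives files 600):

        # /etc/fstab
        LABEL=USBDRIVE  /media/usbdrive  vfat  rw,uid=1000,gid=1000,dmask=077,fmask=177  0  0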

  • Samba domain trust errors on a specific interface

    - by John K
    We have a Windows domain that also has RHEL member servers in it. All the servers have a primary network connection to the LAN, but some servers also have private dedicated links to one of our RHEL servers, which serves as a head to our SAN storage. This particular server is running Samba 3.5.15 in domain authentication mode. Users can access shares on this server without a problem over the LAN connection from our Windows servers, but if a user tries to access the shares over a private link (i.e. a 192.168.1.2 address to the RHEL server), they get the error "The trust relationship between this workstation and the primary domain failed."
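
    One hedged thing to rule out is whether smbd is binding and registering on the private links at all; making the interfaces explicit in smb.conf sometimes matters in multi-homed setups (the addresses here are assumptions based on the question, and this may well not be the whole story):

        [global]
            # the LAN NIC plus the private crossover link from the question
            interfaces = eth0 192.168.1.2/24
            bind interfaces only = yes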

  • best way to record local modifications to an application's configuration files

    - by Menelaos Perdikeas
    I often install applications on Linux which don't come in package form; rather, one just downloads a tarball, unpacks it, and runs the app out of the exploded folder. To adjust the application to my environment I need to modify the default configuration files and perhaps add the odd script of my own, and I would like a way to record all these modifications automatically so I can apply them to another environment. Clearly, the modifications cannot be reproduced verbatim, as things like IP addresses or usernames need to change from system to system; still, an exhaustive record of what was changed and added would be useful. My solution is to use a pattern involving git: basically, after I explode the tarball I do a git init and an initial commit, and then I can save to a file the output of git diff, plus a cat of all files appearing as new in git status -s. But I am sure there are more efficient ways.
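
    For concreteness, the pattern described above as a sketch (file names are arbitrary):

        git init && git add -A && git commit -m 'pristine upstream tarball'
        # ... adjust configs, add local scripts ...
        git diff > local-changes.patch                                 # edits to stock files
        git status -s | awk '$1 == "??" {print $2}' > added-files.txt  # files added locally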

  • create symlink to another machine

    - by microchasm
    Hi, I have 2 machines, both running CentOS. Box1 is the web server with Apache and PHP. Box2 is MySQL and file storage. The files will only be accessible from Box1, within the webapp. I'd like to somehow create a symlink or some such on Box1 to a folder on Box2 where uploaded files can be stored and retrieved. With security in mind, what would be the best way to go about linking these 2 boxes up in a way that is transparent to Apache? NB: the boxes are connected directly to each other via a crossover cable; no LAN access to Box2. Much thanks!
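
    Given the dedicated crossover cable, NFS exported only to that link is the usual transparent-to-Apache approach; a sketch with assumed addresses (box1 = 192.168.10.1, box2 = 192.168.10.2 on the crossover) and assumed paths:

        # on box2: /etc/exports -- export the upload folder to box1's address only
        #   /srv/uploads  192.168.10.1(rw,sync,no_subtree_check)
        # then reload the export table:  exportfs -ra

        # on box1:
        mkdir -p /mnt/uploads
        mount -t nfs 192.168.10.2:/srv/uploads /mnt/uploads
        ln -s /mnt/uploads /var/www/webapp/uploads   # Apache just sees a local path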

  • When HDD becomes full, how to create a symbolic link to the data store on another disk?

    - by Brij Raj Singh
    I have a Linux Ubuntu machine with an X GB hard disk. There is a folder, say, /opt/software/data. The disk /dev/sda1 is almost full, and I have attached another disk, /dev/sda2, which is mounted at /hdd2. Is it possible for me to link /opt/software/data to /hdd2/software/data so that every file gets stored in /hdd2/software/data but can still be referenced through /opt/software/data? I can't do a reinstall of the software that creates this data to change its default storage location.
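
    The straightforward route is to move the data and leave a symlink behind at the old path; a sketch, with the software stopped first so nothing writes mid-move:

        mkdir -p /hdd2/software
        mv /opt/software/data /hdd2/software/data
        ln -s /hdd2/software/data /opt/software/data  # old path now points at the new disk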
