Search Results

Search found 5809 results on 233 pages for 'isolated storage'.

  • Does Intel Smart Response provide any statistics on the cache usage?

    - by Tom Seddon
    I've set up my Z68-based Core i7 PC with a 60GB SSD dedicated as a Smart Response cache drive. Is there any way I can get any statistics out of it? It would be nice to have some information on how much cache space is actually being used, maybe how much of it was actually accessed recently, and how many reads in general are coming from the SSD rather than from the mechanical disk. These statistics might help to quickly provide some evidence for or against the use of Smart Response, without my having to reinstall Windows on the SSD (etc.) to find out. The Windows ReadyBoost feature has some performance counters you can access via the Windows 7 perfmon tool, for example, which is the kind of thing I'm hoping is somehow available. Smart Response provides no perfmon counters, though, and the Intel Rapid Storage Utility tells you pretty much nothing except that Smart Response is switched on.
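
    For comparison, here is how the ReadyBoost counters can be pulled from the command line; a minimal sketch, assuming the counter set is named "ReadyBoost Cache" (the counter names below are examples from memory - typeperf -q prints the real ones):

        :: list every counter the ReadyBoost Cache object exposes
        typeperf -q "ReadyBoost Cache"
        :: sample two of them every 5 seconds
        typeperf "\ReadyBoost Cache\Bytes cached" "\ReadyBoost Cache\Cache read bytes/sec" -si 5

    Nothing comparable exists for Smart Response, which is exactly the gap the question describes.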

  • What applications is NTFS preferable for? [closed]

    - by javano
    When building a new server I prefer to deploy Linux as my OS of choice. This gives me the luxury of choosing from various file systems (amongst other aspects), and I will choose a different FS for different servers depending on what they will be used for. With Windows OS variants you are effectively limited to NTFS. Have any benchmarks or tests been performed that show NTFS to be a preferable choice for a given scenario or application (apart from just "running Windows", because it has to be on NTFS)? To clarify what I mean: I might use filesystem X for large transactional storage volumes, but filesystem Y for front-end web app servers. If I had a multi-platform application to deploy that (let's pretend) was available on Mac/Win/Lin, is there any type of application or scenario that would benefit from being on NTFS?

  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysis of a single user upload takes about 3 seconds at 100% CPU, which caps our capacity at roughly one request every 3 seconds. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, all connecting to a common DB. The problem is that the analysis writes its output files to disk: a load balancer would increase capacity, but the files wouldn't be visible across servers, so subsequent client requests may fail. We are hosted on Rackspace; is there a way to configure some sort of "common" storage for all servers without having to rewrite our file-persistence code? The current code relies on simple fopen() calls and the like. What are our options?
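
    One low-touch option (not something Rackspace provides out of the box, just a sketch) is to keep the fopen-based code unchanged and back the upload directory with a shared NFS export served by one of the nodes or a dedicated Cloud Server; the hostname and paths below are made up:

        # on each web server behind the balancer
        sudo apt-get install nfs-common
        sudo mkdir -p /var/www/uploads
        sudo mount -t nfs storage01.internal:/srv/uploads /var/www/uploads
        # persist across reboots
        echo 'storage01.internal:/srv/uploads /var/www/uploads nfs defaults 0 0' | sudo tee -a /etc/fstab

    Existing fopen() calls then keep working, since the share looks like a local directory on every node.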

  • Is it normal for a SAS drive to have a few bad blocks, or should I replace my drive ASAP?

    - by Nate
    I have a drive (part of a RAID 1 mirror) that has two bad blocks. Adaptec Storage Manager e-mailed me when it detected the blocks. It shows 4 medium errors for that drive, but its state is still "optimal". This is my first time using Adaptec RAID controllers, and I don't know whether an occasional bad block is normal or whether I should immediately replace the drive. Update: the drive failed later the same day! The disk subsystem is an Adaptec 6405 with ZMM and (2) Seagate near-line SAS drives (ST31000424SS). The other drive hasn't reported any bad blocks yet; I am running a consistency check.
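
    For checking a SAS drive's grown defect list by hand, a sketch (assumes a Linux box where the drives are visible as /dev/sgN; the device name and controller number are examples, and smartctl plus Adaptec's arcconf must be installed):

        # grown defects accumulate over the drive's life; factory defects are normal
        sudo smartctl -a -d scsi /dev/sg1 | grep -i defect
        # Adaptec's CLI view of physical device state on controller 1
        sudo arcconf getconfig 1 pd

    A steadily growing defect list is a classic precursor to outright failure, which matches the update above.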

  • Can't install Ubuntu 12 into VirtualBox (USB not recognized, ISO would not boot)

    - by wvxvw
    I'm trying out VirtualBox 4.1.8 and wanted to install Ubuntu 12 as a test. After installing VirtualBox (on Debian squeeze/sid), I created a virtual machine for Ubuntu, pointed it (Settings > Storage > IDE Controller) to the ISO with the proper version of Ubuntu, and checked the "Live CD" option. I tried defining the IDE device as master/slave and primary/secondary, all to no effect; trying to boot the system, I get to a screen which says: FATAL: could not read from the boot medium! System halted. I've copied the same ISO to a USB stick, and I can boot from that stick (outside VirtualBox). I've looked at a couple of tutorials/walk-throughs and there's nothing I can see that I've done wrong. So, how do I configure the VM to boot from the desired ISO? Below is the snapshot with the current settings (sorry, I don't know how to get them as text).
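
    Attaching the ISO from the command line sometimes succeeds where the GUI settings are ambiguous. A sketch, with the VM name, controller name, and ISO path as placeholders:

        VBoxManage storageattach "Ubuntu12" --storagectl "IDE" \
            --port 1 --device 0 --type dvddrive --medium ~/isos/ubuntu-12.04-desktop-i386.iso
        # make sure the virtual DVD drive comes first in the boot order
        VBoxManage modifyvm "Ubuntu12" --boot1 dvd --boot2 disk

    Since a corrupt ISO is a common cause of that FATAL message, verifying the file's checksum against the one published on releases.ubuntu.com is also worth a try first.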

  • USB Device With Embedded Fileserver

    - by Richard Martinez
    I'm attempting to access logs from a proprietary hardware box with no reasonable hope of modifying the software. There is a process on the device that dumps log files to a flash drive on the USB port after entering a code sequence. Currently, analysis of the logs requires physical presence at the device, manual entry of the code sequence, removal of the USB device, and insertion of the USB device into a normal Linux box. I'm hoping there is some sort of device that can act as a USB mass storage device but simultaneously make its contents available as a network file share (wired preferred). Does such a device currently exist? A combo hardware/software solution would also work.
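
    Rather than a single off-the-shelf box, any small Linux board with a USB device (gadget) port can play this role via the mainline g_mass_storage module. A rough sketch; the board choice, sizes, paths, and the sharing step are all assumptions:

        # create and format a backing image the appliance will see as a flash drive
        dd if=/dev/zero of=/srv/usb.img bs=1M count=2048
        mkfs.vfat /srv/usb.img
        # expose it on the gadget port as removable mass storage
        modprobe g_mass_storage file=/srv/usb.img removable=1
        # after a dump completes, loop-mount read-only and export over the LAN
        mount -o loop,ro /srv/usb.img /mnt/usbview   # then share /mnt/usbview via Samba or NFS

    One caveat: the image shouldn't be mounted read-write locally while the appliance is writing to it, so the share is best refreshed between dumps.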

  • Emulate a USB port as a USB flash drive?

    - by Wilco
    Does anyone know of any software that can emulate a USB flash drive through an available USB port in OS X? Perhaps some way to map a directory to a USB port that could then be connected to another device that supports reading USB storage devices? I'd love to connect my laptop to my car's USB port and access files as if it were a USB drive. I know about the target disk mode with firewire (not sure if this is also supported over USB), but I was hoping for something that doesn't require booting outside of the OS (I want to retain use of the machine). Any ideas?

  • Shuttle FB51 mobo does not boot with external USB drive attached [closed]

    - by user127236
    I am repurposing an old Alienware desktop as a home media server. The PC is based on the Shuttle FB51 motherboard. The BIOS is a Phoenix Version 6.00 PG, release date 12/16/2002. I have loaded Ubuntu 12.04 LTS on the internal hard drive. I am using a Western Digital WD Elements 1.5 TB USB 2.0 Desktop External Hard Drive for media storage. When the external drive is plugged in and the PC is powered on, it freezes very early in the BIOS self-test, even before it begins the memory test. If I unplug the drive, the self-test proceeds without further problems. I can plug the USB drive back in when the self-test is complete, and Ubuntu will boot and find the external drive normally. I've tried several changes to the BIOS setup without finding a cure for the boot issue. Any assistance gratefully accepted. JGB

  • Realtime file-level mirroring from local NTFS to network drive

    - by hurfdurf
    We have some data collection machines running WinXP. After a new file is written, we would like to immediately copy the new file to network storage (a NetApp CIFS share) automagically. We need realtime or near realtime copies generated (copy upon filehandle close would be fine -- these are not long-running system logs). Two commercial applications I've found so far are MirrorFile and IBM's Tivoli CDP. Are there any reliable open source programs or simple ways to get Shadow Copy to do something similar? Bonus points if it runs as a service.
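
    On the Windows side, robocopy's monitor mode gets close to this without third-party software (on XP it ships with the Resource Kit rather than being built in). A sketch with example paths:

        :: re-mirror whenever at least 1 change is seen, at most once a minute
        robocopy C:\collected \\netapp01\share /E /MON:1 /MOT:1 /R:2 /W:5

    It can be wrapped as a service (e.g. with srvany) or a startup scheduled task, at the cost of up-to-a-minute latency rather than copy-on-close.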

  • How frequent are network partitions on cloud services?

    - by roja
    Much is made of the CAP trade-off for data storage, where conflicts can be introduced if there is a network partition. My question: is there any evidence that this is a problem that arises with any significant frequency in modern cloud IaaS services, e.g. EC2, Azure, Rackspace? Is it a problem which, despite being a theoretical roadblock to constructing idealised distributed systems, is in fact a non-issue for all practical purposes? Has anyone experienced a network partition within one of these systems (within a single data centre)? If so, would you be willing to share any details?

  • Statsd, Graphite and graphs

    - by w00t
    I've set up Graphite and statsd and both are running well. I'm using the example-client.py from graphite/examples to measure load values and it's OK. I started doing tests with statsd, and at first it seemed fine because it generated some graphs, but now it doesn't look quite right. First, this is my storage-schema.conf:

        pattern = .*
        retentions = 10:2160,60:10080,600:262974

    I'm using this command to send data to statsd:

        echo 'ssh.invalid_users:1|c' | nc -w 1 -u localhost 8126

    It executes; I click Update Graph in the Graphite web interface and it generates a line, hit Update again and the line disappears. If I execute the previous command 5 times, the graph line reaches 2 and is actually saved. Running the same command two more times, the graph line reaches 2 and disappears again. I can't find what I have misconfigured. The intended use is this:

        tail -n 0 -f /var/log/auth.log | grep --line-buffered "Invalid user" | \
            while read line; do echo "ssh.invalid_users:1|c" | nc -w 1 -u localhost 8126; done
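
    One frequent cause of counts that vanish after a minute is Whisper's downsampling rather than statsd itself: with the schema above, 10-second points roll into 60-second buckets, and the defaults (average, xFilesFactor 0.5) discard buckets that are mostly empty. A storage-aggregation.conf sketch for counter-style metrics (the pattern is an example; existing .wsp files must be recreated, or adjusted with whisper-resize.py, before it applies):

        [ssh_counters]
        pattern = ^ssh\.
        xFilesFactor = 0
        aggregationMethod = sum

    It is also worth double-checking the port: stock statsd listens for data on UDP 8125 and uses 8126 as its management port, so the nc commands above assume a non-default config.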

  • How do I move *all* the installed software on my EC2 instance to EBS?

    - by drhyde
    I got started with Eric Hammond's great article over at http://aws.amazon.com/articles/1663, where he goes through installing MySQL and configuring it to use EBS. I got that going. I also have a lot of other stuff installed on that EC2 instance: Rails, a bunch of gems, Nginx+Passenger and so on. My understanding is that unless I explicitly configure it to use EBS, all of this sits on the EC2 instance's ephemeral storage - right? How can I move all the software I have installed to EBS, and better yet, how can I set things up so that any new gems etc. I install going forward also land on the EBS volume?
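
    The general pattern from that article extends to any directory: copy it onto the EBS volume, then point the old path at the new location. A sketch with made-up device and paths (stop the affected services first; the gems directory varies by distro):

        mkfs -t ext3 /dev/sdh                  # once, on the fresh EBS volume
        mkdir -p /vol && mount /dev/sdh /vol
        rsync -a /usr/lib/ruby/gems/ /vol/gems/
        mv /usr/lib/ruby/gems /usr/lib/ruby/gems.orig
        ln -s /vol/gems /usr/lib/ruby/gems

    For future installs, anything whose install prefix can be pointed (or symlinked) under /vol lands on EBS automatically; the root filesystem itself stays ephemeral on instance-store AMIs.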

  • Server OS: put it on a separate drive? Yes, no, or depends on the situation?

    - by captainentropy
    Hi, I would like opinions, or preferably facts, on whether it's OK to install a server's OS on the RAID array or not. I would predict that installation on separate drives is best, but I'm interested in the performance. The server in question will have 8 cores (2.4GHz each), 24GB RAM, and ~16TB of usable space of server-class drives in RAID10. There is also a subsystem of roughly equivalent size for backup. I will be running CPU/memory-intensive applications on this server in addition to it being file storage for my work (research lab). If I install the OS (haven't decided which one; probably Ubuntu, Fedora, or some other good Linux distro) on separate drives, will there be any performance problems if they aren't configured in RAID10? If it is better to have the OS on separate drives, should I go for 150GB VelociRaptors in RAID1 or smallish SSDs in RAID1? Money is unfortunately a factor, as I think I'm close to maxing my budget as it is. Thanks!

  • Best Functional Approach

    - by dbyrne
    I have some mutable Scala code that I am trying to rewrite in a more functional style. It is a fairly intricate piece of code, so I am trying to refactor it in pieces. My first thought was this:

        def iterate(count: Int, d: MyComplexType): MyComplexType = {
          // generate next value n from d
          // process n, causing some side effects
          iterate(count - 1, n)
        }

    This didn't seem functional at all to me, since I still have side effects mixed throughout my code. My second thought was this:

        def generateStream(d: MyComplexType): Stream[MyComplexType] = {
          // generate next value n from d
          Stream.cons(n, generateStream(n))
        }

        for (n <- generateStream(initialValue).take(2000000)) {
          // process n, causing some side effects
        }

    This seemed like a better solution, because at least I've isolated my functional value-generation code from the mutable value-processing code. However, it is much less memory-efficient, because the stream retains every generated value while it is being traversed, and I don't really need to store them. This leaves me with 3 choices: (1) write a tail-recursive function, bite the bullet and refactor the value-processing code; (2) use a lazy list - this is not a memory-sensitive app (although it is performance-sensitive); (3) come up with a new approach. I guess what I really want is a lazily evaluated sequence where I can discard the values after I've processed them. Any suggestions?

  • Linux RAID0 - relocating member disk

    - by qdot
    I've got an issue I would rather handle with the array online - I am using RAID0 for temporary video storage - data that is low-cost to restore, but that is used frequently. The software array looks like this:

        md1 : active raid0 sdb1[2] sdc1[3] sdd1[0] sde1[1]
              1953487616 blocks 64k chunks

    I have another partition (sda1) in this system that I want to use to replace sdc1. (The drives are of varying age, and sdc1 is definitely the slowest one, limiting the entire array's sequential read performance to only 300MB/s.) Is there a way to migrate the data from sdc1 to sda1 while the array is still online?
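
    mdadm has no online member replacement for RAID0 (grow/replace support targets the redundant levels), so strictly speaking the move needs a short outage; what can be avoided is re-creating the array and restoring. A sketch, assuming sda1 is at least as large as sdc1:

        umount /dev/md1
        mdadm --stop /dev/md1
        # clone the member, md superblock included
        dd if=/dev/sdc1 of=/dev/sda1 bs=1M conv=noerror
        # reassemble with the new member in place of the old
        mdadm --assemble /dev/md1 /dev/sdb1 /dev/sda1 /dev/sdd1 /dev/sde1

    Since the superblock travels with the copy, the array should assemble with sda1 recognized as the former sdc1; keep sdc1 disconnected afterwards so two identical members are never visible at once.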

  • Motherboard: Intel S5520HCR s1366 SSI EEB

    - by Crazy_Bash
    I'm building a storage server for online video streaming, and I'm a little confused. I plan to add two SSDs for the OS; the other 15 drives (12 SATA & 3 SSD) I want to combine with aufs on XFS, with a 4Gb/sec Ethernet network. The S5520HCR board supports 6 SATA/300 ports (Intel ICH10R; RAID 0, 1, 10). Does that mean I can use SATA III HDDs? I'm planning on buying Seagate SV35-series drives (3.5", 3TB, 64MB cache, SATA III-600). Also, my chassis supports up to 16 SATA drives but the motherboard only 6 - what kind of SATA controller should I use? And what's better in terms of performance, socket 1366 or 2011? My server so far: AIC RSC-3EG-80R-SA1S-2 3U chassis; Intel S5520HCR s1366 SSI EEB motherboard; Kingston DDR3 8192MB PC3-10600 1333MHz (KVR1333D3N9/8G); Seagate 3000GB 64MB 3.5" 7200rpm SATA III (ST3000DM001); Kingston 480GB SSD 2.5" SATA III; Intel E1G44HTBLK; Intel Xeon E5606 2133MHz/L3-8192Kb/QPI s1366 tray; LSI SAS 9201-16i 16-port PCIe HBA (LSI00244).

  • Samba domain trust errors on a specific interface

    - by John K
    We have a Windows domain that also contains RHEL member servers. All the servers have a primary network connection to the LAN, but some servers also have private dedicated links to one of our RHEL servers, which serves as a head for our SAN storage. This particular server runs Samba 3.5.15 in domain authentication mode. Users can access shares on this server without a problem over the LAN connection from our Windows servers, but if a user tries to access the shares over a private link (i.e. a 192.168.1.2 address on the RHEL server), they get the error "The trust relationship between this workstation and the primary domain failed."

  • Weird behavior of matching array keys after json_decode()

    - by arnorhs
    I've got some very weird behavior in my PHP code. I don't know if this is actually a good SO question, since it almost looks like a bug in PHP. I had this problem in a project of mine and isolated it:

        // JSON object that will be converted into an array
        $json = '{"5":"88"}';
        $jsonvar = (array) json_decode($json); // notice: casting to an array

        // Displaying the array:
        var_dump($jsonvar);

        // Testing if the key is there
        var_dump(isset($jsonvar["5"]));
        var_dump(isset($jsonvar[5]));

    That code outputs the following:

        array(1) {
          ["5"]=>
          string(2) "88"
        }
        bool(false)
        bool(false)

    The big problem: both of those tests should produce bool(true). If you create the same array using regular PHP arrays, this is what you'll see:

        // Let's create a similar PHP array in a regular manner:
        $phparr = array("5" => "88");

        // Displaying the array:
        var_dump($phparr);

        // Testing if the key is there
        var_dump(isset($phparr["5"]));
        var_dump(isset($phparr[5]));

    The output of that:

        array(1) {
          [5]=>
          string(2) "88"
        }
        bool(true)
        bool(true)

    So this doesn't really make sense. I've tested it on two different installations of PHP/Apache. You can copy-paste the code into a PHP file yourself to test it. It must have something to do with the casting from an object to an array.

  • Heroku: Postgres type operator error after migrating DB from MySQL

    - by sevennineteen
    This is a follow-up to a question I'd asked earlier, which phrased this as more of a programming problem than a database problem: http://stackoverflow.com/questions/2935985/postgres-error-with-sinatra-haml-datamapper-on-heroku. I believe the problem has been isolated to the storage of the ID column in Heroku's Postgres database after running db:push. In short, my app runs properly on my original MySQL database, but throws Postgres errors on Heroku when executing any query on the ID column, which seems to have been stored in Postgres as TEXT even though it is stored as INT in MySQL. My question is why the ID column is being created as TEXT in Postgres during the data transfer to Heroku, and whether there's any way for me to prevent this. Here's the output from a heroku console session which demonstrates the issue:

        Ruby console for myapp.heroku.com
        >> Post.first.title
        => "Welcome to First!"
        >> Post.first.title.class
        => String
        >> Post.first.id
        => 1
        >> Post.first.id.class
        => Fixnum
        >> Post[1]
        PostgresError: ERROR:  operator does not exist: text = integer
        LINE 1: ..., "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER...
        HINT:  No operator matches the given name and argument type(s). You might need to add explicit type casts.
        Query:  SELECT "id", "name", "email", "url", "title", "created_at" FROM "posts" WHERE ("id" = 1) ORDER BY "id" LIMIT 1

    Thanks!

  • How to set default permissions for automounted FAT drives in Ubuntu 9.10?

    - by piman
    I've got many FAT32 drives that I'd like to mount in Ubuntu such that directories get permission mode 700 and all other files get 600. By default, everything gets 755, which is not particularly useful, since almost no non-directories should be executable, and it screws up version-control repos hosted on the drives. "Back in the day" I would have listed the drives in /etc/fstab with the umask/dmask I wanted, and there was no such thing as a default. These days, drives automount under their volume names - which is great, except now I have no idea how to set the default. I have tried changing the /system/storage/default_options/vfat/mount_options gconf key with no apparent effect. It was 077 initially, but the mounted drive reflected a default of 022; changing it and re-inserting the drives resulted in the files still having permission bits of 755.
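
    If the gconf route stays ineffective, a static /etc/fstab line still overrides automounting for a known drive and pins exactly the modes asked for; the UUID below is an example (blkid prints the real one):

        # files 600 (fmask=177), directories 700 (dmask=077)
        UUID=4A1C-9E0B  /media/flash  vfat  user,noauto,uid=1000,fmask=177,dmask=077  0  0

    vfat's fmask/dmask split expresses what a single umask cannot: one mask for files, another for directories.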

  • When an HDD becomes full, how do I create a symbolic link to a data store on another disk?

    - by Brij Raj Singh
    I have a Linux Ubuntu machine with an X GB hard disk, containing a folder, say, /opt/software/data. The disk /dev/sda1 is almost full, and I have attached another disk at /dev/sda2, which is mounted at /hdd2. Is it possible for me to link /opt/software/data with /hdd2/software/data so that every file gets stored in /hdd2/software/data but may be referred to through /opt/software/data? I can't reinstall the software that creates this data to change its default storage location.
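
    A sketch of the symlink approach, using the paths from the question (stop the software first so nothing writes mid-move):

        sudo mkdir -p /hdd2/software
        sudo mv /opt/software/data /hdd2/software/data
        sudo ln -s /hdd2/software/data /opt/software/data

    If the software refuses to follow symlinks, a bind mount presents the new location as a real directory instead:

        sudo mount --bind /hdd2/software/data /opt/software/data
        # persist it across reboots:
        echo '/hdd2/software/data /opt/software/data none bind 0 0' | sudo tee -a /etc/fstab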

  • Why does Mac OS X sometimes complain that a copy failed because a file is in use?

    - by orj
    Recently I've been copying files from DVDs to network storage on my Mac running Leopard 10.5.7. I'm just dragging and dropping in Finder to perform the copy. Occasionally the copy will fail with a dialog complaining that a file is in use. If I repeat the copy generally it completes successfully. I could understand this being a problem if one was trying to move a file and it was open by another app. But none of these files are open in other apps. I just pop the DVD in, drag and drop the files to my NAS's network share and sometimes it fails with the "file in use" error. This is very annoying. Anyone have any ideas?

  • What are the best tricks for learning how to -think- in Objective-C?

    - by Braintapper
    Before I get flamed out for not checking previous questions: I have read most of the tutorials, and have Hillegass' book as well as O'Reilly's book on it. I'm not asking for tips on Cocoa or what IDE to use. Here's where my issue lies: my 'mental muscle memory' is making it hard for me to read Objective-C code. I have no problems at all reading Java and C code and understanding what's going on. Maybe I'm getting too old to learn a new syntax, but it's a struggle shifting mental gears and looking at Objective-C code and just "getting it" (I thought it might be an isolated case, but other friends of mine who are seasoned devs have said the same thing). Are there any tricks that non-Objective-C programmers who now know Objective-C used to help process the syntactical differences when learning it? For some reason, I get dyslexic when reading Objective-C code. Maybe I'm not meant to be able to learn it (and that's OK too). I was hoping/wondering if there might be others who have had the same experience.

  • ESXi with iSCSI SAN slows down with many VMs running

    - by varesa
    I have a server running ESXi 5 with iSCSI-attached network storage (4x1TB RAID-Z on FreeNAS). The two machines are connected to each other over gigabit Ethernet, with a ProCurve switch in between. After a while, if I have many (4-5 or more) VMs running, they start to become unresponsive (long delays before anything happens). We are trying to find the reason behind this. Today we looked at esxtop and found that the DAVG of that iSCSI LUN stays at 70-80; I've read that anything over 30 is critical! What could be causing those high response times?

  • How do I USB-tether my Cyanogen-modded G1's internet connection to my Toshiba Tecra 8000 running Xubuntu?

    - by atticus
    I have USB tethering enabled on my phone, and it works fine with Vista. When I plug the phone into my Tecra 8000 laptop running Xubuntu, dmesg shows: "usb 1-1: new full speed USB device using uhci_hcd and address 8". I can see that the OS has detected it as a storage device, but I can't get it to function correctly as a network device. /dev/usb* shows no usb0, but it does show /dev/usbdev1.1_ep00, /dev/usbdev1.1_ep81, /dev/usbdev1.8_ep00 ... usbdev1.8_ep83. I could just use the wireless tether app for Android, but I can't get my Netgear WG511 v2 (made in China) wireless card to work in this laptop either - but that's another post for later.
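
    The storage-only detection suggests the tethering network driver never bound. Assuming the phone presents an RNDIS interface, as most Android tethering builds do, a sketch worth trying on an older kernel (module names vary by kernel version):

        sudo modprobe rndis_host        # cdc_ether or usbnet on some kernels
        dmesg | tail                    # look for a usb0 interface appearing
        sudo ifconfig usb0 up
        sudo dhclient usb0              # request an address once the interface exists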
