Search Results

Search found 5416 results on 217 pages for 'storage'.


  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
    Basic setup of Azure storage for local development and production. This is, in a sense, a completion of the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ that adds a practical example I believe is commonly needed: uploading and presenting an image from a user. First we set up local storage, then we configure it to work in a web role. Steps: 1. Configure the connection string locally. 2. Configure the model, controllers and Razor views.

    1. Set up the connection string
    1.1 Right-click your web role and choose “Properties”.
    1.2 Click Settings.
    1.3 Add setting.
    1.4 Name your setting. This will be the name of the connection string.
    1.5 Click the ellipsis to the right (the ellipsis appears when you select the value field).
    1.6 In the window that appears, select “Windows Azure storage emulator” and click OK.

    Now we have a connection string to use. To be able to use it we need to make sure we have the Windows Azure tools for storage.
    2.1 Click Tools -> Library Package Manager -> Manage NuGet Packages for Solution.
    2.2 This is what it looks like after the package has been added.

    Now on to what the code should look like.
    3.1 First we need a view which collects the images to upload, here Index.cshtml:

        @model List<string>
        @{
            ViewBag.Title = "Index";
        }
        <h2>Index</h2>
        <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data">
            <label for="file1">Filename:</label>
            <input type="file" name="file" id="file1" /><br />
            <label for="file2">Filename:</label>
            <input type="file" name="file" id="file2" /><br />
            <label for="file3">Filename:</label>
            <input type="file" name="file" id="file3" /><br />
            <label for="file4">Filename:</label>
            <input type="file" name="file" id="file4" /><br />
            <input type="submit" value="Submit" />
        </form>
        @foreach (var item in Model) {
            <img src="@item" alt="Alternate text" />
        }

    3.2 We need a controller to receive the post. Notice the “containername” string I send to the BlobHandler: I use it as a folder for each user's pictures. If that is not a requirement you could just call it “container”, or any other lowercase name, directly when creating the container.

        public ActionResult Upload(IEnumerable<HttpPostedFileBase> file)
        {
            BlobHandler bh = new BlobHandler("containername");
            bh.Upload(file);
            var blobUris = bh.GetBlobs();
            return RedirectToAction("Index", blobUris);
        }

    3.3 The handler model. I'll let the comments speak for themselves:

        public class BlobHandler
        {
            // Retrieve storage account from connection string.
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
                CloudConfigurationManager.GetSetting("StorageConnectionString"));

            private string imageDirecoryUrl;

            /// <summary>
            /// Receives the user's id for where the pictures are stored and
            /// creates a blob container with that name if it does not exist.
            /// </summary>
            public BlobHandler(string imageDirecoryUrl)
            {
                this.imageDirecoryUrl = imageDirecoryUrl;

                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve a reference to a container.
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                // Create the container if it doesn't already exist.
                container.CreateIfNotExists();

                // Make it available to everyone.
                container.SetPermissions(
                    new BlobContainerPermissions
                    {
                        PublicAccess = BlobContainerPublicAccessType.Blob
                    });
            }

            public void Upload(IEnumerable<HttpPostedFileBase> file)
            {
                // Create the blob client and get the container reference.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                if (file != null)
                {
                    foreach (var f in file)
                    {
                        if (f != null)
                        {
                            CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName);
                            blockBlob.UploadFromStream(f.InputStream);
                        }
                    }
                }
            }

            public List<string> GetBlobs()
            {
                // Create the blob client and get the container reference.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                List<string> blobs = new List<string>();

                // Loop over blobs within the container and collect the URI of each.
                foreach (var blobItem in container.ListBlobs())
                    blobs.Add(blobItem.Uri.ToString());

                return blobs;
            }
        }

    3.4 When the files have been uploaded we present them to our user on the index page. Pretty straightforward: in this example we only present the images by sending the URIs to the view. A better way would be to wrap them in a view model containing URI, metadata, alternate text and other relevant information, but for this example this is all we need.

    4. Now press F5 in your solution to try it out. You can see the storage emulator UI here: [screenshot]
    4.1 If you get any exceptions or errors, I suggest first checking that the service is running correctly. I had problems with this; they seemed related to the installation, and a reboot fixed them.

    5. Set up cloud storage. To do this we need to add configuration for the cloud, just as we did for local development in step one.
    5.1 We need our keys to do this. Go to the Windows Azure management portal, select the storage icon to the right and click “Manage keys”. (Image from a different blog post, though.)
    5.2 Do as in step 1, but replace step 1.6 with:
    1.6 Choose “Manually entered credentials”. Enter your account name.
    1.7 Paste your account key from step 5.1 and click OK.
    5.3 Save, publish and run!

    Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any problems. Our consultancy agency also provides services in the Nordic region if you would like further support.
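    The Index action itself is not shown in the post. A minimal sketch of what it could look like, reusing the handler above (the body is my assumption, not from the original):

        public ActionResult Index()
        {
            // Hypothetical: list the blobs already uploaded so the
            // Index.cshtml view above can render them; "containername"
            // mirrors the Upload action.
            var bh = new BlobHandler("containername");
            return View(bh.GetBlobs());
        }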

    Read the article

  • Accessing host LVM partition from Windows XP through Virt.manager 0.8.5 / Qemu / KVM

    - by Nico de Smidt
    Hi, the requested use case is a Windows XP SP3 guest running under 64-bit Ubuntu (Linux pcs 2.6.35-22-server #35-Ubuntu SMP Sat Oct 16 22:02:33 UTC 2010 x86_64 GNU/Linux). I want this guest to access an LVM LV on the Ubuntu disk. I've set up the following LVM config:

        --- Logical volume ---
        LV Name                /dev/storage/sdc1
        VG Name                storage
        LV UUID                Zg5IMC-OlqB-prL5-fgg4-3A9A-OgKP-oZ0QkJ
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                1.01 GiB
        Current LE             259
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           251:3

    1) I've set up a storage pool for /dev/storage. 2) I've run mkfs.vfat /dev/storage/sdc1. 3) I've made a virtual IDE disk in the virt-manager setup for the guest (Target device: IDE Disk 2, Source path: /dev/storage/sdc1).

    Now, when running XP, the guest sees a new disk in Disk Manager and wants to create a partition on it, since it believes the drive is empty. After formatting from within Windows I can put data on the new disk volume. Back in Ubuntu, however, I cannot access it any more, since Windows created a partition table inside the LVM logical volume. Running fdisk -l shows the following:

        root@pcs:/media# fdisk -l /dev/storage/sdc1
        Disk /dev/storage/sdc1: 1086 MB, 1086324736 bytes
        32 heads, 63 sectors/track, 1052 cylinders
        Units = cylinders of 2016 * 512 = 1032192 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x8d72e4f4

                     Device Boot   Start     End    Blocks  Id  System
        /dev/storage/sdc1p1            1    1050  1058368+   c  W95 FAT32 (LBA)

    which seems fine to me, but when trying to mount /dev/storage/sdc1p1 I get the following error:

        mount /dev/storage/sdc1p1 /media/xp
        mount: special device /dev/storage/sdc1p1 does not exist

    which makes sense, since sdc1p1 does not exist in lvdisplay. Main question: I want to mount the vfat partition in both Ubuntu and XP. What am I missing here? Regards, and thanks for your consideration. Nico
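    A likely way forward (my suggestion, not part of the original question): kpartx can expose the partition table that Windows wrote inside the LV, so the inner vfat partition becomes mountable on the host:

        # Map the partitions inside the logical volume to device-mapper nodes.
        sudo kpartx -av /dev/storage/sdc1
        # This should create something like /dev/mapper/storage-sdc1p1
        # (the exact name can vary), which then mounts as a normal vfat fs:
        sudo mount -t vfat /dev/mapper/storage-sdc1p1 /media/xp
        # When done (and before booting the guest again), unmount and
        # remove the mappings:
        sudo umount /media/xp
        sudo kpartx -dv /dev/storage/sdc1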

    Read the article

  • Where did my free space go?

    - by Ari B. Friedman
    I have a storage drive (2TB) and an OS drive (90GB SSD). I've run out of space on the OS drive:

        /$ df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sdb1        72G   72G     0 100% /
        udev            5.9G   12K  5.9G   1% /dev
        tmpfs           2.4G  1.2M  2.4G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            5.9G  428K  5.9G   1% /run/shm
        /dev/sda1       1.9T  639G  1.2T  37% /media/StorageDrive

    So be it. But when I attempt to figure out where the space has gone, I cannot find anything remotely approaching the capacity of the drive:

        /$ sudo du -h -d 1
        du: cannot access `./media/StorageDrive/home/ari/.gvfs': Permission denied
        675G    ./media
        2.3G    ./var
        0       ./proc
        7.0M    ./tmp
        27M     ./boot
        4.0K    ./lib64
        12K     ./dev
        44M     ./home
        16K     ./lost+found
        8.0M    ./sbin
        223M    ./lib
        4.0K    ./selinux
        1.4M    ./run
        140K    ./root
        8.8M    ./bin
        4.0K    ./mnt
        38M     ./etc
        8.0K    ./srv
        4.8G    ./usr
        65M     ./opt
        0       ./sys
        682G    .

    Note that the difference between the total (682G) and the mounted drives in /media (675G) is only about 7G. How are 72G being used? Where is this dark matter hiding?
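    Two standard follow-up checks (my suggestion, not part of the original question) that usually explain a gap like this:

        # Stay on the root filesystem only (-x), so the 2TB drive under
        # /media is not counted at all:
        sudo du -xh --max-depth=1 /
        # Look for deleted files still held open by running processes;
        # they consume space that df sees but du cannot:
        sudo lsof +L1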

    Read the article

  • mount old ATA disk to USB adapter

    - by 213441265152351
    I am trying to recover data from an old Linux install that lived on an ATA hard drive. I found a ScanLogic USB-IDE, an ATA-to-USB 1.0 adapter similar to the one in the picture, and after switching it on I plugged it into a laptop running Ubuntu 12.04. I am used to drives being mounted automatically, but this one doesn't show up in /media. After doing a dmesg, all I got is this:

        [215298.671924] usb 2-1.1: new full-speed USB device number 5 using ehci_hcd
        [215298.767330] scsi19 : usb-storage 2-1.1:1.0
        [215299.841701] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd
        [215300.017258] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd
        [215300.197050] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd
        [215300.372730] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd

    I tried plugging the adapter into the three different USB ports on my laptop (one of them USB 3.0), but had no luck with any of them. Any ideas?
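    Some standard checks that would narrow this down (my suggestion, not from the original question):

        # Does the adapter enumerate at all?
        lsusb
        # Did the kernel attach a block device to the usb-storage SCSI host?
        lsblk
        sudo fdisk -l
        # Watch the kernel log live while re-plugging the drive:
        tail -f /var/log/kern.log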

    Read the article

  • What are the Crappy Code Games - Tips on how to win?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win

    Tips on how to win. Each test has some different aspect that defines how you win, and in this post we give you some tips on how to try to win. As a starter, why not watch some of the sessions from previous SQLBits: Storage sessions, Sessions on IO. The background | Who can enter? | What are the challenges? | What are the prizes? | Why should...(read more)

    Read the article

  • Photo sharing social platform [closed]

    - by user1696497
    I am working on a photo-sharing social platform like Flickr or Photobucket. To start off with, I have half a million photos as of now. I want to convert all of these into a single format and compression ratio and use that as the original image. I will be storing the original image, a re-sized image according to layout, and a thumbnail. I started off with Ruby but didn't find supporting libraries. I am considering Python, as it has a good image-processing library and Instagram is using it. I want some advice about how the image should be processed while uploading, an efficient way of storage (database or file system), image compression, and precautions to be taken. I will also have profile pictures: do I need to store them separately or along with the other images? If I store the images on a file system, which file system should I use, and should I store the URL, or should I use an intermediate key-value store like Redis?
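    To make the upload-time processing concrete, a minimal sketch using Python's PIL/Pillow (an assumption; the question only says Python has "a good image processing library"). The sizes, quality settings and paths are placeholders:

        from PIL import Image

        def process_upload(path, out_prefix):
            """Normalize one uploaded photo into the three stored renditions."""
            img = Image.open(path).convert("RGB")
            # "Original": one format and compression ratio for everything.
            img.save(out_prefix + "_orig.jpg", "JPEG", quality=90)
            # Layout-sized copy (1024px bound is an arbitrary choice).
            resized = img.copy()
            resized.thumbnail((1024, 1024))   # preserves aspect ratio
            resized.save(out_prefix + "_display.jpg", "JPEG", quality=85)
            # Thumbnail.
            thumb = img.copy()
            thumb.thumbnail((200, 200))
            thumb.save(out_prefix + "_thumb.jpg", "JPEG", quality=80)

        process_upload("upload.png", "photos/12345")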

    Read the article

  • Drives will not show up on PCI RAID card in Ubuntu 12.04 LTS

    - by Mechh69
    Computer specs: Intel E8200 dual core, MSI G45M MB, Ultra U12-40739 PCI expansion card (2 internal SATA ports, 1.5 Gbps, RAID 0, 1, JBOD), 6 GB DDR2. Q1. I installed Ubuntu 12.04 LTS and Amahi (for using Greyhole) last night. The two disks on the RAID card do not show up under Ubuntu 12.04 LTS, but they do show up under Greyhole, so I know the drives and the RAID card are working and present. I need to access them in Ubuntu to format them and place folders on them, but I cannot see them or figure out how to access them. Q2. Only four of the six drives connected to the MB are showing in Ubuntu, but they show as active in Greyhole. I also need to access these drives in Ubuntu, as this is my storage server. I am new to Linux, so any help you can provide with simple directions will be greatly appreciated. Thank you, Mechh69
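    A first diagnostic pass worth trying (my suggestion, not from the original question):

        # List every block device the kernel can see, formatted or not:
        sudo lsblk
        sudo fdisk -l
        # Cards like this often expose drives as a fake-RAID set that
        # Ubuntu hides until it is activated; check for one:
        sudo dmraid -r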

    Read the article

  • New Dell PE R710 - Storage Question

    - by rihatum
    Hi all, I have a Dell PE R710, received from Dell in the following state:

        Windows Disk 0: 1800 GB (volumes C & D)
        Windows Disk 1: 526 GB (volume E)

    PERC 6/i integrated RAID controller, 6 x 500 GB nearline SAS 7200 RPM HDDs, RAID 5 configuration with two virtual disks. I have installed Dell OpenManage and it shows the following:

        Virtual Disk 0 - State: Background Initialization (7%)
        Virtual Disk 1 - State: Background Initialization (25%)

    Now, when I click on Virtual Disk 0 it shows me all 6 disks, and the same happens when I click on Virtual Disk 1: it displays all 6 disks. But when I click on Storage > Perc6i > Connector 0 I get 4 physical disks with the following numbers:

        Physical Disk 0:0:0
        Physical Disk 0:0:1
        Physical Disk 0:0:2
        Physical Disk 0:0:3

    When I click on Storage > Perc6i > Connector 1 I get 2 physical disks, listed in the following way:

        Physical Disk 1:0:4
        Physical Disk 1:0:5

    I am a little confused by this description: does 1:0:4 translate to controller 1, disk 4? Does this integrated RAID card have two connectors coming out of it? Also, when I first switched on the machine the boot partition was showing 1 GB available out of 40 GB; now it's showing 38 GB available out of 40 GB. Is this because the virtual disks are still initializing? Any recommendations or suggestions? Also, this server has 6 x 500 GB nearline SAS hard drives; what would be a good RAID config? We are planning to use it for Hyper-V with quite a few (7 or 8) virtual servers, so your suggestions would be helpful. Also, while the virtual disks are in an initialization state, can I destroy and re-create the RAID configuration? Would I have to do that from the BIOS with CTRL-M? Thanks and regards

    Read the article

  • Request-local storage in ASP.NET (accessible to the code from IHttpModule implementation)

    - by IgorK
    I need to have an object hang around between two events I'm interested in: PreRequestHandlerExecute (where I create an instance of my object and want to save it) and PostRequestHandlerExecute (where I want to get at the object). After the second event the object is no longer needed for my purposes and should be discarded, either by the storage or by my explicit action. So the ideal context for my object is per request (with guaranteed freedom from sharing issues when different threads, or processes/servers :), are serving requests).

    Take into account that the actual implementation is made from an HttpModule and is supposed to be a pluggable solution for already-written web apps, so the option of keeping state in static/instance variables in Global.asax doesn't look good: I would have to modify Global.asax in every web application. Cache seems too broad for this use. I tried to see whether httpContext.Application (of type HttpApplicationState) is good for me or not, but cannot tell whether it is scoped exactly per HttpApplication instance. AFAIK you can have several HttpApplication instances serving requests on different threads simultaneously; storage shared between those threads would not work correctly, whereas per-instance storage would, because one HttpApplication instance serves exactly one request at a time. Something could also be done by storing state on the HttpModule instances, if I knew for sure that they are bound 1-to-1 with each running HttpApplication instance (but again, I need proof of that 1-to-1 relationship).

    Any valuable and reputable links on these topics are much appreciated... It would be great to find something particularly well suited to the per-request situation, because otherwise I may end up with something ugly: probably either some 'broader'-scoped storage plus hacks to use different keys for different requests, or a thread-local value, thereby committing to the theory that IIS/ASP.NET will never serve the first event from one thread and the second event from another, and so on.
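    For reference, ASP.NET's built-in per-request store, HttpContext.Items, matches this requirement exactly; a minimal sketch of the module described above (class and key names are mine):

        using System.Web;

        public class RequestScopedModule : IHttpModule
        {
            private const string Key = "RequestScopedModule.Payload";

            public void Init(HttpApplication app)
            {
                app.PreRequestHandlerExecute += (s, e) =>
                {
                    // HttpContext.Items is scoped to exactly one request,
                    // so concurrent requests never see each other's entry.
                    HttpContext.Current.Items[Key] = new object(); // your object here
                };
                app.PostRequestHandlerExecute += (s, e) =>
                {
                    var payload = HttpContext.Current.Items[Key];
                    // ... use payload; it is discarded with the context
                    // when the request ends.
                };
            }

            public void Dispose() { }
        }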

    Read the article

  • Convert Google Analytics cookies to Local/Session Storage

    - by David Murdoch
    Google Analytics sets 4 cookies that will be sent with all requests to that domain (and, of course, its subdomains). From what I can tell, no server actually uses them directly; they're only sent with __utm.gif as a query param. Now, obviously Google Analytics reads, writes and acts on their values, and they will need to be available to the GA tracking script. So what I am wondering is whether it is possible to:

    1) rewrite the __utm* cookies to local storage after ga.js has written them,
    2) delete them after ga.js has run,
    3) rewrite the cookies FROM local storage back to cookie form right before ga.js reads them,
    4) start over.

    Or, monkey-patch ga.js to use local storage before it begins the cookie read/write part. Obviously, if we are going so far out of the way to remove the __utm* cookies, we'll want to use the async variant of Analytics as well. I'm guessing the down-vote was because I didn't ask a question. DOH! My questions are: Can it be done as described above? If so, why hasn't it been done? I have a default HTML/CSS/JS boilerplate template that passes YSlow, PageSpeed, and Chrome's Audit with near-perfect scores. I'm really looking for a way to squeeze those remaining cookie bytes out of Google Analytics in browsers that support local storage.
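    A sketch of the shuffle described above (my own, untested; it assumes the async _gaq queue, which executes functions pushed onto it):

        // Stash any __utm* cookies into localStorage and drop them.
        // NOTE: if GA set the cookies with an explicit domain (e.g.
        // .example.com), the deleting write must repeat that domain
        // for the browser to actually remove them.
        function stashUtmCookies() {
          document.cookie.split(/;\s*/).forEach(function (pair) {
            var eq = pair.indexOf('='), name = pair.slice(0, eq);
            if (name.indexOf('__utm') === 0) {
              localStorage.setItem(name, pair.slice(eq + 1));
              document.cookie = name + '=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
            }
          });
        }

        // Restore them right before tracking so ga.js finds them.
        function restoreUtmCookies() {
          for (var i = 0; i < localStorage.length; i++) {
            var name = localStorage.key(i);
            if (name.indexOf('__utm') === 0) {
              document.cookie = name + '=' + localStorage.getItem(name) + '; path=/';
            }
          }
        }

        restoreUtmCookies();
        _gaq.push(['_trackPageview']);
        // Runs after ga.js has processed the queue entry above:
        _gaq.push(stashUtmCookies);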

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up, so if one of my storage units fails the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine, and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):

    All NFS-based VMs remain up and working perfectly through the failover and after. All VMs running on iSCSI eventually report the following: an error about not being able to write to a particular block, an error about journaling not working, and then the file system goes read-only.

    To get the VMs working again I have to do the following: 1) force shutdown of the "broken" VMs; 2) detach the iSCSI SR; 3) re-attach the iSCSI SR; 4) boot the VM on a different server (5 in my pool). If I don't boot on a different server I get this error: "Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")". The only way I have found to fix that error is to reboot the entire server it was previously running on, which is obviously a huge pain.

    Currently multipathing is NOT enabled (it can be, but the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly, but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
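    For reference, these are the open-iscsi timeout knobs in /etc/iscsid.conf that govern how long a session survives a failover window; the values below are illustrative placeholders, not a recommendation from the post:

        # /etc/iscsid.conf
        # How long the initiator queues I/O while the target is unreachable
        # before failing it up to the filesystem (journal errors and a
        # read-only remount are the typical symptom when this expires):
        node.session.timeo.replacement_timeout = 120
        # NOP-Out pings that detect a dead session:
        node.conn[0].timeo.noop_out_interval = 5
        node.conn[0].timeo.noop_out_timeout = 30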

    Read the article

  • Home Server: cpu virtualisation, what to choose?

    - by Huygens
    I'm looking for virtualisation solutions for storage and OS for a home server: a sort of private cloud where I manage the storage space independently of the VM one. This question focuses on VM (or compute instance) management and what would best suit my needs (I have another question related to the storage management). My use cases are: a backup server (rsync and other services running); a personal cloud server, a kind of self-owned Dropbox system, à la ownCloud, with a few users foreseen; and a media server, streaming videos and displaying photos.

    Here is my environment, and my wishes: Server: HP ProLiant MicroServer with 8 GB RAM (AMD Turion dual core with AMD-V technology). OS types: only Linux (perhaps a *BSD VM in the future). Linux distributions do not matter; I'm familiar with RHEL, Fedora, Suse and Ubuntu, but any other recommendation will be fine. 2-3 VMs foreseen: backup server, ownCloud server and media server (optional). Those are only servers, so no graphical console is needed (I don't need VirtualBox). By VM I mean a virtualised environment like KVM, Xen, etc., or a compute instance like with OpenStack. Storage should be "virtualised/cloudified"; see my other question. A VM should be able to be migrated to another server in the future if its performance can no longer be fulfilled by the current server. It does not matter if installation of such a setup is complicated, as long as the management tools allow for easy maintenance. I don't have Windows at home, so the solution should be Linux-friendly; web-based management would be nice, but native apps are OK too. The system should be easy to extend by adding a new server and migrating some of the VMs to it. So it's really a kind of private cloud on which I could run some Linux OS. I would prefer free (libre, as in free speech) and open-source tools, but it does not have to be free as in free beer. So Xen, KVM, VirtualBox or OpenStack? What would you recommend?

    Read the article

  • How to resolve virtual disk degraded in Windows Server 2012

    - by harrydev
    I am using the new Storage Spaces feature in Windows Server 2012. I have the following disks:

        FriendlyName   CanPool OperationalStatus HealthStatus Usage       Size
        ------------   ------- ----------------- ------------ -----       ----
        PhysicalDisk2  False   OK                Healthy      Auto-Select 2.73 TB
        PhysicalDisk3  False   OK                Healthy      Auto-Select 2.73 TB
        PhysicalDisk4  False   OK                Healthy      Auto-Select 2.73 TB
        PhysicalDisk5  False   OK                Healthy      Auto-Select 2.73 TB

    There is also a separate OS disk. The disks above are part of a single storage pool:

        FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly
        ------------ ----------------- ------------ ------------ ----------
        Pool         OK                Healthy      False        False

    Within this storage pool some virtual disks are defined, see below:

        FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach Size
        ------------ --------------------- ----------------- ------------ -------------- ----
        Docs         Mirror                OK                Healthy      False          500 GB
        Data         Mirror                Degraded          Warning      False          500 GB
        Work         Mirror                Degraded          Warning      False          2 TB

    The virtual disks are all normal two-way mirrors, but two of them are degraded. This is probably because one of the physical disks was offline for a short period of time. However, the virtual disks now cannot be repaired, even though all physical disks are healthy, and there is plenty of available space in the storage pool. This I cannot understand, so I was hoping for some help on how to resolve it. Below is the full output from the Get-VirtualDisk cmdlet for the "Work" disk:

        ObjectId                          : {XXXXXXXX}
        PassThroughClass                  :
        PassThroughIds                    :
        PassThroughNamespace              :
        PassThroughServer                 :
        UniqueId                          : XXXXXXXX
        Access                            : Read/Write
        AllocatedSize                     : 412316860416
        DetachedReason                    : None
        FootprintOnPool                   : 824633720832
        FriendlyName                      : Work
        HealthStatus                      : Warning
        Interleave                        : 262144
        IsDeduplicationEnabled            : False
        IsEnclosureAware                  : False
        IsManualAttach                    : False
        IsSnapshot                        : False
        LogicalSectorSize                 : 512
        Name                              :
        NameFormat                        :
        NumberOfAvailableCopies           : 0
        NumberOfColumns                   : 2
        NumberOfDataCopies                : 2
        OperationalStatus                 : Degraded
        OtherOperationalStatusDescription :
        OtherUsageDescription             : Disk for data being worked on (not backed up)
        ParityLayout                      :
        PhysicalDiskRedundancy            : 1
        PhysicalSectorSize                : 4096
        ProvisioningType                  : Thin
        RequestNoSinglePointOfFailure     : True
        ResiliencySettingName             : Mirror
        Size                              : 2199023255552
        UniqueIdFormat                    : Vendor Specific
        UniqueIdFormatDescription         :
        Usage                             : Other
        PSComputerName                    :
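    Not part of the question, but the usual next step with a degraded space is to trigger the repair explicitly and watch the resulting background job; a sketch using the standard Storage-module cmdlets on Server 2012:

        # Ask Storage Spaces to resynchronize the degraded mirror:
        Repair-VirtualDisk -FriendlyName "Work"
        # Repairs run as background jobs; watch progress here:
        Get-StorageJob
        # Re-check health once the job completes:
        Get-VirtualDisk -FriendlyName "Work" |
            Select-Object FriendlyName, OperationalStatus, HealthStatus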

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old; we have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS; we use XFS with LVM over that to create 100 TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers, and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small compute cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with GlusterFS running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3PAR storage solutions. Any suggestions or experiences are appreciated.

    Read the article

  • How can I hard reset a USB device?

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only fix I have found, once it gets into a bad state, is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged into, so I'm looking for a way to do this through the command line. This post suggests running:

        sudo modprobe -w -r usb_storage; sudo modprobe usb_storage

    However, I get an "unknown option -w" output. This slightly modified command:

        sudo modprobe -r usb_storage

    fails with the message "FATAL: Module usb_storage is in use". If I try to kill -9 the processes marked [usb-storage] beforehand, they refuse to die (I think because they are deeply tied to the kernel). Does anyone know of a way to do this? NOTE: I cross-posted this on superuser.com, as I didn't know which site was more appropriate. I will delete and/or link whichever one is answered first.
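    One command-line approach that simulates a physical re-plug of a single port, without unloading any modules (my suggestion, not from the original post; the bus ID below is an example):

        # Find the device's bus ID (something like "2-1.1") in dmesg or:
        ls /sys/bus/usb/devices/
        # Unbind and rebind it -- roughly equivalent to unplug/replug:
        echo '2-1.1' | sudo tee /sys/bus/usb/drivers/usb/unbind
        echo '2-1.1' | sudo tee /sys/bus/usb/drivers/usb/bind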

    Read the article

  • Inverted LACK Table Serves as a Perfect Gear Rack [DIY]

    - by Jason Fitzpatrick
    We've seen IKEA gear hacked to hold audio and computer equipment before, but this mod adds a simple and effective twist. LACK end tables are, conveniently, the same width as a standard server rack, which makes it super simple for DIYers to mount their gear right into the legs of the table with no modification necessary. In this case, however, Winston Smith included a clever update to the mod. Rather than leave it like a standard table, he flipped the table upside down for increased stability and a stronger connection between the legs of his improvised audio rack and the table-top-turned-floor-plate. He then finished it with a matching LACK shelf piece to serve as a turntable stand. His gear is stored cleanly, off the floor, and in a sturdy container, all for about $25: a definite bargain when it comes to storage racks. Hit up the link below for more information and pictures. LACK Rack & EXPEDIT Desktop [IKEA Hackers]

    Read the article

  • Dropbox Doubles Referral Credit; Score 500MB for Each Friend You Refer

    - by Jason Fitzpatrick
    Dropbox is doubling the amount of free storage you get per referral to 500 MB, up from the previous 250 MB credit; better yet, the bonus is retroactive and applies to referrals you've already made. From the Dropbox blog:

    How much space is that, exactly? For every friend you invite that installs Dropbox, you'll both get 500 MB of free space. If you've got a free account, you can invite up to 32 people for a whopping total of 16 GB of extra space. Pro accounts now earn 1 GB per referral, for a total of 32 GB of extra space. Have you already invited a bunch of people? Don't worry. Within a few days, you'll get full credit for every referral that's already been completed.

    Boom! Hit up the link below for the full announcement. Dropbox Referrals Now Twice As Nice [Dropbox]

    Read the article

  • Silverlight local storage

    - by IMHO
    As you may know, Silverlight has support for local storage. We are looking at creating an SL application that will work in offline mode. This application may require quite a bit of data to be cached on the client side. The obvious solution, local storage with some sort of XML-based structure, won't work, as our PoC showed, due to performance issues. We are looking at several third-party solutions that implement light database engines on top of SL local storage. If you have solved this problem in the past or have any ideas, I would appreciate some pointers.
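    For context, a minimal sketch of the baseline being measured here: Silverlight's "local storage" is isolated storage, written roughly like this (the file name and payload are placeholders of mine):

        using System.IO;
        using System.IO.IsolatedStorage;

        // Write a cached payload to the application's isolated store.
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.CreateFile("cache.xml"))
        using (var writer = new StreamWriter(stream))
        {
            // Serialized XML: parsing/rewriting this at scale is the
            // slow part the question refers to.
            writer.Write("<data>...</data>");
        }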

    Read the article

  • Is the usage of Isolated Storage in Silverlight 3 a security concern

    - by Prashant
    I am using Silverlight 3 on my website. I have a login page for role-based authentication that routes users with different privileges to different parts of the site. I want to use something analogous to the session variables available in standard ASP.NET applications, and I intend to use isolated storage to achieve this. But I am skeptical about the security of this option, as isolated storage exists on the client side and can be manipulated there. I am new to the isolated storage concept and don't know about the security options it provides in terms of encryption, server-side validation, etc. If any of you have used it, or are aware of the security implications in this case, could you please shed some light on this? Thanks

    Read the article

  • Google chrome extension: local storage

    - by Rohan
    Hi there. I'm developing an extension for Google Chrome and have run into some trouble. I created an options.html page and added it to the manifest.json file; the page shows properly. I saved the options, then went back to the page on which the extension is supposed to run. Unfortunately, the local storage lookup for the options returned null instead of the option. If I set the local storage value directly from the extension's JS script, it works fine, but not when it was set from the options page. Any idea how I can access the options.html local storage values from my JavaScript file in the extension? Thanks!
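    For what it's worth, the usual cause is that content scripts run in the web page's origin, not the extension's, so they never see the options page's localStorage. A sketch of the standard workaround via message passing (the chrome.runtime.* names are from current Chrome documentation; older extensions used chrome.extension.sendRequest/onRequest):

        // content_script.js -- cannot see the extension's localStorage,
        // so it asks the background page for the value:
        chrome.runtime.sendMessage({ get: 'myOption' }, function (response) {
          console.log('option value:', response.value);
        });

        // background.js -- same origin as options.html, so it CAN read
        // the values the options page stored:
        chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
          if (msg.get) sendResponse({ value: localStorage.getItem(msg.get) });
        });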

    Read the article

  • Looking for a safe, portable password-storage method

    - by Maciek
    Hello, I'm working on a C++ project that is supposed to run on both Win32 and Linux; the software is to be deployed to small computers, usually working in remote locations. Recently, our client requested that we introduce access control via password protection. We are to meet the following criteria: support remote login, support remote password change, support remote password retrieval, support data retrieval on accidental/purposeful deletion, and support secure storage. I'm capable of meeting the "remote" requirements using an existing library; what I do need to consider is a method of storing this data, preferably one that will work on both platforms and will not let the user see or read it. Encryption is not the issue here; it's the storage method itself. Can anyone recommend a safe storage method that could help me meet those criteria?

    Read the article

  • Retrieve Storage and Programs Memory on .NET Compact Framework 2 and WM5

    - by wintermute
    Hi! I've been looking for quite a while already and still couldn't find a solution for this. All I need is to retrieve the memory levels and percentage of use. OpenNETCF has a MemoryManagement class, which seems to encapsulate a data structure returned through a P/Invoke or something like that, and it gives me TotalPhysicalMemory, TotalVirtualMemory, AvailablePhysicalMemory and such, but those do not directly relate to Storage and Programs, nor could I find a way to "convert" these attributes into the ones I need. Has anyone out there already done this? It must be easy; I just need the very same values available in Settings > System > Memory. Thanks in advance!

    Edit: I'm already able to retrieve available and total Storage memory through the GetDiskFreeSpaceEx P/Invoke. Since Storage and Programs memory seem to rely on the same hardware, maybe it's just a case of finding out what path to pass as the method's first parameter.
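    Not from the original post, but the Programs figures in Settings > System > Memory generally come from GlobalMemoryStatus in coredll.dll; a sketch of the P/Invoke (a standard signature, but verify the structure layout against your SDK headers):

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        struct MEMORYSTATUS
        {
            public uint dwLength;
            public uint dwMemoryLoad;    // percentage of memory in use
            public uint dwTotalPhys;     // "Programs" total
            public uint dwAvailPhys;     // "Programs" available
            public uint dwTotalPageFile;
            public uint dwAvailPageFile;
            public uint dwTotalVirtual;
            public uint dwAvailVirtual;
        }

        static class NativeMethods
        {
            // The function fills in dwLength itself, so no pre-init needed.
            [DllImport("coredll.dll")]
            public static extern void GlobalMemoryStatus(out MEMORYSTATUS ms);
        }

        // Usage:
        //   NativeMethods.GlobalMemoryStatus(out MEMORYSTATUS ms);
        //   uint programsUsedPercent = ms.dwMemoryLoad;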

    Read the article

  • Isolated storage

    - by Costa
    Hi, I am not sure that I understand isolated storage. I read the article http://msdn.microsoft.com/en-us/library/3ak841sy%28VS.80%29.aspx. 1) Why wouldn't I just use the app data folder? 2) In the link above: "With isolated storage, data is always isolated by user and by assembly. Credentials such as the origin or the strong name of the assembly determine assembly identity. Data can also be isolated by application domain, using similar credentials." I can't think of a scenario that makes this feature important. In general, I don't understand the philosophy behind, and the need for, "isolated storage" that inspired MS to create such a thing. Thanks

    Read the article

  • Storage device not found on ESX4 with AIC-9410

    - by Mads
    I am trying to install ESX 4.0 update 1 on a Supermicro X7DBR-3 system with an embedded AIC-9410 HBA (this HBA is listed on the HCG with vendor ID 9005 and device ID 041f). All SATA controllers are disabled in the BIOS, and the logical drive shows up in the Adaptec device summary during POST; however, nothing is listed on the Storage Device screen. The HBA itself is listed if I run esxcfg-info, but not if I run esxcfg-scsidevs -a (under ESXi for that last command). Any ideas where I can look next, or what might be wrong?

    Read the article
