Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 27/361 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • Introduction to Oracle’s New StorageTek SL150 Modular Tape Library

    - by Cinzia Mascanzoni
    Join the product announcement webcast on Thursday, July 12, 2012 at 3pm CET (2pm GMT). This webcast will help you understand Oracle's new StorageTek SL150 Modular Tape Library, the first scalable tape library designed for small and midsized companies that are experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast, Cindy McCurley from Tape Product Management will introduce you to the latest addition to the Oracle tape storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product's features, positioning, unique selling points and a competitive overview of StorageTek. You can submit your questions via WebEx chat, and there will be a live Q&A session at the end of the webcast. Register now!

    Read the article

  • Data, Log and Temp file placement

    - by jchang
    First, especially for all the people with SAN storage: drive letters are of no consequence. What matters is the actual physical disk layout. Forget capacity; pay attention to the number of spindles supporting each RAID group. If the RAID group is shared with another application, make sure the SLA guarantees read and write latency. One very large company conducted a stress test in the QA environment. The SAN admin carved the LUNs from the same pool of disks as production, but thought he had a really...(read more)

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
    Basic setup of Azure storage for local development and production. This is a completion of the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ that adds a practical example I believe is commonly needed, i.e. uploading and presenting an image from a user.

    First we set up local storage, then we configure it to work in a web role. Steps: 1. Configure the connection string locally. 2. Configure the model, controllers and Razor views.

    1. Set up the connection string
    1.1 Right-click your web role and choose "Properties".
    1.2 Click Settings.
    1.3 Add setting.
    1.4 Name your setting. This will be the name of the connection string.
    1.5 Click the ellipsis to the right (the ellipsis appears when you select the field).
    1.6 In the window that appears, select "Windows Azure storage emulator" and click OK.

    Now we have a connection string to use. To be able to use it we need to make sure we have the Windows Azure tools for storage:
    2.1 Click Tools -> Library Package Manager -> Manage NuGet Packages for Solution and add the Windows Azure Storage package.
    2.2 This is what it looks like after it has been added.

    Now on to what the code should look like.

    3.1 First we need a view which collects images to upload. Here, Index.cshtml:

    ```
    @model List<string>

    @{
        ViewBag.Title = "Index";
    }

    <h2>Index</h2>
    <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data">
        <label for="file1">Filename:</label>
        <input type="file" name="file" id="file1" />
        <br />
        <label for="file2">Filename:</label>
        <input type="file" name="file" id="file2" />
        <br />
        <label for="file3">Filename:</label>
        <input type="file" name="file" id="file3" />
        <br />
        <label for="file4">Filename:</label>
        <input type="file" name="file" id="file4" />
        <br />
        <input type="submit" value="Submit" />
    </form>

    @foreach (var item in Model) {
        <img src="@item" alt="Alternate text" />
    }
    ```

    3.2 We need a controller to receive the post. Notice the "containername" string I send to the BlobHandler; I use this as a folder for each user's pictures. If this is not a requirement, you could just call it "container", or anything in lowercase characters, directly when creating the container.

    ```
    public ActionResult Upload(IEnumerable<HttpPostedFileBase> file)
    {
        BlobHandler bh = new BlobHandler("containername");
        bh.Upload(file);
        var blobUris = bh.GetBlobs();

        return RedirectToAction("Index", blobUris);
    }
    ```

    3.3 The handler model. I'll let the comments speak for themselves.

    ```
    public class BlobHandler
    {
        // Retrieve the storage account from the connection string.
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));

        private string imageDirecoryUrl;

        /// <summary>
        /// Receives the user's id for where the pictures are and creates
        /// a blob container with that name if it does not exist.
        /// </summary>
        /// <param name="imageDirecoryUrl"></param>
        public BlobHandler(string imageDirecoryUrl)
        {
            this.imageDirecoryUrl = imageDirecoryUrl;

            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            // Retrieve a reference to a container.
            CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

            // Create the container if it doesn't already exist.
            container.CreateIfNotExists();

            // Make it available to everyone.
            container.SetPermissions(
                new BlobContainerPermissions
                {
                    PublicAccess = BlobContainerPublicAccessType.Blob
                });
        }

        public void Upload(IEnumerable<HttpPostedFileBase> file)
        {
            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            // Retrieve a reference to a container.
            CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

            if (file != null)
            {
                foreach (var f in file)
                {
                    if (f != null)
                    {
                        CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName);
                        blockBlob.UploadFromStream(f.InputStream);
                    }
                }
            }
        }

        public List<string> GetBlobs()
        {
            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

            // Retrieve a reference to the previously created container.
            CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

            List<string> blobs = new List<string>();

            // Loop over the blobs in the container and collect the URI of each.
            foreach (var blobItem in container.ListBlobs())
                blobs.Add(blobItem.Uri.ToString());

            return blobs;
        }
    }
    ```

    3.4 When the files have been uploaded, we present them to our user on the index page. Pretty straightforward: in this example we only present the images by sending the URIs to the view. A better way would be to wrap them in a view model containing URI, metadata, alternate text and other relevant information, but for this example this is all we need.

    4. Now press F5 in your solution to try it out. You can see the storage emulator UI here.
    4.1 If you get any exceptions or errors, I suggest first checking that the service is running correctly. I had problems with this that seemed related to the installation, and a reboot fixed them.

    5. Set up for cloud storage. To do this we need to add configuration for the cloud, just as we did for local storage in step one.
    5.1 We need our keys to do this. Go to the Windows Azure management portal, select the storage icon to the right and click "Manage keys". (Image from a different blog post, though.)
    5.2 Do as in step 1, but replace step 1.6 with:
    1.6 Choose "Manually entered credentials". Enter your account name.
    1.7 Paste your account key from step 5.1 and click OK.
    5.3 Save, publish and run!

    Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any questions. Our consultancy agency also provides services in the Nordic regions if you would like any further support.

    Read the article

  • Accessing host LVM partition from Windows XP through Virt.manager 0.8.5 / Qemu / KVM

    - by Nico de Smidt
    Hi, the requested use case is a Windows XP SP3 guest running in 64-bit Ubuntu (Linux pcs 2.6.35-22-server #35-Ubuntu SMP Sat Oct 16 22:02:33 UTC 2010 x86_64 GNU/Linux). I want this guest to access an LVM LV on the Ubuntu disk. I've set up the following LVM config:

    ```
    --- Logical volume ---
    LV Name              /dev/storage/sdc1
    VG Name              storage
    LV UUID              Zg5IMC-OlqB-prL5-fgg4-3A9A-OgKP-oZ0QkJ
    LV Write Access      read/write
    LV Status            available
    # open               0
    LV Size              1.01 GiB
    Current LE           259
    Segments             1
    Allocation           inherit
    Read ahead sectors   auto - currently set to 256
    Block device         251:3
    ```

    1) I've set up a storage pool for /dev/storage. 2) I've run mkfs.vfat /dev/storage/sdc1. 3) I've made a virtual IDE disk in the virt-manager setup for the guest (Target device: IDE Disk 2, Source path: /dev/storage/sdc1).

    Now when running XP, the guest sees a new disk in Disk Manager and wants to create a partition on it, since it believes the drive is empty. After formatting from within Windows I can put data on the new disk volume. Back in Ubuntu, however, I cannot access this any more, since Windows created a partition within an LVM logical volume. Running fdisk -l shows the following:

    ```
    root@pcs:/media# fdisk -l /dev/storage/sdc1

    Disk /dev/storage/sdc1: 1086 MB, 1086324736 bytes
    32 heads, 63 sectors/track, 1052 cylinders
    Units = cylinders of 2016 * 512 = 1032192 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x8d72e4f4

                   Device Boot    Start       End    Blocks   Id  System
    /dev/storage/sdc1p1               1      1050  1058368+    c  W95 FAT32 (LBA)
    ```

    That seems fine to me, but when trying to mount /dev/storage/sdc1p1 I get the following error:

    ```
    mount /dev/storage/sdc1p1 /media/xp
    mount: special device /dev/storage/sdc1p1 does not exist
    ```

    which makes sense, since sdc1p1 does not exist in lvdisplay. Main question: I want to mount the vfat partition in both Ubuntu and XP. What am I missing here? Regards, and thanks for your consideration. Nico
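
    Not from the original post, but for reference: the usual way to expose a partition table that lives inside a logical volume to the host is kpartx (from the multipath-tools package), which creates device-mapper entries for each partition inside the LV. A minimal sketch using the poster's device names:

    ```
    # Map the partitions found inside the LV to /dev/mapper entries
    # (check the kpartx output for the exact mapper name it creates)
    sudo kpartx -av /dev/storage/sdc1

    # The FAT32 partition should now be mountable on the host
    sudo mount /dev/mapper/storage-sdc1p1 /media/xp

    # Unmount and remove the mappings before booting the guest again, so the
    # host and the guest never have the filesystem mounted at the same time
    sudo umount /media/xp
    sudo kpartx -d /dev/storage/sdc1
    ```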

    Read the article

  • Where did my free space go?

    - by Ari B. Friedman
    I have a storage drive (2TB) and an OS drive (90GB SSD). I've run out of space on the OS drive:

    ```
    /$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdb1        72G   72G     0 100% /
    udev            5.9G   12K  5.9G   1% /dev
    tmpfs           2.4G  1.2M  2.4G   1% /run
    none            5.0M     0  5.0M   0% /run/lock
    none            5.9G  428K  5.9G   1% /run/shm
    /dev/sda1       1.9T  639G  1.2T  37% /media/StorageDrive
    ```

    So be it. But when I attempt to figure out where the space has gone, I cannot find anything remotely approaching the capacity of the drive:

    ```
    /$ sudo du -h -d 1
    du: cannot access `./media/StorageDrive/home/ari/.gvfs': Permission denied
    675G    ./media
    2.3G    ./var
    0       ./proc
    7.0M    ./tmp
    27M     ./boot
    4.0K    ./lib64
    12K     ./dev
    44M     ./home
    16K     ./lost+found
    8.0M    ./sbin
    223M    ./lib
    4.0K    ./selinux
    1.4M    ./run
    140K    ./root
    8.8M    ./bin
    4.0K    ./mnt
    38M     ./etc
    8.0K    ./srv
    4.8G    ./usr
    65M     ./opt
    0       ./sys
    682G    .
    ```

    Note that the difference between the total (682G) and the mounted drives in /media (675G) is only about 7G. How are 72G being used? Where is this dark matter hiding?
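
    Not part of the original question, but two usual suspects: du without -x descends into other mounted filesystems (so the 675G under ./media is really on /dev/sda1, not the SSD), and space can hide in deleted-but-still-open files or in files shadowed under a mount point. A sketch of the checks, assuming GNU coreutils and lsof:

    ```
    # Stay on the root filesystem only, so /media and other mounts are skipped
    sudo du -hxd1 /

    # Deleted-but-open files count in df but never appear in du;
    # +L1 lists open files whose link count is zero
    sudo lsof +L1
    ```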

    Read the article

  • Storing Attendance Data in database

    - by Ali Abbas
    So I have to store the daily attendance of employees of my organisation from my application. The part where I need some help is the efficient way to store attendance data. After some research and brainstorming I came up with some approaches. Could you point out which one is the best, and any unobvious ill effects of the mentioned approaches? The approaches are as follows:
    1. Create a single table for the whole organisation and store empid, date, presentstatus as a row for every employee, every day.
    2. Create a single table for the whole organisation and store a single row for each day with a comma-delimited string of the empids which are absent. I will generate the string in my application.
    3. Create different tables for each department and follow the first method.
    Please share your views and do mention any other good methods.
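
    Not from the original question: a minimal SQL sketch of the first approach (table and column names are illustrative). One row per employee per day keeps the data normalized and indexable; the comma-delimited string of approach 2 defeats indexing, foreign keys and most queries, since the database can no longer see individual employee ids.

    ```
    -- One row per employee per day
    CREATE TABLE attendance (
        emp_id   INT     NOT NULL,
        att_date DATE    NOT NULL,
        present  BOOLEAN NOT NULL,      -- use CHAR(1)/TINYINT on engines without BOOLEAN
        PRIMARY KEY (emp_id, att_date)  -- also the index typical queries need
    );

    -- e.g. everyone absent on a given day
    SELECT emp_id FROM attendance WHERE att_date = '2012-07-12' AND present = FALSE;
    ```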

    Read the article

  • mount old ATA disk to USB adapter

    - by 213441265152351
    I am trying to recover data from an old Linux install on an ATA hard drive. I found a ScanLogic USB-IDE, an ATA-to-USB 1.0 adapter similar to the one in the picture, and after switching it on I plugged it into a laptop with Ubuntu 12.04. I am used to drives being mounted automatically, but this one doesn't show up in /media. After doing a dmesg, all I got is this:

    ```
    [215298.671924] usb 2-1.1: new full-speed USB device number 5 using ehci_hcd
    [215298.767330] scsi19 : usb-storage 2-1.1:1.0
    [215299.841701] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd
    [215300.017258] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd
    [215300.197050] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd
    [215300.372730] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd
    ```

    I tried plugging the adapter into the three different USB ports on my laptop (one of them USB 3.0), but had no luck with any of them. Any ideas?

    Read the article

  • What are the Crappy Code Games - Tips on how to win?

    - by simonsabin
    This is part of a series on the Crappy Code Games: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should I attend? | Tips on how to win. Tips on how to win: each test has some different aspect that will define how you win, and in this post we will give you some tips on how to try to win. As a starter, why not watch some of the sessions from previous SQLBits: Storage sessions and Sessions on IO. The series: The background | Who can enter? | What are the challenges? | What are the prizes? | Why should...(read more)

    Read the article

  • Photo sharing social platform [closed]

    - by user1696497
    I am working on a photo sharing social platform like Flickr or Photobucket. To start off with, I have half a million photos as of now. I want to convert all of these into a single format and compression ratio and use the result as the original image. I will be storing the original image, a re-sized image matching the layout, and a thumbnail. I started off with Ruby, but didn't find supporting libraries. I am considering Python, as it has a good image processing library and Instagram is using it. I would like some advice on how the image should be processed while uploading, the efficient way of storage (database or file system), image compression, and precautions to be taken. I will also have profile pictures; do I need to store them separately or along with the other images? If I store the images on a file system, which file system should I use, and should I store the URL or use an intermediate key-value store like Redis?
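
    Not from the original post: a minimal sketch of the upload-time processing in Python with Pillow (paths, sizes and quality settings are illustrative assumptions).

    ```
    from PIL import Image

    # The three stored variants: original (re-encoded), layout size, thumbnail
    SIZES = {"original": None, "display": (1024, 768), "thumb": (160, 160)}

    def process_upload(src_path, dest_prefix, quality=85):
        img = Image.open(src_path).convert("RGB")   # normalize to one format
        for name, size in SIZES.items():
            out = img.copy()
            if size is not None:
                out.thumbnail(size)                 # fits within size, keeps aspect ratio
            out.save("%s_%s.jpg" % (dest_prefix, name), "JPEG", quality=quality)

    # e.g. hash-prefixed paths spread files across directories on a filesystem store
    process_upload("upload.png", "/data/images/ab/abc123")
    ```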

    Read the article

  • Drives will not show up on PCI RAID card in Ubuntu 12.04 LTS

    - by Mechh69
    Computer specs: Intel E8200 dual core, MSI G45M MB, Ultra U12-40739 PCI expansion card (2 internal SATA ports, 1.5Gbps, RAID 0, 1, JBOD), 6 GB DDR2.
    Q1. I installed Ubuntu 12.04 LTS and Amahi (for using Greyhole) last night. The two disks on the RAID card do not show up under Ubuntu 12.04 LTS, but they do show up under Greyhole, so I know the drives and the RAID card are working and present. I need to access them in Ubuntu to format them and place folders on them, but I cannot see them or figure out how to access them.
    Q2. Only 4 of the six drives connected to the MB are showing in Ubuntu, but they show as active in Greyhole. I also need to access these drives in Ubuntu, as this is my storage server. I am new to Linux, so any help you can provide with simple directions will be greatly appreciated. Thank you, Mechh69
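
    Not from the original question: a few commands that usually show whether the kernel sees the disks at all, regardless of what the desktop shows (a sketch; the device name is an example).

    ```
    # Every block device the kernel knows about, mounted or not
    lsblk
    sudo fdisk -l

    # If a missing disk appears here (say /dev/sdg), it likely just lacks a
    # filesystem or mount point, e.g.:
    sudo mkfs.ext4 /dev/sdg1
    sudo mkdir -p /mnt/storage1 && sudo mount /dev/sdg1 /mnt/storage1
    ```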

    Read the article

  • New Dell PE R710 - Storage Question

    - by rihatum
    Hi all, I have a Dell PE R710, received from Dell in the following state: Windows Disk 0 is 1800GB (volumes C & D), Windows Disk 1 is 526GB (volume E), with a Perc6i integrated RAID controller and 6 x 500GB nearline SAS 7200RPM HDDs in a RAID 5 configuration with two virtual disks. I have installed Dell OpenManage and it shows the following: Virtual Disk 0 - State: Background Initialization (7%); Virtual Disk 1 - State: Background Initialization (25%). Now when I click on Virtual Disk 0 it shows me all 6 disks, and the same happens when I click on Virtual Disk 1. But when I click on Storage > Perc6i > Connector 0, I get 4 physical disks (Physical Disk 0:0:0, 0:0:1, 0:0:2 and 0:0:3), and when I click on Connector 1 I get 2 physical disks (Physical Disk 1:0:4 and 1:0:5). I am a little confused by this description: does 1:0:4 translate to Controller 1, Disk 4? Does this integrated RAID card have two controllers coming out of it? Also, when I first switched on the machine the boot partition was showing 1GB available out of 40GB; now it's showing 38GB available out of 40GB. Is this because the virtual disks are still initializing? Any recommendations or suggestions? Also, this server has 6 x 500GB nearline SAS hard drives; what would be a good RAID config? We are planning to use it for Hyper-V with quite a few (7 or 8) virtual servers, so your suggestions would be helpful. Also, while the virtual disks are in an initialization state, can I destroy and re-create the RAID configuration? Would I have to do that at the BIOS (CTRL-M)? Thanks and regards

    Read the article

  • Request-local storage in ASP.NET (accessible to the code from IHttpModule implementation)

    - by IgorK
    I need to have some object hanging around between two events I'm interested in: PreRequestHandlerExecute (where I create an instance of my object and want to save it) and PostRequestHandlerExecute (where I want to get at the object). After the second event the object is no longer needed for my purposes and should be discarded, either by the storage or by my explicit action. So the ideal context for my object is per-request storage (with guaranteed freedom from sharing issues when different threads - or processes/servers :) - are serving requests). Take into account that the actual implementation is being made from an HttpModule and is supposed to be a pluggable solution for already-written web apps (so the option of providing some state using static/instance variables in Global.asax doesn't look good - I would have to modify Global.asax in every web application). Cache seems too broad for this use. I tried to see whether httpContext.Application (of type HttpApplicationState) is good for me or not, but cannot work out whether it is exactly per HttpApplication instance or not (AFAIK you can have several HttpApplication instances used on different threads, serving several requests simultaneously - in which case storage shared between threads will not work correctly; otherwise I would use it, because one HttpApplication instance serves exactly one request at a time). Something could be done with storing state on the HttpModule instances if I knew for sure that they are bound 1-to-1 with every running HttpApplication instance (but again, I need proof that an HttpApplication instance is 1-to-1 with my HttpModule's instance). Any valuable and reputable links on these topics are much appreciated... It would be great to find something particularly well-suited for the per-request situation, because otherwise I may end up with something ugly... probably either some 'broader'-scoped storage plus hacks to give different requests different keys in the storage, OR a thread-local thing, thereby committing to the theory that IIS/ASP.NET will not ever fire the first event from one thread and the second event from another, and so on.
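
    For what it's worth (not stated in the question): ASP.NET already provides exactly this scope in HttpContext.Items, a per-request dictionary that lives and dies with the request and is never shared across concurrent requests. A minimal sketch from a module; the type and key names are illustrative:

    ```
    using System.Web;

    public class MyState { /* whatever needs to survive between the two events */ }

    public class RequestScopedModule : IHttpModule
    {
        private const string Key = "MyModule.State";   // illustrative key

        public void Init(HttpApplication app)
        {
            app.PreRequestHandlerExecute += (sender, e) =>
            {
                var ctx = ((HttpApplication)sender).Context;
                ctx.Items[Key] = new MyState();        // stored per request
            };

            app.PostRequestHandlerExecute += (sender, e) =>
            {
                var ctx = ((HttpApplication)sender).Context;
                var state = (MyState)ctx.Items[Key];   // same request, same object
                // ... use state ...
                ctx.Items.Remove(Key);                 // explicit discard
            };
        }

        public void Dispose() { }
    }
    ```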

    Read the article

  • Convert Google Analytics cookies to Local/Session Storage

    - by David Murdoch
    Google Analytics sets 4 cookies that will be sent with all requests to that domain (and often its subdomains). From what I can tell, no server actually uses them directly; they're only sent with __utm.gif as a query param. Now, obviously Google Analytics reads, writes and acts on their values, and they will need to be available to the GA tracking script. So, what I am wondering is if it is possible to:
    1. Rewrite the __utm* cookies to local storage after ga.js has written them.
    2. Delete them after ga.js has run.
    3. Rewrite the cookies FROM local storage back to cookie form right before ga.js reads them.
    4. Start over.
    Or, monkey-patch ga.js to use local storage before it begins the cookie read/write part. Obviously, if we are going so far out of the way to remove the __utm* cookies, we'll want to also use the async variant of Analytics. I'm guessing the down vote was because I didn't ask a question. DOH! My questions are: Can it be done as described above? If so, why hasn't it been done? I have a default HTML/CSS/JS boilerplate template that passes YSlow, PageSpeed, and Chrome's Audit with near-perfect scores. I'm really looking for a way to squeeze those remaining cookie bytes out of Google Analytics in browsers that support local storage.
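
    Not an existing library feature, just a sketch of steps 1-3 in plain JavaScript. One caveat the snippet glosses over: ga.js sets its cookies on the top-level domain (e.g. .example.com), so the delete and rewrite must use the same domain and path attributes to hit the same cookies.

    ```
    // Stash GA's __utm* cookies in localStorage and expire the originals.
    function stashUtmCookies() {
      document.cookie.split(/;\s*/).forEach(function (c) {
        var eq = c.indexOf('=');
        var name = c.slice(0, eq);
        if (name.indexOf('__utm') === 0) {
          localStorage.setItem(name, c.slice(eq + 1));
          document.cookie = name + '=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
        }
      });
    }

    // Restore them right before ga.js needs to read them.
    function restoreUtmCookies() {
      for (var i = 0; i < localStorage.length; i++) {
        var name = localStorage.key(i);
        if (name.indexOf('__utm') === 0) {
          document.cookie = name + '=' + localStorage.getItem(name) + '; path=/';
        }
      }
    }
    ```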

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine, and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):
    1. All NFS-based VMs remain up and working perfectly through the failover and after.
    2. All VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the filesystem goes read-only.
    To get the VMs working again I have to do the following:
    1. Force shutdown of the "broken" VMs.
    2. Detach the iSCSI SR.
    3. Re-attach the iSCSI SR.
    4. Boot the VM on a different server (5 in my pool).
    If I don't boot on a different server I get this error: "Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")". The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
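
    Not from the post (it doesn't say which settings were changed): for reference, the open-iscsi timeout knobs that normally govern how long I/O survives a path outage look like this, with illustrative values.

    ```
    # /etc/iscsi/iscsid.conf - illustrative values, not the poster's config
    # How long the initiator queues I/O for a dead session before erroring it
    # out to the filesystem (must cover the failover window):
    node.session.timeo.replacement_timeout = 120

    # NOP-Out pings used to detect that a connection has died:
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 30
    ```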

    Read the article

  • Efficient algorithm for finding largest eigenpair of small general complex matrix

    - by mklassen
    I am looking for an efficient algorithm to find the largest eigenpair of a small, general (non-square, non-sparse, non-symmetric), complex matrix A of size m x n. By small I mean m and n are typically between 4 and 64, and usually around 16, but with m not equal to n. This problem is straightforward to solve with the general LAPACK SVD algorithms, i.e. gesvd or gesdd. However, as I am solving millions of these problems and only require the largest eigenpair, I am looking for a more efficient algorithm. Additionally, in my application the eigenvectors will generally be similar for all cases. This led me to investigate Arnoldi-iteration-based methods, but I have found neither a good library nor an algorithm that applies to my small general complex matrix. Is there an appropriate algorithm and/or library?
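
    Not from the thread: for a non-square A the "largest eigenpair" is the largest singular triplet, and the observation that the vectors are similar across problems is exactly what makes plain alternating power iteration attractive, since a warm start from the previous solution cuts the iteration count. A sketch in Python/NumPy (tolerance and iteration cap are illustrative):

    ```
    import numpy as np

    def largest_singular_triplet(A, v0=None, tol=1e-10, max_iter=200):
        """Alternating power iteration: returns (sigma, u, v) with A @ v ~ sigma * u.
        Pass the v from a similar previous problem as v0 to converge in few steps."""
        m, n = A.shape
        if v0 is None:
            v = np.random.randn(n) + 1j * np.random.randn(n)
        else:
            v = v0.astype(complex)
        v /= np.linalg.norm(v)
        sigma = 0.0
        for _ in range(max_iter):
            u = A @ v                    # power step through A ...
            s = np.linalg.norm(u)
            u /= s
            v = A.conj().T @ u           # ... and back through A^H
            v /= np.linalg.norm(v)
            if abs(s - sigma) < tol * s: # sigma has converged
                break
            sigma = s
        return sigma, u, v
    ```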

    Read the article

  • Problem with Efficient Gridview paging without datasource control

    - by Ronnie Overby
    I am trying to do efficient paging with a GridView without using a datasource control. By efficient, I mean I only retrieve the records that I intend to show. I am trying to use the PagerTemplate to build my pager functionality. In short, the problem is that if I bind only the records that I intend to show on the current page, the GridView doesn't render its pager template, so I don't get the paging controls. It's almost as if I MUST bind more records than I intend to show on a given page, which is not something I want to do.

    Read the article

  • Efficient way to calculate byte length of a character, depending on the encoding

    - by BalusC
    What's the most efficient way to calculate the byte length of a character, taking the character encoding into account? In UTF-8, for example, characters have a variable byte length, so each character needs to be determined individually. So far I've come up with this: char c = getItSomehow(); String encoding = "UTF-8"; int length = new String(new char[] { c }).getBytes(encoding).length; But this is clumsy and inefficient in a loop, since a new String needs to be created every time. I can't find other, more efficient ways in the Java API. I imagine this can be done with bitwise operations like bit shifting, but that's my weak point and I'm unsure how to take the encoding into account here :) If you question the need for this, check this topic.
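
    Not from the question: for UTF-8 specifically, the byte length follows directly from the magnitude of the code point, so no String or encoder needs to be allocated per character. A sketch (the surrogate-handling note is an assumption about the caller):

    ```
    // UTF-8 byte length from a code point; pair surrogates into a code point
    // first (e.g. with Character.toCodePoint) before calling this.
    static int utf8Length(int codePoint) {
        if (codePoint < 0x80)    return 1;  // 7-bit ASCII
        if (codePoint < 0x800)   return 2;  // up to 11 bits
        if (codePoint < 0x10000) return 3;  // rest of the BMP
        return 4;                           // supplementary planes
    }
    ```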

    Read the article

  • What is the most efficient Java Collections library?

    - by dehmann
    What is the most efficient Java Collections library? A few years ago, I did a lot of Java and had the impression back then that trove is the best (most efficient) Java Collections implementation. But when I read the answers to the question "Most useful free Java libraries?" I noticed that trove is hardly mentioned. So which Java Collections library is best now? UPDATE: To clarify, I mostly want to know what library to use when I have to store millions of entries in a hash table etc. (need a small runtime and memory footprint).
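
    For context (not from the thread): the main win of libraries like Trove is primitive-keyed collections that skip boxing, which is where most of the memory overhead of java.util.HashMap<Integer, Integer> goes. An illustrative sketch using Trove 3.x package paths (the package name is an assumption about the Trove version):

    ```
    import gnu.trove.map.hash.TIntIntHashMap;

    public class TroveDemo {
        public static void main(String[] args) {
            // int -> int map: no Integer boxes, open addressing, compact memory
            TIntIntHashMap counts = new TIntIntHashMap();
            for (int i = 0; i < 1000000; i++) {
                counts.adjustOrPutValue(i % 1024, 1, 1); // +1, or insert 1 if absent
            }
            System.out.println(counts.get(42));
        }
    }
    ```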

    Read the article

  • Home Server: cpu virtualisation, what to choose?

    - by Huygens
    I'm looking for virtualisation solutions for storage and OS for a home server - a sort of private cloud where I manage the storage space independently of the VM one. This question focuses on VM (or compute instance) management and what would best suit my needs (I have another question related to storage management). My use cases are: a backup server (rsync and other services running); a personal cloud server (a kind of owned Dropbox system, à la ownCloud, with a few users foreseen); and a media server (streaming videos and displaying photos). Here are my environment and wishes. Server: HP ProLiant MicroServer with 8 GB RAM (AMD Turion dual core with AMD-V technology). OS types: only Linux (perhaps a *BSD VM in the future); the Linux distribution does not matter - I'm familiar with RHEL, Fedora, Suse and Ubuntu, but any other recommendation will be fine. 2-3 VMs foreseen: backup server, ownCloud server and media server (optional). Those are only servers, so no graphical console is needed (I don't need VirtualBox). By VM I mean a virtualised environment like KVM, Xen, etc., or a compute instance like with OpenStack; storage should be "virtualised/cloudified" - see my other question. A VM should be able to be migrated to another server in the future if performance can no longer be fulfilled by the current server. It does not matter if the installation of such a setup is complicated, as long as the management tools allow for easy maintenance. I don't have Windows at home, so the solution should be Linux-friendly; web-based management would be nice, but native apps are OK too. The system should be easy to enhance by adding a new server and migrating some of the VMs to it. So it's really a kind of private cloud on which I could run some Linux OSes. I would prefer free (libre, as in free speech) and open source tools, but it does not have to be free as in free beer. So: Xen, KVM, VirtualBox or OpenStack? What would you recommend?
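
    One practical check before choosing between KVM and Xen (not part of the original question): KVM requires the CPU's hardware virtualisation, which on this Turion means AMD-V, to be present and enabled in the BIOS. A quick sketch on Ubuntu:

    ```
    # Non-zero output means the CPU advertises VT-x (vmx) or AMD-V (svm)
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # The cpu-checker package gives a direct verdict
    sudo apt-get install cpu-checker
    sudo kvm-ok
    ```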

    Read the article

  • An efficient code to determine if a set is a subset of another set

    - by Edward
    I am looking for an efficient way to determine if a set is a subset of another set in Matlab or Mathematica. Example:

    Set A = [1 2 3 4]
    Set B = [4 3]
    Set C = [3 4 1]
    Set D = [4 3 2 1]

    The output should be: Set A. Sets B and C belong to set A because A contains all of their elements, and therefore they can be deleted (the order of elements in a set doesn't matter). Set D has the same elements as set A, and since set A precedes set D, I would like to simply keep set A and delete set D. So there are two essential rules:
    1. Delete a set if it is a subset of another set.
    2. Delete a set if its elements are the same as those of a preceding set.
    My Matlab code is not very efficient at doing this - it mostly consists of nested loops. Suggestions are very welcome! Additional explanation: the issue is that with a large number of sets there will be a very large number of pairwise comparisons.
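
    Not the poster's code: a compact Matlab sketch of the two rules using ismember. The pairwise loops remain, but the inner subset test is vectorised.

    ```
    % Example data from the question
    sets = {[1 2 3 4], [4 3], [3 4 1], [4 3 2 1]};
    n = numel(sets);
    keep = true(1, n);
    for i = 1:n
        for j = 1:n
            if i ~= j && keep(j) && all(ismember(sets{i}, sets{j}))
                % i is covered by j: drop i if j is strictly bigger (rule 1)
                % or j has the same elements but precedes i (rule 2)
                if numel(unique(sets{j})) > numel(unique(sets{i})) || j < i
                    keep(i) = false;
                    break
                end
            end
        end
    end
    sets = sets(keep);   % leaves only [1 2 3 4]
    ```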

    Read the article

  • Efficient way to ASCII encode UTF-8

    - by Andreas Gohr
    I'm looking for a simple and efficient way to store UTF-8 strings in ASCII-7. By efficient I mean the following:
    1. All ASCII chars in the input should stay ASCII chars in the output.
    2. The resulting string should be as short as possible.
    3. The operation needs to be reversible without any data loss.
    4. There should be no restriction on the input length.
    5. The whole UTF-8 range should be allowed.
    My first idea was to use Punycode (IDNA), as it fits the first three requirements, but it fails at the last two. Can anyone recommend an alternative encoding scheme? Even better if there's some code available to look at.
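
    Not an answer from the page, just a sketch of the simplest scheme meeting requirements 1 and 3-5; it trades requirement 2 for simplicity, since each non-ASCII UTF-8 byte costs three output chars. Python purely for illustration:

    ```
    # Escape '~' and every non-printable/non-ASCII byte as ~XX hex pairs.
    def to_ascii(s):
        out = []
        for b in s.encode("utf-8"):
            if 0x20 <= b < 0x7F and b != 0x7E:   # printable ASCII except '~'
                out.append(chr(b))
            else:
                out.append("~%02X" % b)
        return "".join(out)

    def from_ascii(a):
        buf, i = bytearray(), 0
        while i < len(a):
            if a[i] == "~":
                buf.append(int(a[i+1:i+3], 16)); i += 3
            else:
                buf.append(ord(a[i])); i += 1
        return buf.decode("utf-8")

    assert from_ascii(to_ascii("héllo ☃")) == "héllo ☃"
    ```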

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >