Search Results

Search found 5416 results on 217 pages for 'storage'.

Page 10 of 217

  • Sun ZFS Storage Appliance (Japanese-language article)

    - by Norihito Yachita
    Japanese-language article describing adoption of the Sun ZFS Storage 7320 Appliance as the storage platform for an IaaS (Infrastructure as a Service) cloud offering, with InfiniBand connectivity between servers and storage; it also references the Sun ZFS Storage 7320 Appliance's availability since May 2011.

    Read the article

  • SAS Expanders vs Direct Attached (SAS)?

    - by jemmille
    I have a storage unit with two backplanes. One backplane holds 24 disks, the other holds 12. Each backplane is independently connected to the RAID card through a single SFF-8087 port (4 lanes, 12 Gbit/s total). Here is where my question really comes in: can a backplane be overloaded, and how easily? All the disks in the machine are WD RE4 WD1003FBYX (black) drives with average write speeds of 115 MB/s and average read speeds of 125 MB/s. I know the numbers vary with the RAID level and filesystem on top, but it seems that a 24-disk backplane with only one SFF-8087 connector could saturate the bus to the point of actually slowing things down. Based on my math, if I had a RAID-0 across all 24 disks and asked for a large file, I should in theory get 24 x 115 MB/s, which translates to 22.08 Gbit/s of total throughput. Either I'm confused or this backplane is horribly designed, at least for a performance environment. I'm looking at switching to a model where each drive has its own channel from the backplane (and new HBAs or a new RAID card).

    EDIT: more details. We have used pure Linux (CentOS), OpenSolaris, software RAID, hardware RAID, EXT3/4, and ZFS. Here are some examples using bonnie++:

    4-disk RAID-0, ZFS
      WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
      194MB/s   19%  92MB/s    11%  200MB/s    8%  310/sec
      194MB/s   19%  93MB/s    11%  201MB/s    8%  312/sec
      --------- ---  --------- ---  --------- ---  ---------
      389MB/s   19%  186MB/s   11%  402MB/s    8%  311/sec

    8-disk RAID-0, ZFS
      WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
      324MB/s   32%  164MB/s   19%  346MB/s   13%  466/sec
      324MB/s   32%  164MB/s   19%  348MB/s   14%  465/sec
      --------- ---  --------- ---  --------- ---  ---------
      648MB/s   32%  328MB/s   19%  694MB/s   13%  465/sec

    12-disk RAID-0, ZFS
      WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
      377MB/s   38%  191MB/s   22%  429MB/s   17%  537/sec
      376MB/s   38%  191MB/s   22%  427MB/s   17%  546/sec
      --------- ---  --------- ---  --------- ---  ---------
      753MB/s   38%  382MB/s   22%  857MB/s   17%  541/sec

    16-disk RAID-0 (now it gets interesting)
      WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
      359MB/s   34%  186MB/s   22%  407MB/s   18%  1397/sec
      358MB/s   33%  186MB/s   22%  407MB/s   18%  1340/sec
      --------- ---  --------- ---  --------- ---  ---------
      717MB/s   33%  373MB/s   22%  814MB/s   18%  1368/sec

    20-disk RAID-0, ZFS
      WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
      371MB/s   37%  188MB/s   22%  450MB/s   19%  775/sec
      370MB/s   37%  188MB/s   22%  447MB/s   19%  797/sec
      --------- ---  --------- ---  --------- ---  ---------
      741MB/s   37%  376MB/s   22%  898MB/s   19%  786/sec

    24-disk RAID-1, ZFS
      WRITE     CPU  RE-WRITE  CPU  READ      CPU  RND-SEEKS
      347MB/s   34%  193MB/s   22%  447MB/s   19%  907/sec
      347MB/s   34%  192MB/s   23%  446MB/s   19%  933/sec
      --------- ---  --------- ---  --------- ---  ---------
      694MB/s   34%  386MB/s   22%  894MB/s   19%  920/sec

    28-disk RAID-0, ZFS
    32-disk RAID-0, ZFS
    36-disk RAID-0, ZFS

    More details: here is the exact unit: http://www.supermicro.com/products/chassis/4U/847/SC847E1-R1400U.cfm
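    A quick back-of-the-envelope sketch of that throughput math (the per-lane rate of 3 Gbit/s is an assumption consistent with the "4 channel/12Gbit" figure above; the drive numbers are the ones quoted in the question):

      # Back-of-the-envelope comparison: aggregate drive throughput vs. one SFF-8087 link.
      DRIVES = 24
      DRIVE_MBPS = 115            # MB/s per WD RE4 drive (sequential, from the question)
      LANES = 4                   # lanes in one SFF-8087 connector
      LANE_GBITS = 3.0            # Gbit/s per lane (assumption; matches the 12 Gbit total)

      aggregate_gbits = DRIVES * DRIVE_MBPS * 8 / 1000   # MB/s -> Gbit/s across all drives
      link_gbits = LANES * LANE_GBITS

      print(f"Theoretical 24-drive aggregate: {aggregate_gbits:.2f} Gbit/s")   # 22.08
      print(f"Single SFF-8087 link:           {link_gbits:.2f} Gbit/s")        # 12.00
      print(f"Oversubscription:               {aggregate_gbits / link_gbits:.1f}x")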

    Read the article

  • What kind of storage do people actually use for VMware ESX servers?

    - by Dirk Paessler
    VMware and many network evangelists try to tell you that sophisticated (= expensive) Fibre Channel SANs are the "only" storage option for VMware ESX and ESXi servers. Well, yes, of course. Using a SAN is fast, reliable and makes vMotion possible. Great. But: can all ESX/ESXi users really afford SANs? My theory is that less than 20% of all VMware ESX installations on this planet actually use Fibre Channel or iSCSI SANs. Most of these installations will be in larger companies who can afford them. I would predict that most VMware installations use "attached storage" (vmdks are stored on disks inside the server). Most of them run in SMEs, and there are so many of them! We run two ESX 3.5 servers with attached storage and two ESX 4 servers with an iSCSI SAN, and the real-life difference between the two is barely noticeable :-) Do you know of any official statistics on this question? What do you use as your storage medium?

    Read the article

  • Does cloud storage replicate data over many data centers, and if so, can I benefit from content delivery?

    - by Berkay
    Let's assume I want to use a cloud storage service from one of the cloud storage providers. I have X GB of structured and unstructured data, and I will use this data as the content of my interactive web page. Now I have some doubts about this. I have many users, and they visit my web page from various countries. To be more specific: first, is my data stored in only one of the provider's cloud storage data centers, or is it replicated over many of their data centers? Second, if it is replicated, how can I benefit from a content delivery network (matching and placing users' content in the storage data centers nearest to them)?

    Read the article

  • How do I replace a hard drive that is in a two-way mirror storage space on Windows 8?

    - by Jon
    I have a storage space in Windows 8 doing a two-way mirror across three hard drives. The sizes are 297 GB, 189 GB, and 70 GB. I would like to replace the 70 GB drive with a larger one. My thought was to remove that drive from the space via the Storage Spaces control panel, shut down, replace the drive with a bigger one, reboot, and add the new drive to the storage space. However, I can't find any option to remove a drive from a storage space in the control panel. Should I just shut down and swap out the small drive, or is there another process for safely replacing the old drive? (By the way, the old drive is still operational.)

    Read the article

  • Azure storage - decimal point of a double ignored on save

    - by Fabio Milheiro
    I have a value that is correctly stored in a property of an object, but when I save the changes to the Azure storage database, the double value is saved with the decimal point ignored (7.1000000003 is saved as 711). Also, the property is then changed to 711.0. How do I solve this problem? The field is already set to double in both the class and the database table.

    Read the article

  • How does a portable thread-specific storage mechanism's naming scheme generate thread-relative unique keys?

    - by Hassan Syed
    A portable thread-specific storage reference/identity mechanism, of which boost/thread/tss.hpp is an instance, needs a way to generate a unique key for itself. This key is unique within the scope of a thread and is subsequently used to retrieve the object it references. This mechanism is used in code written in a thread-neutral manner. Since Boost is a portable example of this concept, how specifically does such a mechanism work?
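    For illustration only (this is not how Boost implements it), here is a minimal Python sketch of the idea: each slot object obtains a process-wide unique key when it is constructed, and every thread lazily builds its own key-to-value table, so the same slot resolves to a different object in each thread:

      import itertools
      import threading

      # One per-thread table shared by all slots; each slot indexes it by its own key.
      _thread_table = threading.local()
      _key_counter = itertools.count()
      _key_lock = threading.Lock()

      class ThreadSpecificSlot:
          """Illustrative sketch of a thread-specific storage slot."""

          def __init__(self):
              # Generate a process-wide unique key for this slot.
              with _key_lock:
                  self._key = next(_key_counter)

          def set(self, value):
              if not hasattr(_thread_table, "values"):
                  _thread_table.values = {}          # created lazily per thread
              _thread_table.values[self._key] = value

          def get(self, default=None):
              return getattr(_thread_table, "values", {}).get(self._key, default)

      slot = ThreadSpecificSlot()

      def worker(name):
          slot.set(name)                             # same slot object, per-thread value
          print(threading.current_thread().name, "->", slot.get())

      threads = [threading.Thread(target=worker, args=(f"value-{i}",)) for i in range(3)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()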

    Read the article

  • Optimal XML storage engine

    - by nixau
    I'm looking for an optimal open source solution for storing XML documents and querying them effectively. The amount of data will be small. As far as I understand, native XML databases might be a proper solution for my case, since they store XML documents in a highly efficient way. It would be great to learn from your experience. Any suggestions for a proper solution? Do you have any experience employing XML storage engines in your apps?

    Read the article

  • Image upload storage strategies

    - by MatW
    When a user uploads an image to my site, the image goes through this process:

    1. user uploads pic
    2. store pic metadata in db, giving the image a unique id
    3. async image processing (thumbnail creation, cropping, etc.)
    4. all images are stored in the same uploads folder

    So far the site is pretty small, and there are only ~200,000 images in the uploads directory. I realise I'm nowhere near the physical limit of files within a directory, but this approach clearly won't scale, so I was wondering if anyone had any advice on upload/storage strategies for handling large volumes of image uploads. A sketch of one such strategy follows below.
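    One common strategy (a sketch with assumed paths and names, not something prescribed in the question) is to fan uploads out into nested subdirectories derived from a hash of the image's unique id, so that no single directory grows without bound:

      import hashlib
      import os
      import shutil

      UPLOAD_ROOT = "/var/www/uploads"    # hypothetical upload root

      def shard_path(image_id: str) -> str:
          """Map an image id to a nested directory, e.g. <root>/ab/cd/<id>.jpg."""
          digest = hashlib.sha1(image_id.encode("utf-8")).hexdigest()
          return os.path.join(UPLOAD_ROOT, digest[:2], digest[2:4], f"{image_id}.jpg")

      def store_upload(image_id: str, tmp_file: str) -> str:
          """Move an uploaded temp file into its sharded location."""
          dest = shard_path(image_id)
          os.makedirs(os.path.dirname(dest), exist_ok=True)
          shutil.move(tmp_file, dest)
          return dest

      # Two 2-hex-character levels give 256 * 256 = 65,536 buckets, which keeps each
      # directory small even with millions of images.
      print(shard_path("123456"))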

    Read the article

  • Simplest Azure Storage Manipulation possible

    - by Hurricanepkt
    I need to integrate some blob storage into an existing ASP.NET MVC site. My hope is to be able to just add some references and then do puts and gets, but I cannot find any simple example of how to do this (that hasn't been deprecated to the point that it no longer works). I have tried using StorageClient, but CreateCloudBlobClient() doesn't seem to work.
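    The question is about the .NET StorageClient library, but purely as an illustration of the basic put/get flow against blob storage, here is a minimal sketch using the azure-storage-blob Python package (v12-style API); the connection string, container name and file names are placeholders:

      from azure.storage.blob import BlobServiceClient

      # Placeholders: substitute your own storage account connection string and container.
      CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
      service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
      container = service.get_container_client("uploads")

      # Put: upload a local file as a block blob.
      with open("photo.jpg", "rb") as data:
          container.upload_blob(name="photo.jpg", data=data, overwrite=True)

      # Get: download the blob back to a local file.
      with open("photo-copy.jpg", "wb") as out:
          out.write(container.download_blob("photo.jpg").readall())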

    Read the article

  • Right way to access the Google Cloud Storage bucket via Public API

    - by SyBer
    I'm trying the following request to access the bucket using curl, via the public API:

      curl -X POST -H 'Content-Type: image/jpeg' -d @xxx.jpeg 'https://www.googleapis.com/upload/storage/v1/b/clips.eyecam.com/o?uploadType=media&name=x.jpeg&key=XXX'

    with XXX being the key generated for the public API. However, I'm getting an authorization failure:

      {
        "error": {
          "errors": [
            {
              "domain": "global",
              "reason": "required",
              "message": "Login Required",
              "locationType": "header",
              "location": "Authorization"
            }
          ],
          "code": 401,
          "message": "Login Required"
        }
      }

    It seems the request is incorrect and does not pass the authorization key; any idea what the right form of the request would be?
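    The error points at a missing Authorization header: for the Google Cloud Storage JSON API, an upload like this generally needs an OAuth 2.0 access token rather than just an API key. A sketch of the same upload with a Bearer token (the token value is a placeholder, and obtaining it, e.g. via a service account, is outside this snippet):

      import requests

      ACCESS_TOKEN = "ya29...."          # placeholder OAuth 2.0 access token
      BUCKET = "clips.eyecam.com"        # bucket from the question
      OBJECT_NAME = "x.jpeg"

      with open("xxx.jpeg", "rb") as f:
          resp = requests.post(
              f"https://www.googleapis.com/upload/storage/v1/b/{BUCKET}/o",
              params={"uploadType": "media", "name": OBJECT_NAME},
              headers={
                  "Authorization": f"Bearer {ACCESS_TOKEN}",
                  "Content-Type": "image/jpeg",
              },
              data=f,                    # stream the file body
          )
      print(resp.status_code, resp.json())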

    Read the article

  • Structured Storage

    - by user342735
    Hi all, I have a file that is in structured storage format. I was wondering whether this format can be accessed concurrently by threads, meaning multiple threads reading the different streams and processing them at once. The objective is to load the file faster. When I refer to a file, I mean one that represents CAD information. Thank you.

    Read the article

  • What's the best storage for text?

    - by maryam
    Hi, I have an application that is used only to show information and search data. My data type is text, and it is large in size. Would you please tell me what the best storage for it is? Also, I don't want to use an SQL database. Thanks.

    Read the article

  • Converting a large SQL Server Database to Azure Storage

    - by Laith
    Hi guys, I have a very large database structure (the data is not important at this point; I can migrate the info in the db pretty easily once the structure is done). It all resides in SQL Server, and I have even published it to SQL Azure, but thinking about SQL Azure's size limitations made me decide to move most of the tables that do not need all the bells and whistles of SQL Azure to Azure Table and blob storage. I was thinking of creating a T4 (.tt) template that does that, but was wondering if there is a tool that already does it. Any ideas or thoughts? The only tables I would keep in SQL Azure are anything related to transactions, like payments. I appreciate your thoughts and advice.

    Read the article

  • Delphi-5 single-file storage solution?

    - by pastacool
    Hi! Is there a Delphi-5 solution to easily integrate single-file storage into existing code? I would like to have files like Java *.jar or OpenOffice document files, which are zipped/compressed files and folders but with their own file extension. Edit: I know some ZIP-capable components, but in a nutshell I want to access files within the "container" and use normal file handling routines on them (e.g. TStringList.SaveToFile). Any compression/decompression overhead should be handled by the component.

    Read the article

  • Looking for a Magnetic Card Reader with data storage

    - by Omar Sharif
    I am looking for a magnetic card reader with data storage of about 2 GB. The reader would be placed in the open under a shade, and would be exposed to temperatures from -5 °C to 50 °C. Its job is to read customer loyalty cards issued to regular customers of a gas station. Each time customers get gas, they will swipe their card to mark their presence. The swiped data would be stored in the reader and transferred at intervals to a PC in the office. The customer visit data would then be used to award gifts or benefits to frequently visiting clients. Are any ready-made solutions available? Please advise. Omar

    Read the article

  • cross-platform frameworks for storage + metadata?

    - by Jason S
    I don't quite know what to use for terminology, so bear with me... Are there any cross-platform frameworks out there that facilitate a kind of "virtual file storage" to encapsulate adding files along with a database of metadata? I'm thinking about something along the lines of iTunes or iPhoto, where the program manages a whole bunch of files (in those cases audio or image files) and has a database of metadata so you can organize/find those files easily. I'd like to cobble together something along those lines for files in general. edit: I am hesitant to store files in a database alone, e.g. MySQL, as there would be potentially tens of gigabytes in my application (this issue has been mentioned in several SO posts, see this one that gives several links to others). I'm looking at CouchDB though and maybe it has promise....
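    As a sketch of the pattern the question describes (managed copies of files kept on disk plus a small metadata database; the directory layout, schema and tag field here are illustrative assumptions, not any particular framework's design):

      import hashlib
      import pathlib
      import shutil
      import sqlite3

      LIBRARY = pathlib.Path("library")          # where managed copies of files live
      DB = sqlite3.connect("library.db")
      DB.execute("""CREATE TABLE IF NOT EXISTS files (
                        sha1 TEXT PRIMARY KEY,
                        original_name TEXT,
                        tags TEXT)""")

      def add_file(path: str, tags: str = "") -> str:
          """Copy a file into the library and record its metadata; return its id."""
          src = pathlib.Path(path)
          sha1 = hashlib.sha1(src.read_bytes()).hexdigest()
          dest = LIBRARY / sha1[:2] / sha1        # content-addressed on-disk layout
          dest.parent.mkdir(parents=True, exist_ok=True)
          shutil.copy2(src, dest)
          DB.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?)",
                     (sha1, src.name, tags))
          DB.commit()
          return sha1

      def find(tag: str):
          """Look up files whose tag string contains the given tag."""
          return DB.execute("SELECT sha1, original_name FROM files WHERE tags LIKE ?",
                            (f"%{tag}%",)).fetchall()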

    Read the article

  • Fetch videos from a Sony Handycam to Linux

    - by bstpierre
    I've got a Sony Handycam DCR-DVD101. When I connect the USB cable to my laptop (Ubuntu 10) it doesn't mount any storage device. If I run usb-devices, I see:

      T: Bus=02 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#= 6 Spd=480 MxCh= 0
      D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
      P: Vendor=054c ProdID=00c1 Rev=01.00
      S: Manufacturer=SONY
      S: Product=Storage Device
      C: #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=2mA
      I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=05 Prot=50 Driver=usb-storage

    The driver says usb-storage, but I'm not sure how to get the device mounted. Is there a way to make this work? Update: checking dmesg, I see:

      [259072.576559] usb 2-1.1: new high speed USB device using ehci_hcd and address 6
      [259072.687200] usb 2-1.1: configuration #1 chosen from 1 choice
      [259072.836188] Initializing USB Mass Storage driver...
      [259072.836476] scsi5 : SCSI emulation for USB Mass Storage devices
      [259072.836632] usb-storage: device found at 6
      [259072.836636] usb-storage: waiting for device to settle before scanning
      [259072.836660] usbcore: registered new interface driver usb-storage
      [259072.836666] USB Mass Storage support registered.
      [259077.830410] usb-storage: device scan complete
      [259077.832343] scsi 5:0:0:0: CD-ROM SONY DDX-A1010 R1.0 PQ: 0 ANSI: 0
      [259077.888167] sr1: scsi3-mmc drive: 0x/0x pop-up
      [259077.888446] sr 5:0:0:0: Attached scsi CD-ROM sr1
      [259077.888593] sr 5:0:0:0: Attached scsi generic sg2 type 5
      [259080.002079] sr 5:0:0:0: [sr1] Unhandled sense code
      [259080.002085] sr 5:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
      [259080.002091] sr 5:0:0:0: [sr1] Sense Key : Blank Check [current]
      [259080.002097] sr 5:0:0:0: [sr1] Add. Sense: No additional sense information
      [259080.002104] sr 5:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 00 00
      [259080.002117] end_request: I/O error, dev sr1, sector 0
      [259080.002123] Buffer I/O error on device sr1, logical block 0
      [259080.002128] Buffer I/O error on device sr1, logical block 1

    Those I/O errors don't look good. Is there any hope?

    Read the article

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl, similar to Amazon S3. I already have an Amazon S3 clone that I found online called parkplace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure which modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

    - Easy API implementation for client-side apps (maybe RESTful, but with extras like mkdir and cp?)
    - Centralized database server for the USERDB (maybe PostgreSQL?)
    - Logging of all connections, bandwidth used, well, pretty much everything, to a centralized server (maybe PostgreSQL again?)
    - Easy server-side configuration (config file(s) stored on the servers)
    - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases)
    - Fast
    - High uptime
    - Low memory usage
    - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
    - Maybe a cache of some sort (memcached or Perlbal or something else?)

    Thanks in advance

    Read the article

  • Why are USB sticks so much slower than solid-state drives?

    - by Jonas
    From what I understand, USB flash memory and solid-state drives are based on similar technology, NAND flash memory. But USB sticks are usually quite slow, with read and write speeds of 5-10 MB per second, while solid-state drives are usually very fast, typically 100-570 MB per second. Why are solid-state drives so much faster than USB sticks? And why aren't USB sticks faster than 5-10 MB per second? Is it simply that SSDs use parallel access to the NAND flash memory, or are there other reasons?

    Read the article

  • Online FTP or file sharing service [on hold]

    - by Frede
    We need to share large files with clients, e.g. clients upload a large file, we modify it and later make it available for download. Up until now we've used FTP, but this has a number of drawbacks: a lot of management of files, setting up accounts, etc. We are therefore considering online alternatives. Requirements:

    - Cheap 8-)
    - Easy to use, ideally just requiring a web browser, but also possible for power users to connect e.g. via FTPS/SFTP
    - No registration required for users to upload/download files. We ourselves of course need to be able to log in, view uploaded files and upload new files.
    - No per-user fee
    - High bandwidth. As files may be GBs in size, neither upload nor download speed can be too slow
    - Secure. Encryption during upload/download. No way for users to access uploaded files: once a user has uploaded a file, they (and anyone else besides us) should not be able to access it. To download files, users get a link with a password; ideally the link expires after a set time.
    - No software installation

    We do NOT need any sync features, backup, versioning etc. Just a quick, easy, secure way for us to share files with our clients. Services like JustCloud, DriveHQ etc. seem bloated and "too much" for what we need. What other alternatives exist? Thanks!

    Read the article
