Search Results

Search found 5416 results on 217 pages for 'storage'.

Page 18/217

  • Are there any USB flash drives or SD cards which use RAID or redundant storage for additional reliability?

    - by Luke Dennis
    I'm looking to get a fault-tolerant USB flash drive, which saves data to multiple independent locations, whether using RAID or some other means to back up data. Has a product like this ever been created, or are my only options to hack something together? (By the way: I'm aware that RAID doesn't prevent data corruption from software or the file system. I'm just looking for something that can handle one of the memory sticks going dead.)

    Read the article

  • Using Google Docs as a Storage System like S3. Is there any limit?

    - by mickthomp
    Hi all, I'm considering upgrading to a Google Docs premium account (gDrive). I'm wondering if it can be used the way I'm using Amazon S3 at the moment. I'd like to upload images. Do you know if there is any limit on the number of images I can upload to my 200GB Google Docs account? I think it could be really useful to have something like that, and we could save some money on our webapps. Thank you ;)

    Read the article

  • Need a solution to store images (1 billion, 1,000,000,000) which users will upload to a website via PHP or JavaScript upload [on hold]

    - by wish_you_all_peace
    I need a solution to store images (1 billion) which users will upload to a website via PHP or JavaScript upload (the website will have 1 billion page views a month, running on Linux Debian distros), assuming a maximum of 20 photos per user: 10 thumbnails of 90px by 90px and 10 large, script-resized images with a maximum width or height of 500px depending on the shape of the image (square, rectangular, horizontal, vertical, etc.). Assume this to be a LEMP-stack (Linux, Nginx, MySQL, PHP) social-media or social-matchmaking type application whose content will be text and images. Since everyone knows that storing tons of images (user-uploaded images in this case) in a single directory, on NFS, etc. is bad, please explain all the details about the architecture and configuration of the entire storage setup you recommend for storing 1 billion images (no third-party cloud storage like S3; it has to be within our private data center using our own hardware and resources). The solution has to cover both the storage itself and how the uploaded images are organized. How would we organize the users' images, given that a single user will have no more than 20 images (10 thumbnails and 10 large images with either width or height of 500px)? Please consider that this has to be organized in a structured way so we can fetch a single user's images programmatically via PHP/JavaScript or an API through some kind of unique user identifier(s).
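    One common answer to the "don't put millions of files in one directory" problem (not from the original question, just a sketch of the usual approach) is to fan the uploads out over hashed subdirectories derived from the user ID, so each directory stays small and a user's 20 images can be located directly from that ID. A minimal Node.js sketch, where the base path, the two-level fan-out and the "thumb"/"large" names are assumptions:

```javascript
// Sketch: map a user ID to a sharded directory so no single directory grows huge.
// Base path, fan-out depth and the "thumb"/"large" names are assumptions.
const crypto = require("crypto");
const path = require("path");

function imageDir(userId, baseDir = "/data/images") {
  // Hash the user ID so consecutive IDs spread evenly across buckets.
  const digest = crypto.createHash("md5").update(String(userId)).digest("hex");
  // Layout: /data/images/<first 2 hex chars>/<next 2 hex chars>/<userId>
  return path.join(baseDir, digest.slice(0, 2), digest.slice(2, 4), String(userId));
}

function imagePath(userId, imageName, size) {
  // size is "thumb" (90x90) or "large" (max 500px), matching the question's layout.
  return path.join(imageDir(userId), size, imageName);
}

console.log(imagePath(12345, "photo01.jpg", "thumb"));
```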

    Read the article

  • STOP PRESS: FY15 Q1 Oracle ZS3 Contest for Partners

    - by Cinzia Mascanzoni
    04 June 2014 - Oracle EMEA Partners Stop Press
    Share an unforgettable experience at the Teatro Alla Scala in Milan
    Dear valued Partner, we are pleased to launch a contest exclusive to our partners dedicated to promoting and selling Oracle Systems! You are essential to the success of Oracle and we want to recognize your contribution and effort in driving Oracle Storage to the market. To show our appreciation we are delighted to announce a contest giving the winners the opportunity to attend a roundtable chaired by senior Oracle executives and spend an unforgettable evening at the magnificent Teatro Alla Scala in Milan, followed by a stay at the Grand Hotel et de Milan, courtesy of Oracle. Recognition will be given to 12 partner companies (10 VARs and 2 VADs) for their ZFS storage booking achievement in the broad market between June 1st and July 18th, 2014.
    Criteria of eligibility: a minimum deal value of $30k is required for qualification; partners who are wholly or partially owned by a public sector organization are not eligible for participation.
    Winners. The winning VARs will be: the highest ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region (1); and the highest Oracle on Oracle (2) ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region. The winning VADs (3) will be: the highest ZS3 or ZBA bookings achiever by COB on July 18th, 2014 in EMEA; and the highest Oracle on Oracle (2) ZS3 or ZBA bookings achiever by COB on July 18th, 2014 in EMEA.
    (1) Two VAR winners for each EMEA region - Eastern Europe & CIS, Middle East & Africa, South Europe, North Europe, UK/Ireland & Israel - as per the criteria outlined above. (2) Oracle on Oracle, in this instance, means ZS3 or ZBA storage attached to DB or DB options, Engineered Systems or SPARC servers sold to the same customer by the same partner within the contest timelines. (3) Two VAD winners, one for each of the criteria outlined above, will be selected from across EMEA.
    Oracle shall be the final arbiter in selecting the winners. All winners will be notified via their Oracle account manager. Full details about the contest, expenses covered by Oracle and the timetable of events can be found on the Oracle EMEA Hardware (Servers & Storage) Partner Community workspace (FY15 Q1 ZFS Partner Contest). Access to the community workspace requires membership; if you are not a member, please register here.
    The prize: winners will be invited to participate in a roundtable chaired by Oracle on Monday, September 8th, 2014 in Milan and to be Oracle's guests that evening at the Teatro Alla Scala. The evening will comprise a private tour of the Scala museum, a cocktail reception in the elegant museum rooms and a performance by the renowned soprano Maria Agresta. Guests will then retire for the evening to the Grand Hotel et de Milan, courtesy of Oracle.
    Good luck! For more information, please contact Sasan Moaveni.
    Regards, Olivier Tordo, Senior Director - Systems Business Development, Oracle EMEA Alliances & Channels

    Read the article

  • Hyper-V 2012 and P2000 SAS SAN

    - by user155950
    Hi, I am having major problems setting up a Hyper-V 2012 cluster on a P2000 SAS SAN. Running System Center VMM 2012 SP1, I am unable to see any storage to create my cluster. Has anyone experienced anything similar? Under fabric and storage I can't add the P2000; all I can do is use Storage Spaces in Server Manager to create a storage pool and virtual disk. This allows me to create a file share which I can add to VMM, but I still can't see any disk to create a cluster. I am just about at the point where I want to tear my hair out, wipe the servers and stick VMware on them, because I know that works, as I have set up several systems like this in the past. The Hyper-V servers can see the storage, and in Server Manager on my management machine it seems to know that both servers can see the same disk. VMM is running on the same machine and it can't see any disk. Help..... Thanks Mike

    Read the article

  • Oracle ZS3 Contest for Partners: Share an unforgettable experience at the Teatro Alla Scala in Milan

    - by Claudia Caramelli-Oracle
    Dear valued Partner, we are pleased to launch a contest exclusive to our partners dedicated to promoting and selling Oracle Systems! You are essential to the success of Oracle and we want to recognize your contribution and effort in driving Oracle Storage to the market. To show our appreciation we are delighted to announce a contest giving the winners the opportunity to attend a roundtable chaired by senior Oracle executives and spend an unforgettable evening at the magnificent Teatro Alla Scala in Milan, followed by a stay at the Grand Hotel et de Milan, courtesy of Oracle. Recognition will be given to 12 partner companies (10 VARs and 2 VADs) for their ZFS storage booking achievement in the broad market between June 1st and July 18th, 2014.
    Criteria of eligibility: a minimum deal value of $30k is required for qualification; partners who are wholly or partially owned by a public sector organization are not eligible for participation.
    Winners. The winning VARs will be: the highest ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region (1); and the highest Oracle on Oracle (2) ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region. The winning VADs (3) will be: the highest ZS3 or ZBA bookings achiever by COB on July 18th, 2014 in EMEA; and the highest Oracle on Oracle (2) ZS3 or ZBA bookings achiever by COB on July 18th, 2014 in EMEA.
    The prize: winners will be invited to participate in a roundtable chaired by Oracle on Monday, September 8th, 2014 in Milan and to be Oracle's guests that evening at the Teatro Alla Scala. The evening will comprise a private tour of the Scala museum, a cocktail reception in the elegant museum rooms and a performance by the renowned soprano Maria Agresta. Guests will then retire for the evening to the Grand Hotel et de Milan, courtesy of Oracle.
    Oracle shall be the final arbiter in selecting the winners, and all winners will be notified via their Oracle account manager. Full details about the contest, expenses covered by Oracle and the timetable of events can be found on the Oracle EMEA Hardware (Servers & Storage) Partner Community workspace (FY15 Q1 ZFS Partner Contest). Remember: access to the community workspace requires membership; if you are not a member, please register here.
    Good luck! For more information, please contact Sasan Moaveni.
    (1) Two VAR winners for each EMEA region - Eastern Europe & CIS, Middle East & Africa, South Europe, North Europe, UK/Ireland & Israel - as per the criteria outlined above. (2) Oracle on Oracle, in this instance, means ZS3 or ZBA storage attached to DB or DB options, Engineered Systems or SPARC servers sold to the same customer by the same partner within the contest timelines. (3) Two VAD winners, one for each of the criteria outlined above, will be selected from across EMEA.

    Read the article

  • Serve up PC hard drive as USB mass storage

    - by sheepsimulator
    Is there a software package available that can serve up a hard-drive internal to a PC and make it available over USB to other USB Master nodes as mass storage? Ex: take your C: or /dev/hda drive on a PC (let's call the computer PC-A), and run a driver program which makes your C: or /dev/hda drive available to external devices as USB mass storage. When you'd hook up another PC (PC-B) to PC-A via USB, it would detect a USB mass storage device, which is C: or /dev/hda on PC-A. Is this even possible? EDIT: I know that there are other ways of making data on a drive available between two different computers (eg. putting PC-A's hdd in a USB-drive-enclosure, or having PC-A make the hdd available via a network share). But I'd like to know if the method that I describe above is even technically possible.

    Read the article

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 for some low-price storage to clear down our SAN. I created the bucket and created a root folder. I set the storage class to Standard and server-side encryption to AES. I started a copy job to move the files; some files copied over and I checked them: Reduced Redundancy, encryption set to none. WTF? So I deleted all files and folders, manually recreated the folder structure, and again set the storage class and encryption level. I copied some files and bam, still showing (at the file level) as Reduced Redundancy and no encryption. So my question is this: (a) is it really raid'd and encrypted and just not showing it properly (the root folder shows it, so how can the files not?), or (b) am I being a huge tool and missing something?
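    A note on mechanics (not from the original post): storage class and server-side encryption are per-object properties set when each object is written, so objects copied without those headers keep whatever the copy tool defaulted to; the settings shown on the bucket or "root folder" do not retroactively change existing objects. A hedged sketch of re-copying an object onto itself with the desired class and encryption, assuming the AWS SDK for JavaScript v3 (bucket and key names are placeholders):

```javascript
// Sketch: copy an object onto itself, forcing STANDARD storage class and
// SSE-S3 encryption. Assumes @aws-sdk/client-s3; bucket/key are placeholders.
const { S3Client, CopyObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "us-east-1" });

async function fixObject(bucket, key) {
  await s3.send(new CopyObjectCommand({
    Bucket: bucket,
    Key: key,
    CopySource: `${bucket}/${key}`,   // copy the object onto itself
    MetadataDirective: "COPY",        // keep existing user metadata
    StorageClass: "STANDARD",         // instead of REDUCED_REDUNDANCY
    ServerSideEncryption: "AES256",   // SSE-S3
  }));
}

fixObject("my-backup-bucket", "archive/file001.bak").catch(console.error);
```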

    Read the article

  • Dropbox’s Great Space Race Delivers Additional Space to Students

    - by Jason Fitzpatrick
    If you’re a student or faculty member (or still have an active .edu email account) now’s the time to cash in on some free cloud storage courtesy of Dropbox’s Great Space Race. Just by linking your .edu address with your Dropbox account you’ll get an extra 3GB of storage for the next two years. The more people from your school that sign up, the higher the total climbs – up to an extra 25GB for two years. The Space Race lasts for the next eight weeks; you can read more about the details here or just jump to the signup page at the link below. The Great Space Race [Dropbox]

    Read the article

  • Recommendations for distributed processing/distributed storage systems

    - by Eddie
    At my organization we have a processing and storage system spread across two dozen Linux machines that handles over a petabyte of data. The system right now is very ad hoc; processing automation and data management are handled by a collection of large Perl programs on independent machines. I am looking at distributed processing and storage systems to make it easier to maintain, to evenly distribute load and data with replication, and to grow in disk space and compute power. The system needs to be able to handle millions of files, varying in size between 50 megabytes and 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download. Right now, processing is automated by Perl scripts (that I have complete control over) which call a series of other programs (that I don't have control over because they are closed source) that essentially transform one data set into another. No data mining happening here.
    Here is a quick list of things I am looking for:
    Reliability: These data must be accessible over HTTP about 99% of the time, so I need something that does data replication across the cluster.
    Scalability: I want to be able to add more processing power and storage easily and rebalance the data across the cluster.
    Distributed processing: Easy and automatic job scheduling and load balancing that fits with the processing workflow I briefly described above.
    Data location awareness: Not strictly required but desirable. Since data and processing will be on the same set of nodes, I would like the job scheduler to schedule jobs on or close to the node that the data is actually on, to cut down on network traffic.
    Here is what I've looked at so far:
    Storage management. GlusterFS: Looks really nice and easy to use, but doesn't seem to have a way to figure out what node(s) a file actually resides on to supply as a hint to the job scheduler. GPFS: Seems like the gold standard of clustered filesystems; meets most of my requirements except, like GlusterFS, data location awareness. Ceph: Seems way too immature right now.
    Distributed processing. Sun Grid Engine: I have a lot of experience with this and it's relatively easy to use (once it is configured properly, that is), but Oracle got its icy grip around it and it no longer seems very desirable.
    Both. Hadoop/HDFS: At first glance Hadoop looked perfect for my situation: distributed storage and job scheduling, and it was the only thing I found that would give me the data location awareness I wanted. But I don't like the namenode being a single point of failure. Also, I'm not really sure the MapReduce paradigm fits the type of processing workflow I have; it seems like you need to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler. OpenStack: I've done some reading on this but I'm having trouble deciding whether it fits my problem or not.
    Does anyone have opinions or recommendations for technologies that would fit my problem well? Any suggestions or advice would be greatly appreciated. Thanks!

    Read the article

  • Using IsolatedStorage on a IIS server

    - by JoeBilly
    I'm a bit confused about the use of Isolated Storage on an IIS server. I understand the goal of Isolated Storage: it provides a safe place to store data with no worry about how and where that place is. Since Isolated Storage has a by-user and by-assembly approach, I'm not too wild about using it on an IIS server, where applications almost have their own identity. I haven't really seen the point of impersonating a web application and have almost never seen impersonated web applications myself, but this is my point of view. Using Isolated Storage on a server means: using isolated stores in \Documents and Settings\<user>\, which means \Documents and Settings\Default User\ when the application pool is owned by Local System or Network Service, I guess; which also means write rights on that folder for Local System or Network Service; and using impersonation. For a web application, these ideas are confusing me... Documents and Settings? Default User? Enable impersonation just for storage? No control over storage on the server? Uh? And so I'm facing a dilemma: use System.IO.Packaging (with Isolated Storage inside) in web applications, or find an alternative? Am I wrong in my approach? Did I miss something? Any point of view is appreciated, and an explanation of the Isolated Storage with IIS philosophy would be an answer. Thanks!

    Read the article

  • Win'08 - Extend volume size on SAN attached storage in a failover cluster

    - by user53207
    Running Windows 2008, I'd like to extend the volume of a SAN-attached drive that is part of a failover cluster. The SAN team has allocated additional drive space, which is being seen by Windows Storage Manager. However, the option to "Extend Volume" is disabled, as is the ability to turn it into a dynamic disk. Is extending volumes disabled when the disk is part of a failover cluster, or is it simply not available for SAN-attached storage?

    Read the article

  • drbd block device as storage for kvm virtual machine

    - by facha
    Hi everyone, I've set up DRBD replication between two machines and used a DRBD block device as storage for a KVM virtual machine. Everything is running well. However, I'm in doubt whether this setup is OK to use. From what I've read so far on the internet, people tend to use drbd-ospf-qcow2_file as storage for their virtual machines.

    Read the article

  • DMZ and LAN on the same Windows Storage Server 2008 R2

    - by Sergei
    We are moving from an EMC Celerra NS20 to Windows Storage Server 2008 R2. I would like to use the deduplication feature in WSS (Single Instance Storage) for hosting data for our external FTP server. The idea is to use the NFS server on WSS as a datastore for a Linux FTP server located in the DMZ, and CIFS services for servers on the LAN. Using the Celerra fileserver I was able to create multiple instances of fileservers with multiple virtual interfaces and separate filesystems, so all data and networks were separated. Would it be possible to do something similar on WSS?

    Read the article

  • USB stick appearing as hard disk drive, not removable storage device

    - by Paul Lammertsma
    I just plugged in a very simple 1GB USB stick from the office in hopes of making it a Fedora Live USB stick. For that to work, I need a removable storage device, or else it won't appear in LiveUSB Creator's list. Explorer lists my USB stick as a hard disk, and LiveUSB Creator indeed doesn't show it in the device list (screenshots omitted). Is there any way of forcing Windows to see the stick as a removable storage device?

    Read the article

  • Personal cloud storage options

    - by rhaddan
    I'm looking for some personal cloud storage options. My biggest concern about moving to a hosted storage solution is the long-term viability of the provider. Has anyone used a cloud service that you're crazy about? I'm a Mac user, so I need to have something that will work on the Mac OS and ideally the iPhone as well.

    Read the article

  • Shared storage setup for Windows

    - by KarmaVille
    This is a n00b question. I want to set up a SAN that will be used as shared storage between multiple Windows 2008 R2 servers. By shared storage, I mean the files can be seen by all servers. How do I do that? Is it possible to implement this without a dedicated Windows file server? (I don't want replication.) I'm doing this so that I can set up ActiveMQ's shared-file-system master/slave: http://activemq.apache.org/shared-file-system-master-slave.html

    Read the article

  • choosing filesystem for unified storage

    - by maruti
    Which file system can serve storage to both Windows and VMware ESX clients? I'm planning a storage server box of ~10TB using NexentaStor or FreeNAS. It has to serve primarily Windows 2003 and 2008 servers, and occasionally VMware ESX. Is that possible? Please correct me if I'm wrong. Thx.

    Read the article

  • optimizing a windows server 2003 storage capacity

    - by Hosni
    I have a Windows Server 2003 machine with a hard drive partitioned into 10GB and 80GB, and I want to improve the storage capacity, as the little 10GB partition is almost full. So I have a choice between repartitioning the hard drive into equal parts or setting up a new hard drive with a better storage capacity, knowing that the server has to be back in service as soon as possible. Which is the better solution in terms of time and risk?

    Read the article

  • Storage servers architectural solution for backup. What is the best way? (pics inside)

    - by Kirzilla
    Hello, what is the best architecture for a storage server array? Requirements: a) an easy way to add one more server to the array; b) we don't have a single backup server; c) we need one backup for each "web" part of each server. Group #1 is a cross-server backup scheme; its main disadvantage is that we can't add just one more server, we have to add two servers at a time. Group #2 is like Group #1, but with three or more servers. It also has a disadvantage: to add one more server we have to move an existing backup to it. Any suggestions? Thank you.

    Read the article

  • How to push an OAuth token to LocalStorage or SessionStorage and listen to the Storage event? (SoundCloud PHP/JS bug workaround)

    - by afxjzs
    This references this issue: Javascript SDK connect() function not working in Chrome. I asked for more information on how to resolve it with localStorage and was asked to create a new topic. The answer was: "A workaround is, instead of using window.opener, push the OAuth token into LocalStorage or SessionStorage and have the opener window listen to the Storage event." But I have no idea how to do that. It seems really simple, but I don't know where to start, and I couldn't find any relevant examples. Thanks for your help!
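    For reference (not from the original thread, just a sketch of the suggested workaround): the OAuth callback page writes the token into localStorage, which fires a "storage" event in every other window on the same origin, so the opener can pick it up without touching window.opener. The key name and the fragment parameter below are assumptions:

```javascript
// --- In the popup / OAuth callback page ----------------------------------
// Pull the token out of the redirect URL (the parameter name depends on the
// provider; "access_token" is an assumption) and hand it to the opener by
// writing it to localStorage, then close the popup.
var token = new URLSearchParams(window.location.hash.slice(1)).get("access_token");
if (token) {
  localStorage.setItem("oauth_token", token); // key name is arbitrary
  window.close();
}

// --- In the opener window -------------------------------------------------
// The "storage" event fires in every *other* window on the same origin
// whenever localStorage changes, so the opener hears about the token
// without needing window.opener to work.
window.addEventListener("storage", function (event) {
  if (event.key === "oauth_token" && event.newValue) {
    console.log("Got OAuth token:", event.newValue);
    localStorage.removeItem("oauth_token"); // clean up once it's been read
  }
});
```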

    Read the article

  • How to configure Remote BLOB Storage (RBS) with Microsoft Dynamics CRM 4.0?

    - by jk
    Hi, we have a working site for Dynamics CRM 4.0 and in it we are storing images in the database. Now the database is growing very fast and the server is dying, so I want to enable Remote BLOB Storage (RBS) with Dynamics CRM 4.0. I tried to install RBS for testing, but everywhere it is configured with SharePoint 2010, not with Dynamics CRM. Does anybody know how to install and configure it with Dynamics CRM 4.0? Does RBS work with the Standard Edition of SQL Server 2008? I followed the following guide to install it, but it is for SharePoint: http://technet.microsoft.com/en-us/library/ee663474.aspx Any help is appreciated. Thanks

    Read the article
