Search Results

Search found 7222 results on 289 pages for 'storage cells'.

  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you? First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle with all the usual accusations that it was my fault, etc. Finally I got the data back. Yes, I can back this up with evidence. Idiotically, I stayed with them, so I totally get that the rest of this is on me.

    The problem had been with a shared folder used by all 12 computers my business and family run on the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service. It's our main repository.

    Today I decided to deal with some residual effects of the Crash of '11. Part of the damage was that on just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what: though they appeared to be duplicates in the file system, removing them from My Documents on the primary PC made them disappear from the Tragic Briefcase too. They efficiently vanished from all the other computers' Tragic Briefcases as well. So now 21 gigs of files are gone, and of course I don't know which ones.

    I want to avoid this in the future. Apart from using a different storage provider, the bigger picture is this: how do I back up my cloud data? A complete backup every week or so from the web to local storage would exceed my ISP's bandwidth cap. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on using the web service. So it's a lot of data, but I need to keep it safe. Backing up locally would also mean two backup drives or some kind of RAID per PC, right, because you can't trust a single point of failure? Assuming I move to Dropbox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up, I can restore?

    Read the article

  • Can't get lines around table borders/cells [migrated]

    - by Ira Baxter
    I have several web pages containing tables, for which I'd like line borders around the tables and the cells. Some of these pages have existed for several years and rendered acceptably in IE6 and IE7. About 6 months ago we switched to a completely different set of style sheets to change our site's look and feel, and we also moved to "modern" browsers such as IE8 and (because I couldn't stop Vista) IE9. Now the borders don't render at all. I spent a day fighting with this about a month ago and failed to fix it. I was able to reduce the page down to just the barest table and IE8 would still not render the border. I think I decided IE8 was just buggy, but I'm not an HTML expert, so it's more likely that I'm buggy. (I'm just getting back to this; I'll go see if I can find that reduced page.) Here is one such page: http://www.semdesigns.com/products/DMS/DMSComparison.html The tables should be obvious; you can tell them by their absence of lines :-{ The URI validates as HTML 4.01 Transitional using the W3C service. Any suggestions?

    Read the article

  • Best usb storage for my router, Asus RT-AC66U?

    - by Jason94
    I have the ASUS RT-AC66U and I want to add USB storage to it. It has 2 USB ports, and I'm already using one for my printer, so I want to use the last one to attach USB storage. I've read some reviews stating that throughput over USB can reach about 18 MB/s. With that in mind, should I care about hard disk cache? Simple USB-powered drives seem to have 8 MB of cache, while externally powered ones have 16 MB, for instance.

    Read the article

  • How to synchronize HTML5 local/webStorage and server-side storage?

    - by thSoft
    I'm currently seeking solutions for transparently and automatically synchronizing and replicating data between client-side HTML5 localStorage/Web Storage and (possibly multiple) server-side storage(s). The only requirement is that it should be simple and affordable to install on a regular hosting service. Do you have any experience with libraries/technologies that offer data storage which automates client-server synchronization and allows data to be available offline, online, or both? I think this is a fairly common scenario for web applications supporting an offline mode...
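    For illustration, a minimal sketch of the usual pattern in TypeScript: keep a timestamp with every record and let the newer side win on sync. The /api/storage endpoint here is hypothetical, and real libraries (PouchDB syncing with CouchDB, for instance) handle conflicts and offline queues far more robustly than this.

        interface Entry { value: string; updatedAt: number; }

        // Write locally, stamping the record so sync can compare versions later.
        function saveLocal(key: string, value: string): void {
          const entry: Entry = { value, updatedAt: Date.now() };
          localStorage.setItem(key, JSON.stringify(entry));
        }

        // Last-write-wins sync against a hypothetical REST endpoint.
        async function sync(key: string): Promise<void> {
          const local: Entry | null = JSON.parse(localStorage.getItem(key) ?? "null");
          const res = await fetch(`/api/storage/${key}`);
          const remote: Entry | null = res.ok ? await res.json() : null;
          if (remote && (!local || remote.updatedAt > local.updatedAt)) {
            localStorage.setItem(key, JSON.stringify(remote));   // pull newer remote
          } else if (local && (!remote || local.updatedAt > remote.updatedAt)) {
            await fetch(`/api/storage/${key}`, {                 // push newer local
              method: "PUT",
              headers: { "Content-Type": "application/json" },
              body: JSON.stringify(local),
            });
          }
        }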

    Read the article

  • Chrome Apps Office Hours: Chrome Storage APIs

    Ask and vote for questions: goo.gl You spoke, we listened. Join Paul Kinlan, Paul Lewis, Pete LePage, and Renato Dias to learn about the new storage APIs that are available to Chrome Packaged Apps in the next installment of Chrome Apps Office Hours. We'll take a look at the new sync-able and local storage APIs, as well as other ways you can save data locally on your users' machines. We didn't get through quite as many questions as we hoped last week and are going to dedicate some extra time this week, so be sure to post your questions on Moderator below!
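    For reference, a minimal sketch (TypeScript, assuming the chrome.* extension typings) of the two storage areas the video covers: chrome.storage.sync replicates small amounts of data across a user's signed-in devices, while chrome.storage.local stays on the machine.

        // Save a preference and let Chrome queue it for cross-device sync.
        chrome.storage.sync.set({ theme: "dark" }, () => {
          console.log("preference saved");
        });

        // Read it back; the callback receives an object of the requested keys.
        chrome.storage.sync.get(["theme"], (items) => {
          console.log("current theme:", items.theme);
        });

        // Larger or device-specific data belongs in the local area instead.
        chrome.storage.local.set({ cache: { fetchedAt: Date.now() } });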

    Read the article

  • Google Storage for Developers…

    - by joelvarty
    I noticed this today, and it seems to be a service that will compete with Amazon S3 and Microsoft's Azure Blob storage. It's only open to US developers for now, but I have one burning question: can we transfer directly from Google Storage to another Google service (like YouTube, Docs, etc.) without incurring any transfer charges? The even bigger question is whether all of the APIs will be updated to include this new service and to better amalgamate the existing app services with this one. Since storage is so central to everything, it seems to beg the question.

    via Daring Fireball

    more later - joel

    Read the article

  • Learn how Oracle storage efficiencies can help your budget

    - by jenny.gelhausen
    Mark Your Calendar!
    Live Webcast: Next Generation Storage Management Solutions
    Wednesday, March 24th, 2010, at 9:00am PT (check your local time)

    Please plan to join us for this webcast, where Forrester senior analyst Andrew Reichman will discuss the pillars of storage efficiency, how to measure and improve it, and how it can help your business immediately alleviate budget pressures. Joining Mr. Reichman are Phil Stephenson, Senior Principal Product Manager at Oracle, and Matthew Baier, Oracle Product Director, who will explain the next generation storage capabilities available in Oracle Database 11g and Oracle Exadata. Register for the March 24th live webcast today!

    Read the article

  • Chrome Apps Office Hours: Storage API Deep Dive

    Ask and vote for questions at: goo.gl Join us next week as we take a deeper dive into the new storage APIs available to Chrome Packaged Apps. We've invited Eric Bidelman, author of the HTML5 File System API book, to join Paul Kinlan, Paul Lewis, Pete LePage and Renato Dias for our weekly Chrome Apps Office Hours, in which we will pick apart some of the sample Chrome Apps and explain how we've used the storage APIs and why we made the decisions we did.
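    As a small companion sketch to the previous one (TypeScript, same assumptions): the part of these APIs that usually surprises people is the change listener, which fires when another page of the app writes to storage, or, for the "sync" area, when another of the user's devices does.

        chrome.storage.onChanged.addListener((changes, areaName) => {
          // 'changes' maps each modified key to its old and new values.
          for (const key of Object.keys(changes)) {
            console.log(`${key} changed in ${areaName}:`,
                        changes[key].oldValue, "->", changes[key].newValue);
          }
        });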

    Read the article

  • Oracle ZFSSA Hybrid Storage Pool Demo

    - by Darius Zanganeh
    The ZFS Hybrid Storage Pool (HSP) has been around since the ZFSSA first launched. It is one of the main contributors to the high performance we see on the Oracle ZFSSA, both in benchmarks and in many production environments. Below is a short video I made to show, at a high level, just how impactful the HSP is on storage performance. We squeeze a ton of performance out of our drives with our unique use of cache, write-optimized SSD, and read-optimized SSD. Many have written and blogged about this technology; here it is in action. Demo of the Oracle ZFSSA Hybrid Storage Pool and how it speeds up workloads.

    Read the article

  • Allow paste in worksheet without overwriting locked cells

    - by jjeaton
    I have a protected worksheet that users would like to copy and paste into. I have no control over the workbook they are copying from. The protected worksheet has some rows that are available for data entry, and other rows that are locked and greyed out to the user. The users would like to be able to paste over the top of the entire worksheet from another random workbook and have all the cells available for data entry filled in, while the locked cells are undisturbed. In the current state, the user gets an error when they try to paste, because it cannot paste over the locked cells.

    Example:

    Worksheet 1:
        Act1 100 100 100
        Act2 100 100 100
        Act3 100 100 100

    Worksheet 2 (the second row is locked):
        Act1 300 300 300
        Act2 200 200 200
        Act3 100 100 100

    After copying/pasting, Worksheet 2 should look like this:
        Act1 100 100 100
        Act2 200 200 200
        Act3 100 100 100

    The values from Worksheet 1 are populated and the locked rows are undisturbed. I've been thinking along the lines of a hook so that on paste, the locked cells are unlocked to let the paste happen, then reverted to their original values and relocked. Is there some way I can loop through the cells in the clipboard and only paste cells where the target isn't locked? It is preferable not to create a separate button for paste, so there is less impact on the users, but if that's the only way, I'm not opposed to it. Currently, I plan on grouping the locked rows together so that the data entry cells are contiguous, but then the accounts will be out of order, which is not preferred.

    Read the article

  • Cheapest way to connect 20-24 Sata II HDDs in a budget storage server?

    - by Joe Hopfgartner
    I need to assemble a high-density storage server for as cheap as possible. It's been a while for me, and the last systems I integrated didn't even have SATA yet... During my research I of course stumbled across the Nexsan SATA Beast and the BackBlaze storage pods, as well as some ridiculously overpriced HP ProLiant and Dell storage solutions. Finally I chose Norco cases as the way to go.

    My eye is set on the RPC-4020, a 4U 19" rackmount case with 20 hot-swap 3.5" SATA/SAS HDD trays (backplanes included) and room for two 2.5" OS drives as well as a slimline CD-ROM. The backplanes connect with a single SATA port for each drive, so there are 20 internal SATA ports to be connected. They also have redundant power ports, which I think is quite nice. The cheapest price I have found is $290 + $40 shipping. In Europe the cheapest is unfortunately 370 € (500 $) + 40 € shipping... A nice alternative would be the RPC-4224, which has SFF-8087 Mini SAS connectors that bundle 4 SATA trays each. But it doesn't seem to be available anywhere in Europe (where I am).

    So here comes my problem: what mainboard/controller should I choose to connect them, as cheaply as possible, while still getting decent data rates? The server is intended as a storage server with 1 Gbps connectivity, and data transfer will be distributed very evenly across all drives. I also don't require any RAID functionality; that is all done at the application level. I just need JBOD. So, for example, if I go for the RPC-4020 model, I need to connect 20 storage + 1 OS + 1 CD-ROM SATA ports. I searched a bit and stumbled across this very low-priced controller: http://www.intel.com/products/server/raid-controllers/SASWT4I/SASWT4I-overview.htm They sell it for 115 € here, and the specs say it can control up to 122 hard disks and has 4 Mini SAS connectors. So I would use 4 Mini SAS 36-pin to 4x SATA 7-pin cables to connect 4 SATA drives to each port, choose a mainboard that has 6 SATA ports on board (for example this one), and hurray, I can connect my 22 SATA devices for as low as about ~220 € (CPU, RAM, PSU, and case not counted).

    Question: WOULD THAT WORK? And if not, why?

    2nd question: If I go for the 4220 or 4224 model, I have internal Mini SAS connectors. Am I right in assuming that the backplane then acts as a "SAS expander"? And can I just plug these SAS connectors into any SAS port I can find on my controller/mainboard, or are there certain requirements? I know that SATA port multipliers only work with controllers that are ready for them. But isn't this expansion already implemented in the SAS standard? I am sorry that this is a very broad question, but I really spent the last week reading up, and it seems to be not so clear, especially all the controller hardware specifications!

    3rd question: A lot of hardware specs list "internal channels" and "internal connectors". The connectors are the physical number of places where I can plug a cable in; I got that. But are the "internal channels" always the maximum number of physical drives that can be used in the end? Or can I extend this further with expanders/fanouts?

    4th and last question: What do you think about the setup so far? Do you know any good alternatives? Maybe I am completely going the wrong way and some DAS would be way better? Are there any comparable chassis available in Europe? Please feel free to say whatever you think is relevant to the subject!

    Read the article

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two-node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but am not sure which is the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:

        <rm>
          <failoverdomains>
            <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
              <failoverdomainnode name="rhc-n1"/>
            </failoverdomain>
            <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
              <failoverdomainnode name="rhc-n2"/>
            </failoverdomain>
          </failoverdomains>
          <resources>
            <script file="/etc/init.d/clvm" name="clvmd"/>
            <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
          </resources>
          <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
            <script ref="clvmd">
              <clusterfs ref="gfs"/>
            </script>
          </service>
          <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
            <script ref="clvmd">
              <clusterfs ref="gfs"/>
            </script>
          </service>
        </rm>

    This is what I mean: when using the clusterfs resource agent to handle a GFS partition, it is not unmounted by default (unless the force_unmount option is given). This way, when I issue

        clusvcadm -s shared-storage-inst1

    clvm is stopped, but GFS is not unmounted, so the node can no longer alter the LVM structure on shared storage, but can still access the data. And even though a node can do so quite safely (dlm is still running), this seems rather inappropriate to me, since clustat reports that the service on that particular node is stopped. Moreover, if I later try to stop cman on that node, it will find the dlm lock produced by GFS and fail to stop.

    I could have simply added force_unmount="1", but I would like to know the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue on how the decision was made. Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script (https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources), or even simply enable services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like:

        chkconfig gfs2 on

    If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but it has no control over the status of its resources. I have some experience with Pacemaker, and I'm used to all resources being controlled by the cluster, so that action can be taken not only when there are connectivity issues, but also when any of the resources misbehave.

    So, which is the right way for me to go?

    - Leave the GFS partition mounted (any reasons to do so?)
    - Set force_unmount="1". Won't this break anything? Why isn't this the default?
    - Use a script resource, <script file="/etc/init.d/gfs2" name="gfs"/>, to manage the GFS partition.
    - Start it at boot and don't include it in cluster.conf (any reasons to do so?)

    This may be the sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or thoughts on the issue. How does /etc/cluster/cluster.conf look, for example, when configuring gfs with Conga or ccs (they are not available to me, since for now I have to use Ubuntu for the cluster)? Thank you very much!

    Read the article

  • Help with Perl persistent data storage using Data::Dumper

    - by stephenmm
    I have been trying to figure this out for way too long tonight. I have googled it to death, and none of the examples, or my hacks of the examples, are getting it done. It seems like this should be pretty easy, but I just cannot get it. Here is the code:

        #!/usr/bin/perl -w
        use strict;
        use Data::Dumper;

        my $complex_variable = {};
        my $MEMORY = "$ENV{HOME}/data/memory-file";

        $complex_variable->{ 'key' }  = 'value';
        $complex_variable->{ 'key1' } = 'value1';
        $complex_variable->{ 'key2' } = 'value2';
        $complex_variable->{ 'key3' } = 'value3';

        print Dumper($complex_variable)."TEST001\n";

        open M, ">$MEMORY" or die;
        print M Data::Dumper->Dump([$complex_variable], ['$complex_variable']);
        close M;

        $complex_variable = {};
        print Dumper($complex_variable)."TEST002\n";

        # Then later, to restore the value, it's simply:
        do $MEMORY;
        #eval $MEMORY;

        print Dumper($complex_variable)."TEST003\n";

    And here is my output:

        $VAR1 = {
                  'key2' => 'value2',
                  'key1' => 'value1',
                  'key3' => 'value3',
                  'key' => 'value'
                };
        TEST001
        $VAR1 = {};
        TEST002
        $VAR1 = {};
        TEST003

    Everything I read says that the TEST003 output should look identical to the TEST001 output, which is exactly what I am trying to achieve. What am I missing here? Should I be "do"ing differently, or should I be "eval"ing instead, and if so, how? Thanks for any help...

    Read the article

  • iPad App Cookies Storage?

    - by Aakburns
    Hi, I have an application that sends you to a website that shows a login form. I've read up on cookies in the Apple reference (http://developer.apple.com/iphone/library/documentation/Cocoa/Reference/Foundation/Classes/NSHTTPCookie_Class/Reference/Reference.html#//apple_ref/occ/instm/NSHTTPCookie/initWithProperties:), but I'm honestly just not understanding it at all. Can someone please explain how to get cookies working in an app? Post sample code? Thanks.

    Read the article

  • Linux's thread local storage implementation

    - by anon
    __thread Foo foo;

    How is "foo" actually resolved? Does the compiler silently replace every instance of "foo" with a function call? Is "foo" stored somewhere relative to the bottom of the stack, and the compiler stores this as "hey, for each thread, have this space near the bottom of the stack, and foo is stored as 'offset x from bottom of stack'"?

    Read the article

  • Looking for the most painless non-RDBMS storage method in C#

    - by NateD
    I'm writing a simple program that will run entirely client-side. (Desktop programming? Do people still do that?) I need a simple way to store trivial amounts of data in a structured form, but I really don't see any need to use a database system. What's more, some of the data needs to be serialized and passed around to different users, like some kind of "file" or perhaps a "document". (Has anyone ever done that before?) So, I've looked at using .NET DataSets, LINQ, and direct XML manipulation, and they all seem like they would get the job done, but before I dive into any of them I would like to know whether one method is generally regarded as easier to code than the others. As I said, the amount of data to be stored is trivial (even if one hundred people all used the same machine, we're not talking about more than 10 MB), so performance is not as large a concern as codeability/maintainability. Thank you all in advance!

    Read the article

  • Android App Widget: Data storage

    - by Jeffrey
    Hello everyone, I'm implementing a home screen app widget. I was wondering which is better to store/read data: SharedPreferences or a SQLite database? The data is accessed from an AppWidgetProvider (similar to a BroadcastReceiver), and any given instance of the widget displays different data based on appWidgetId. Is one way or the other frowned upon? Thanks for your time.

    Read the article

  • Android v1.5 w/ browser data storage

    - by Sirber
    I'm trying to build an offline web application which can sync online if the network is available. I tried jQuery jStore, but the test page stops at "testing..." without a result. Then I tried Google Gears, which is supposed to work on the phone, but Gears is not found:

        if (window.google && google.gears) {
            google.gears.factory.getPermission();

            // Database
            var db = google.gears.factory.create('beta.database');
            db.open('cominar-compteurs');
            db.execute('create table if not exists Lectures' +
                ' (ID_COMPTEUR int, DATE_HEURE timestamp, kWh float, Wmax float, VAmax float, Wcum float, VAcum float);');
        } else {
            alert('Google Gears non trouvé.');  // "Google Gears not found."
        }

    The code does work in Google Chrome v5.

    Read the article

  • Checking if a blob exists in Azure Storage

    - by John
    Hi, I've got a very simple question (I hope!) - I just want to find out if a blob (with a name I've defined) exists in a particular container. I'll be downloading it if it does exist, and if it doesn't then I'll do something else. I've done some searching on the intertubes and apparently there used to be a function called DoesExist or something similar... but as with so many of the Azure APIs, this no longer seems to be there (or if it is, has a very cleverly disguised name). Or maybe I'm missing something simple... :) John
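    The workaround commonly cited for the old .NET StorageClient library was to call FetchAttributes() on the blob and catch the resulting 404 exception. For what it's worth, newer SDKs expose this directly; below is a sketch using the modern JavaScript/TypeScript package @azure/storage-blob (a different client than the one the question refers to), where BlobClient.exists() issues the HEAD request for you:

        import { BlobServiceClient } from "@azure/storage-blob";

        // Returns true if the named blob exists in the container;
        // exists() resolves to false on a 404 instead of throwing.
        async function blobExists(conn: string, container: string, name: string): Promise<boolean> {
          const service = BlobServiceClient.fromConnectionString(conn);
          const blob = service.getContainerClient(container).getBlobClient(name);
          return blob.exists();
        }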

    Read the article

  • Asp.net mvc small amount data storage

    - by Trimack
    Hi there, I am writing some learning tests (i.e., "What's the answer for...", "Choose the correct options..."). Now my question is: how should I store them? A SQL database seems like overkill, but I really don't know what the best choice would be if I wanted to select a random subset of questions, etc. Perhaps some simple XML files? Thanks for the advice.
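    If plain files turn out to be enough, the random-subset part is easy to do in code rather than in SQL. A hedged sketch (in TypeScript on Node rather than the question's C#, with a made-up file name) using a partial Fisher-Yates shuffle for an unbiased sample:

        import { readFileSync } from "fs";

        interface Question { text: string; options: string[]; answer: number; }

        // Load the whole question bank from a JSON file (path is illustrative).
        const bank: Question[] = JSON.parse(readFileSync("questions.json", "utf8"));

        // Partial Fisher-Yates: shuffles only the first n slots, yielding an
        // unbiased random subset without sorting or re-reading the file.
        function randomSubset<T>(items: T[], n: number): T[] {
          const a = items.slice();
          const k = Math.min(n, a.length);
          for (let i = 0; i < k; i++) {
            const j = i + Math.floor(Math.random() * (a.length - i));
            [a[i], a[j]] = [a[j], a[i]];
          }
          return a.slice(0, k);
        }

        const quiz = randomSubset(bank, 10);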

    Read the article

  • Efficient storage/retrieval method for replayable comet style applications (Google Wave, Etherpad)

    - by Gareth Simpson
    I am considering a web application that would have the same kind of multi-user, automatic-saving, infinite undo/replay capabilities that you see in Google Wave and Etherpad (albeit on a drastically smaller scale and userbase). Before I go away and reinvent the wheel: has this already been addressed as a piece of technology or a library, or even just a design pattern? I know this isn't necessarily the best Stack Overflow question, as there is probably no one "right" answer, but my Google-fu has failed me and I'd just like a reading list! Ordinarily I would be developing under Python/Django, but this is not a firm requirement, just a preference :)
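    The design pattern underneath both products is an append-only operation log (event sourcing, combined with operational transformation for concurrent edits). A minimal single-user sketch in TypeScript; real multi-user editing additionally needs OT or CRDTs to merge concurrent operations:

        // Every edit is an immutable operation; the current document is the log
        // folded from the start, so "undo to step n" is just a shorter replay.
        interface Op { apply(doc: string): string; }

        class InsertText implements Op {
          constructor(private pos: number, private text: string) {}
          apply(doc: string): string {
            return doc.slice(0, this.pos) + this.text + doc.slice(this.pos);
          }
        }

        class OpLog {
          private ops: Op[] = [];
          append(op: Op): void { this.ops.push(op); }      // also persist it here
          replay(upTo: number = this.ops.length): string {
            return this.ops.slice(0, upTo).reduce((d, op) => op.apply(d), "");
          }
        }

        const log = new OpLog();
        log.append(new InsertText(0, "hello"));
        log.append(new InsertText(5, " world"));
        console.log(log.replay());    // "hello world"
        console.log(log.replay(1));   // "hello"  (infinite undo = shorter replay)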

    Read the article
