Search Results

Search found 5933 results on 238 pages for 'mass storage'.

Page 38/238 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered a world-record TPC-H @3000GB benchmark result for systems with four processors. This result beats eight-processor results from IBM (POWER7) and HP (x86). The SPARC T4-4 server also delivered better performance per core than these eight-processor systems from IBM and HP. Comparisons below are based upon system-to-system comparisons, highlighting Oracle's complete software and hardware solution.

    This database world-record result used Oracle's Sun Storage 2540-M2 arrays (rotating disk) connected to a SPARC T4-4 server running Oracle Solaris 11 and Oracle Database 11g Release 2, demonstrating the power of Oracle's integrated hardware and software solution. The SPARC T4-4 server based configuration achieved a TPC-H scale factor 3000 world record for four-processor systems of 205,792 QphH@3000GB with price/performance of $4.10/QphH@3000GB.

    - The SPARC T4-4 server with four SPARC T4 processors (total of 32 cores) is 7% faster than the IBM Power 780 server with eight POWER7 processors (total of 32 cores) on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 36% better in price/performance compared to the IBM Power 780 server on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 29% faster than the IBM Power 780 for data loading.
    - The SPARC T4-4 server is up to 3.4 times faster than the IBM Power 780 server for the Refresh Function.
    - The SPARC T4-4 server with four SPARC T4 processors is 27% faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark.
    - The SPARC T4-4 server is 52% faster than the HP ProLiant DL980 G7 server for data loading.
    - The SPARC T4-4 server is up to 3.2 times faster than the HP ProLiant DL980 G7 for the Refresh Function.
    - The SPARC T4-4 server achieved a peak IO rate from the Oracle database of 17 GB/sec. This rate was independent of the storage used, as demonstrated by the TPC-H @3000GB benchmark, which used twelve Sun Storage 2540-M2 arrays (rotating disk), and the TPC-H @1000GB benchmark, which used four Sun Storage F5100 Flash Array devices (flash storage).
    - The SPARC T4-4 server showed linear scaling from TPC-H @1000GB to TPC-H @3000GB. This demonstrates that the SPARC T4-4 server can handle the increasingly larger databases required of DSS systems.
    - The SPARC T4-4 server benchmark results demonstrate a complete solution for building Decision Support Systems, including data loading, business questions, and refreshing data. Each phase usually has a time constraint, and the SPARC T4-4 server shows superior performance during each phase.
    - The TPC believes that comparisons of results published with different scale factors are misleading and discourages such comparisons.

    Performance Landscape

    The table below lists the leading TPC-H @3000GB results for non-clustered systems.
    TPC-H @3000GB, Non-Clustered Systems

    System | Processor | P/C/T – Memory | Composite (QphH) | $/Perf ($/QphH) | Power (QppH) | Throughput (QthH) | Database | Available
    SPARC Enterprise M9000 | 3.0 GHz SPARC64 VII+ | 64/256/256 – 1024 GB | 386,478.3 | $18.19 | 316,835.8 | 471,428.6 | Oracle 11g R2 | 09/22/11
    SPARC T4-4 | 3.0 GHz SPARC T4 | 4/32/256 – 1024 GB | 205,792.0 | $4.10 | 190,325.1 | 222,515.9 | Oracle 11g R2 | 05/31/12
    SPARC Enterprise M9000 | 2.88 GHz SPARC64 VII | 32/128/256 – 512 GB | 198,907.5 | $15.27 | 182,350.7 | 216,967.7 | Oracle 11g R2 | 12/09/10
    IBM Power 780 | 4.1 GHz POWER7 | 8/32/128 – 1024 GB | 192,001.1 | $6.37 | 210,368.4 | 175,237.4 | Sybase 15.4 | 11/30/11
    HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 8/64/128 – 512 GB | 162,601.7 | $2.68 | 185,297.7 | 142,685.6 | SQL Server 2008 | 10/13/10

    P/C/T = Processors, Cores, Threads
    QphH = the Composite Metric (bigger is better)
    $/QphH = the Price/Performance metric in USD (smaller is better)
    QppH = the Power Numerical Quantity
    QthH = the Throughput Numerical Quantity

    The following table lists data load times and refresh function times during the power run.

    TPC-H @3000GB, Non-Clustered Systems - Database Load & Database Refresh

    System | Processor | Data Loading (h:m:s) | T4 Advan | RF1 (sec) | T4 Advan | RF2 (sec) | T4 Advan
    SPARC T4-4 | 3.0 GHz SPARC T4 | 04:08:29 | 1.0x | 67.1 | 1.0x | 39.5 | 1.0x
    IBM Power 780 | 4.1 GHz POWER7 | 05:51:50 | 1.5x | 147.3 | 2.2x | 133.2 | 3.4x
    HP ProLiant DL980 G7 | 2.27 GHz Intel Xeon X7560 | 08:35:17 | 2.1x | 173.0 | 2.6x | 126.3 | 3.2x

    Data Loading = database load time
    RF1 = power test first refresh transaction
    RF2 = power test second refresh transaction
    T4 Advan = the ratio of the system's time to the SPARC T4-4 time

    Complete benchmark results can be found at the TPC benchmark website, http://www.tpc.org.

    Configuration Summary and Results

    Hardware Configuration:
    SPARC T4-4 server
    - 4 x SPARC T4 3.0 GHz processors (total of 32 cores, 128 threads)
    - 1024 GB memory
    - 8 x internal SAS (8 x 300 GB) disk drives

    External Storage:
    - 12 x Sun Storage 2540-M2 array storage, each with 12 x 15K RPM 300 GB drives, 2 controllers, 2 GB cache

    Software Configuration:
    - Oracle Solaris 11 11/11
    - Oracle Database 11g Release 2 Enterprise Edition

    Audited Results:
    - Database Size: 3000 GB (Scale Factor 3000)
    - TPC-H Composite: 205,792.0 QphH@3000GB
    - Price/performance: $4.10/QphH@3000GB
    - Available: 05/31/2012
    - Total 3 year Cost: $843,656
    - TPC-H Power: 190,325.1
    - TPC-H Throughput: 222,515.9
    - Database Load Time: 4:08:29

    Benchmark Description

    The TPC-H benchmark is a performance benchmark established by the Transaction Processing Performance Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC.

    TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system. The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor).
    QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years of maintenance to the QphH. A secondary metric is storage efficiency, which is the ratio of total configured disk space in GB to the scale factor. (A short worked example of both ratios appears at the end of this entry.)

    Key Points and Best Practices

    - Twelve Sun Storage 2540-M2 arrays were used for the benchmark. Each Sun Storage 2540-M2 array contains 12 15K RPM drives and is connected to a single dual-port 8Gb FC HBA using 2 ports. Each Sun Storage 2540-M2 array showed 1.5 GB/sec for sequential read operations and showed linear scaling, achieving 18 GB/sec with twelve Sun Storage 2540-M2 arrays. These were stand-alone IO tests.
    - The peak IO rate measured from the Oracle database was 17 GB/sec.
    - Oracle Solaris 11 11/11 required very little system tuning.
    - Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks, so this is not an important metric on which to compare systems.
    - The SPARC T4-4 server and Oracle Solaris efficiently managed the system load of over one thousand Oracle Database parallel processes.
    - Six Sun Storage 2540-M2 arrays were mirrored to another six Sun Storage 2540-M2 arrays, on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays.
    - The TPC-H Refresh Function (RF) simulates the periodic refresh portion of a Data Warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are key for this function, and the SPARC T4-4 server outperformed both the IBM POWER7 server and the HP ProLiant DL980 G7 server. (See the RF columns above.)

    See Also

    - Transaction Processing Performance Council (TPC) Home Page
    - Ideas International Benchmark Page
    - SPARC T4-4 Server (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)
    - Sun Storage 2540-M2 Array (oracle.com, OTN)

    Disclosure Statement

    TPC-H, QphH, and $/QphH are trademarks of the Transaction Processing Performance Council (TPC). For more information, see www.tpc.org. SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads.
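    To make these two ratios concrete, here is a minimal C# sketch that recomputes the price/performance figure from the audited totals above; the configured-disk figure used for storage efficiency is an illustrative assumption, not an audited value.

        // Sketch: recomputing the two TPC-H ratios described above.
        // The configured-disk figure is an illustrative assumption, not an audited value.
        using System;

        class TpchRatios
        {
            static void Main()
            {
                double qphH = 205792.0;            // TPC-H Composite, QphH@3000GB
                double threeYearCostUsd = 843656;  // total HW/SW cost plus 3 years of maintenance

                // Price/performance = total 3-year cost / QphH (smaller is better).
                Console.WriteLine($"Price/performance: ${threeYearCostUsd / qphH:F2}/QphH@3000GB"); // ~$4.10

                // Storage efficiency = total configured disk space in GB / scale factor.
                // Assumed configuration: 12 arrays x 12 x 300 GB drives + 8 x 300 GB internal disks.
                double configuredGb = 12 * 12 * 300 + 8 * 300;
                Console.WriteLine($"Storage efficiency: {configuredGb / 3000:F1}");
            }
        }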

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed, it is hard to find a better definition of these systems.

    Allow me to summarize the idea. It is:

    - Build a compute platform optimized to run your technologies
    - Develop application-aware, intelligently caching storage components
    - Take an impressively fast network technology interconnecting it with the compute nodes
    - Tune the application to scale with the nodes to yet-unseen performance
    - Reduce the amount of data moving via compression
    - Provide this all in a pre-integrated single product with a single-pane management interface

    All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put these all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

    I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we learned to appreciate and respect for their features:

    - The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can execute 256 threads simultaneously, in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword: demigod.
    - While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar for single-thread performance too - a humble 5x better than their ancestors. Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
    - Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
    - A PCI controller right on the chip for customers who need I/O performance.
    - Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one; the hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated, independent of each other, within a physical server.

    II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide DB backend storage for the Oracle Databases you run on the SPARC SuperCluster; that is what they are built and tuned for: DB performance.

    - These storage cells are SQL-aware. That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O. (A toy illustration of this offload idea appears at the end of this entry.)
    - They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer.
    - They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
    - Of course one can't simply rely on disks for performance; there is Flash Storage included for caching.

    III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with:

    - Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and a really low latency.
    - RDMA: Remote Direct Memory Access. This technology allows the DB nodes to do exactly that - remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
    - You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.

    IV. It includes general-purpose storage too: the ZFSSA, which is unified storage providing both NAS and SAN access, with the following features:

    - NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
    - All the ZFS features onboard: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
    - Storage heads in an HA cluster configuration, providing availability of the data.
    - DTrace Live Analytics in a web-based administration UI.
    - It serves as general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting, and cloning data for them.

    There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked through its interior. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

    What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments, such as SAP, if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.

    Here are the key takeaways once again. The SPARC SuperCluster:

    - Is a pre-integrated Engineered System
    - Contains SPARC T4-4 servers with built-in virtualization, cryptography, and dynamic threading
    - Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
    - Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
    - Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
    - Can grow from a single half-rack to several full racks in size
    - Supports the consolidation of hundreds of applications

    To summarize: all these technologies are great by themselves, but the real value is, as in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of the parts, and a careful and actually very time-consuming integration process is necessary to orchestrate all of them for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done.

    Now go, provide services.

    -- charlie
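    To make the offload idea from section II concrete, here is a toy C# sketch (hypothetical types and sizes, nothing to do with the actual Exadata cell software): evaluating the predicate and the aggregate where the data lives means only the answer crosses the interconnect, not the raw blocks.

        // Toy model of storage-side offload: ship the answer, not the raw rows.
        // The row type and byte counts are made up for illustration only.
        using System;
        using System.Linq;

        class OffloadToy
        {
            class Sale { public int Year; public double Amount; }

            static void Main()
            {
                // Pretend these rows live on the storage cell.
                Sale[] rowsOnStorage = Enumerable.Range(0, 1000000)
                    .Select(i => new Sale { Year = 2010 + i % 3, Amount = i % 100 })
                    .ToArray();

                // Naive path: every row is shipped to the DB node, which filters and sums.
                double naiveBytes = rowsOnStorage.Length * 16.0;   // assume ~16 bytes per shipped row

                // Offloaded path: the "cell" filters on year and sums locally,
                // then hands back a single number.
                double total = rowsOnStorage.Where(s => s.Year == 2012).Sum(s => s.Amount);
                double offloadBytes = sizeof(double);

                Console.WriteLine($"SUM(Amount) WHERE Year = 2012: {total}");
                Console.WriteLine($"Bytes over the wire: naive ~{naiveBytes:N0}, offloaded ~{offloadBytes}");
            }
        }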

    Read the article

  • Normal Redundancy (Double Mirroring) Option Available

    - by TammyBednar
    The Oracle Database Appliance 2.4 patch was released last week and provides you with the option of ASM normal redundancy (double mirroring) during the initial deployment of the Database Appliance. The default deployment of the Oracle Database Appliance is high redundancy for the +DATA and +RECO disk groups.

    While there is 12TB of raw shared storage available, the Database Backup Location and Disk Group Redundancy govern how much usable storage is presented after the initial deployment is completed. The Database Backup Location options are Local or External. When the Local backup option is selected, 60% of the available shared storage will be allocated for the Fast Recovery Area, which contains database backups and archive logs. The External backup option will allocate 20% of the available shared storage to the Fast Recovery Area.

    So, let's look at an example of High Redundancy and External Backups:

    Disk Group Redundancy – High --> triple mirroring provides ~4TB of available storage
    Database Backup Location – External --> 20% of available shared storage allocated to +RECO
    +DATA = 3.2TB of usable storage, +RECO = 0.8TB of usable storage

    What about Normal Redundancy with External Backups?

    Disk Group Redundancy – Normal --> double mirroring provides ~6TB of available storage
    Database Backup Location – External --> 20% of available shared storage allocated to +RECO
    +DATA = 4.8TB of usable storage, +RECO = 1.2TB of usable storage

    As a best practice, we would recommend using Normal Redundancy for your test and/or development Oracle Database Appliances and High Redundancy for production.
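    As a rough sanity check, here is a minimal C# sketch (not from the original post) that reproduces the usable-storage split from the 12TB raw pool, the mirroring level, and the backup-location percentages quoted above; the "local backups" line simply applies the same 60% rule to the normal-redundancy case.

        // Sketch: Oracle Database Appliance usable storage for a given redundancy
        // level and backup location, using the figures quoted in the post above.
        using System;

        class OdaStorageSplit
        {
            static void Main()
            {
                const double rawTb = 12.0;                          // raw shared storage

                Print("High redundancy,   external backups", 3, 0.20, rawTb);
                Print("Normal redundancy, external backups", 2, 0.20, rawTb);
                Print("Normal redundancy, local backups   ", 2, 0.60, rawTb);
            }

            static void Print(string label, int mirrorCopies, double recoShare, double rawTb)
            {
                double usable = rawTb / mirrorCopies;               // triple or double mirroring
                double reco = usable * recoShare;                   // Fast Recovery Area (+RECO)
                double data = usable - reco;                        // database files (+DATA)
                Console.WriteLine($"{label}: +DATA = {data:F1}TB, +RECO = {reco:F1}TB");
            }
        }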

    Read the article

  • DD-WRT/openwrt question (TP-Link WR1043N)

    - by Shiki
    Can I squeeze more speed out of my router (when it comes to the USB-attached storage device on it) with OpenWrt or DD-WRT? (Sorry, I don't really know these firmwares.) (I guess it works with ntfs-3g? I don't know.) Feel free to make this a real question. Basically the question: is the change worth it in terms of speed?

    Read the article

  • What is the best SOHO NAS currently available?

    - by VinceJS
    What is the "best" Small Office Home Office (SOHO) Network Attached Storage (NAS) device available? Best performance vs. cost, that is! I am looking for one that I can use at home to safely store my pictures and videos. What features should I look for? There are so many NAS reviews on the web - how do you choose the right one?

    Read the article

  • Command View EVA Login Problem

    - by ngadimin
    I'm using Command View EVA version 9.01 on a Windows Storage Server 2003 R2. All of a sudden I can't log in to Command View; it always says incorrect username or password. I haven't changed the password or the system. Is there any way I can fix this?

    Read the article

  • Backup the Windows user folder in the cloud?

    - by Benjamin
    As I understand it, Google Drive and Dropbox, the two cloud storage providers I happen to know, can only sync a predefined folder that is created upon installation. I'd be happy to have an automated synchronisation of my folders in the cloud, but I'm not ready to change my habits, and start saving all my documents in the folder imposed by the provider. Is it possible with one of these, or any other you might know, to sync the full Windows user folder instead?

    Read the article

  • ZFS vs XFS

    - by DrJokepu
    We're considering building a ~16TB storage server. At the moment, we're considering both ZFS and XFS as filesystem. What are the advantages, disadvantages? What do we have to look for? Is there a third, better option?

    Read the article

  • Sync my files across multiple computers

    - by EnderMB
    I do a lot of work on my home computer, ranging from programming and writing stored procedures to writing documentation and reporting. A lot of this work is university related, and constantly swapping files across several computers is annoying at best. I have a large final-year project coming up, and I'm going to be sharing this work between home and university, so I require some kind of online storage that provides version control for my programs, as well as my Word documents, PDFs and saved academic papers. Are there any good solutions for my problem?

    Read the article

  • How to compress / reduce the size of JPEG photos for archiving?

    - by Timo Huovinen
    I have 50,000 high resolution JPEG photos, of which a couple might occasionally be needed, about once a year. I wanted to zip them to save disk space, except that zipping gives no space benefit - trying to reduce the images' disk usage using WinZip, WinRAR or 7-Zip was not successful. Is there any software or algorithm similar to zip that can compress the images on disk for storage without losing any image information?

    Read the article

  • Is this common practice for disk quotas on virtual dedicated servers?

    - by Louis
    Hello, I was a bit surprised when I purchased a VPS with 15GB of storage to find that I am left with very little space after Windows' 13GB footprint. I can't even install SQL Server. Tech support is saying this is normal. I know that if they felt like it they could remove Windows from the quota or adjust it accordingly. Is this a common practice, or should I pursue the issue further with customer service?

    Read the article

  • Maximum number of hard drives in a build-your-own NAS solution [closed]

    - by groovehunter
    My IT department has a bunch of older 160/320GB drives. I'd like to use them in a build-your-own NAS device. What limitations exist with regard to the maximum number of drives that can be connected to the typical commodity hardware that might be used in a situation like this? EDIT: Okay, to be more specific, my question is what to search for to find a storage controller that can handle many drives. I simply cannot find the right search terms.

    Read the article

  • What NAS Celerra services should be monitored?

    - by wildchild
    We have a monitoring tool called SCOM which mainly monitors different OS-related services. However, we, being part of the storage team, would also like some of our services to be monitored. We have HP NAS, and I am wondering what services I can ask the other team to monitor for us and alert us about if something goes wrong. The same goes for Celerra and Centera - what important services can be monitored? I did search, but to no avail; I'm not finding any of the useful services. Any help in this regard is greatly appreciated, thanks!

    Read the article

  • What's your opinion of Amazon S3?

    - by Space Cracker
    I am searching for the best online file storage and have found a lot of services with different features... I feel that Amazon S3 is the best. Could anyone who has tried such a service give me their opinion on it, and are there any others that are more valuable than Amazon S3?

    Read the article

  • Does a HP MDS600 fit in a HP 10636 G2 rack?

    - by Jasmine Lognnes
    In the Quick Installation Guide for the MDS600/D6000 storage server it says: IMPORTANT: Some racks other than the HP Rack 10000 Series rack do not allow full access to hard drive bays 29-35 in hard drive drawer 2. I have an HP 10636 G2 19" Rack 36U (PN: AF011A). Question: It is tempting to think that the rack is supported, but can anyone confirm that "10000 Series" to HP means 10xxx, and so my rack is supported?

    Read the article

  • How should we serve files in a small bioinformatics cluster?

    - by cespinoza
    We have a small cluster of six Ubuntu servers. We run bioinformatics analyses on this cluster. Each analysis takes about 24 hours to complete, each Core i7 server can handle 2 at a time, and each takes as input about 5GB of data and outputs about 10-25GB of data. We run dozens of these a week. The software is a hodgepodge of custom Perl scripts and 3rd-party sequence alignment software written in C/C++.

    Currently, files are served from two of the compute nodes (yes, we're using compute nodes as file servers) - each node has 5 1TB SATA drives mounted separately (no RAID) and pooled via glusterfs 2.0.1. They each have 3 bonded Intel PCI gigabit Ethernet cards, attached to a D-Link DGS-1224T switch ($300, 24-port, consumer-level). We are not currently using jumbo frames (not sure why, actually). The two file-serving compute nodes are then mirrored via glusterfs. Each of the four other nodes mounts the files via glusterfs. The files are all large (4GB+), and are stored as bare files (no database/etc.), if that matters.

    As you can imagine, this is a bit of a mess that grew organically without forethought, and we want to improve it now that we're running out of space. Our analyses are I/O intensive and it is a bottleneck - we're only getting 140MB/sec between the two fileservers, maybe 50MB/sec from the clients (which only have single NICs). We have a flexible budget which I can probably get up to $5k or so.

    How should we spend our budget? We need at least 10TB of storage fast enough to serve all nodes. How fast/big does the CPU/memory of such a file server have to be? Should we use NFS, ATA over Ethernet, iSCSI, Glusterfs, or something else? Should we buy two or more servers and create some sort of storage cluster, or is 1 server enough for such a small number of nodes? Should we invest in faster NICs (say, PCI-Express cards with multiple connectors)? The switch? Should we use RAID, and if so, hardware or software? And which RAID (5, 6, 10, etc.)? Any ideas appreciated. We're biologists, not IT gurus.

    Read the article

  • Making many network shares appear as one

    - by jimbojw
    Givens:

    - disk is cheap, and there's plenty lying around on various computers around the corporate intranet
    - redundant, contiguous, large storage volumes are expensive

    Problem: it would be fantastic to have a single entry point (drive letter, network path) that presents all this space as one contiguous filesystem, effectively abstracting the disk and network architecture from the paths presented to users.

    Does anyone know how to implement such a solution? I'm open to Windows and non-Windows solutions, free and proprietary.

    Read the article

  • System.ComponentModel.Win32Exception Message: Not enough storage is available to process this command

    - by george9170
    Short of restarting our server, is there any way we can get this memory released and our website up and running? Below is a trace from the event viewer:

    An unhandled exception occurred and the process was terminated.
    Application ID: /LM/W3SVC/17/ROOT
    Process ID: 14352
    Exception: System.ComponentModel.Win32Exception
    Message: Not enough storage is available to process this command
    StackTrace:
      at MS.Win32.UnsafeNativeMethods.RegisterClassEx(WNDCLASSEX_D wc_d)
      at MS.Win32.HwndWrapper..ctor(Int32 classStyle, Int32 style, Int32 exStyle, Int32 x, Int32 y, Int32 width, Int32 height, String name, IntPtr parent, HwndWrapperHook[] hooks)
      at MS.Win32.MessageOnlyHwndWrapper..ctor()
      at System.Windows.Threading.Dispatcher..ctor()
      at System.Windows.Threading.Dispatcher.get_CurrentDispatcher()
      at ISC.MapDotNetServer.MapPrintSupport.BaseTileRequestorResolver.b__0()
      at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
      at System.Threading.ThreadHelper.ThreadStart()

    Read the article

  • Stack data storage order

    - by Jamie Dixon
    When talking about a stack, in either computing or "real" life, we usually assume "first on, last off" functionality. Because the idea of a stack is based around something in the physical world, does it matter how the data in the stack is stored? I notice in a lot of examples that the stack data is quite often stored in an array, with the newest item added to the stack placed at the bottom of the array (like adding a new plate to an existing stack of plates, except putting it underneath the other plates rather than on top). As a paradigm, does it matter in what order the data is stored within the stack, as long as the operation of the stack acts as expected?
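    A short sketch (C#, written for this listing rather than taken from the question) makes the point concrete: two array-backed stacks that store their elements in opposite physical orders still behave identically, as long as push and pop agree on which end is the "top".

        // Two array-backed stacks with opposite internal storage orders.
        // Both are LIFO; only the physical position of the "top" differs.
        using System;
        using System.Collections.Generic;

        class AppendStack<T>            // newest element stored at the END of the list
        {
            private readonly List<T> items = new List<T>();
            public void Push(T item) => items.Add(item);
            public T Pop()
            {
                T top = items[items.Count - 1];
                items.RemoveAt(items.Count - 1);
                return top;
            }
        }

        class PrependStack<T>           // newest element stored at the FRONT of the list
        {
            private readonly List<T> items = new List<T>();
            public void Push(T item) => items.Insert(0, item);
            public T Pop()
            {
                T top = items[0];
                items.RemoveAt(0);
                return top;
            }
        }

        class Demo
        {
            static void Main()
            {
                var a = new AppendStack<string>();
                var b = new PrependStack<string>();
                foreach (var plate in new[] { "red", "green", "blue" }) { a.Push(plate); b.Push(plate); }

                // Both print: blue green red - the same observable LIFO behaviour,
                // even though the backing lists hold the items in opposite orders.
                Console.WriteLine($"{a.Pop()} {a.Pop()} {a.Pop()}");
                Console.WriteLine($"{b.Pop()} {b.Pop()} {b.Pop()}");
            }
        }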

    Read the article

  • Azure Table Storage data Consistency

    - by SeeR
    Let's say I have a table in Azure Table Storage:

        public class MyTable
        {
            public string PK { get; set; }
            public string RowPK { get; set; }
            public double Amount { get; set; }
        }

    and a message in an Azure Queue which says "Add 10 to Amount". Now let's say one worker role:

    1. Takes this message from the queue
    2. Takes the row from the table
    3. Amount += 10
    4. Updates the row in the table
    5. And fails

    After a while the message is available in the queue again. So the next worker role:

    1. Takes this message from the queue
    2. Takes the row from the table
    3. Amount += 10
    4. Updates the row in the table
    5. Removes the message from the queue

    These actions result in Amount += 20 instead of Amount += 10. How can I avoid such situations?
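    One common way to defuse this (a hedged sketch, not the only answer): make the update idempotent by recording which queue message has already been applied to the row, so a redelivered message is detected and skipped, and use a conditional (ETag-style) update so two workers racing on the same row cannot both win. The ITableClient interface below is hypothetical - it only stands in for whatever SDK call performs a conditional update - and in practice the applied-message record might live in a separate tracking table or a string property rather than an in-memory set. The queue message should be deleted only after the table update succeeds.

        // Sketch: idempotent handling of an at-least-once queue message.
        // ITableClient and its conditional update are hypothetical stand-ins for the SDK.
        using System;
        using System.Collections.Generic;

        public class AmountRow
        {
            public string PK { get; set; }
            public string RowPK { get; set; }
            public double Amount { get; set; }
            public HashSet<string> AppliedMessageIds { get; set; } = new HashSet<string>();
            public string ETag { get; set; }   // version tag returned with every read
        }

        public interface ITableClient
        {
            AmountRow Read(string pk, string rowPk);
            // Assumed to return false if the row changed since it was read (ETag mismatch).
            bool ConditionalUpdate(AmountRow row, string expectedETag);
        }

        public static class QueueHandler
        {
            // Called for every delivery of the "Add 10 to Amount" message.
            public static void Handle(ITableClient table, string messageId, string pk, string rowPk)
            {
                while (true)
                {
                    var row = table.Read(pk, rowPk);

                    if (row.AppliedMessageIds.Contains(messageId))
                        break;                       // redelivery: already applied, do nothing

                    row.Amount += 10;
                    row.AppliedMessageIds.Add(messageId);

                    if (table.ConditionalUpdate(row, row.ETag))
                        break;                       // applied exactly once
                    // Lost a race with another worker: re-read and retry.
                }
                // Delete the queue message only after this point.
            }
        }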

    Read the article

  • Can't create blob container on Azure Blob Storage

    - by desautelsj
    The following code throws an error on the "CreateIfNotExist" method call. I am attempting to connect to my Azure Blob storage and create a new container called "images":

        var storageAccount = new CloudStorageAccount(
            new StorageCredentialsAccountAndKey("my_account_name", "... my shared key ..."),
            "https://blob.core.windows.net/",
            "https://queue.core.windows.net/",
            "https://table.core.windows.net/"
        );
        var blobClient = storageAccount.CreateCloudBlobClient();
        var blobContainer = blobClient.GetContainerReference("images");
        blobContainer.CreateIfNotExist();

    The error is:

        [StorageClientException: The requested URI does not represent any resource on the server.]

    The "images" container does not exist, but I was expecting it to be created instead of an error being thrown. What am I doing wrong? I have tried HTTP instead of HTTPS, but the result is the same error.

    Read the article

  • Boost Thread Specific Storage Question (boost/thread/tss.hpp)

    - by Hassan Syed
    The boost threading library has an abstraction for thread-specific (local) storage. I have skimmed over the source code, and it seems that the TSS functionality can be used in an application with any existing thread, regardless of whether it was created by boost::thread -- i.e., this implies that certain callbacks are registered with the kernel to hook in a callback function that may call the destructor of any TSS objects when the thread or process is going out of scope. I have found these callbacks. I need to cache HMAC_CTXs from OpenSSL inside the worker threads of various web servers (see this detailed question for what I am trying to do), and as such I do not control the lifetime of the threads -- the web server does. Therefore I will use the TSS functionality on threads not created by boost::thread. I just wanted to validate my assumptions before I start implementing the caching logic. Are there any flaws in my logic?

    Read the article

  • iCloud + Storage of media in iPhone Documents folder

    - by Michael Morrison
    I, like many developers, got an email from Apple recently stating that we should move our data from the Documents directory into another folder to permit more streamlined backup to iCloud:

    "In recent testing it appears that [your app] stores a fair amount of data in its Documents folder. Since iCloud backups are performed daily over Wi-Fi for each user's iOS device, it's important to ensure the best possible user experience by minimizing the amount of data being stored by your app."

    Marco Arment, of Instapaper fame, has a good take on the issue, which is that the recommended location for storing downloadable files is /Library/Caches. However, the problem is that both /tmp and /Caches can be 'cleaned' any time the OS decides that the device is running low on storage. If your app is cleaned, then the data downloaded by your app and stored by your user is gone. Naturally, the user will blame you and not Apple. What to do?

    Read the article
