Search Results

Search found 13664 results on 547 pages for 'storage engine'.

  • How Is A SAN Storage Device Managed? [closed]

    - by slickboy
    Apologies if this is a stupid question, but my only previous experience with SAN technology is using a SAN virtualisation tool (StarWind iSCSI SAN Free). This tool comes with a management interface that allows iSCSI targets to be configured on the simulated SAN, which can then be accessed. My question is basically: how is a physical SAN device managed? Is an operating system required to be installed on the SAN device, which is then managed via software? Or can it be managed remotely via an application, with no OS on the device? Thanks for any help.

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data directory on a 128GB SSD. I am dealing with large datasets (~20GB) that are loaded and processed weekly, each stored in a separate database for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence) I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the whole process. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead would mean "moved onto the slow physical device". The time taken to run my queries on a spinning disk would be less than the time taken to completely dump, drop, load, drop and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it works by merging at a directory level (from what I understand), so I'm still stuck with using multiple directories. Any help appreciated, thanks in advance.
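    One common workaround, rather than true multiple data directories, is to move an individual database's directory onto the larger disk and symlink it back into the datadir, so MySQL keeps seeing it in place while the files live on the spinning disk. A minimal sketch, assuming a Linux install with the datadir at /var/lib/mysql, the big disk mounted at /mnt/archive and a schema named dataset_2012w10 (both names are hypothetical):

        # stop MySQL so the schema's files are not in use
        sudo service mysql stop

        # move the schema's directory from the SSD datadir to the spinning disk
        sudo mkdir -p /mnt/archive/mysql
        sudo mv /var/lib/mysql/dataset_2012w10 /mnt/archive/mysql/dataset_2012w10

        # symlink it back so MySQL still finds it under the datadir
        sudo ln -s /mnt/archive/mysql/dataset_2012w10 /var/lib/mysql/dataset_2012w10
        sudo chown -h mysql:mysql /var/lib/mysql/dataset_2012w10

        sudo service mysql start

    Caveats: AppArmor/SELinux may need to be told to allow MySQL to follow the symlink, and data kept in a shared InnoDB tablespace (without innodb_file_per_table) stays in the central ibdata files and will not move this way.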

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Serverfault community,
    After researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list of potential candidates, based mainly on maturity, license and support: Ceph, Lustre, GlusterFS, HDFS, FhGFS, MooseFS and XtreemFS.
    The key properties that our system should exhibit: an open source, liberally licensed, yet production-ready solution, i.e. mature, reliable, community and commercially supported; the ability to run on commodity hardware, preferably being designed for it; high availability of the data, with the focus mostly on reads; high scalability, so operation over multiple data centres, possibly on a global scale; removal of single points of failure through replication and distribution of (meta-)data, i.e. fault tolerance.
    The sensitivity points that were identified, and that resulted in the following questions, are:
    1. Transparency to the processing layer / application with respect to data locality, i.e. knowing where data is physically located at the server level, mainly for resource allocation and fast, high-performance processing. How can this be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
    2. POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The question here is mainly: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX compliant by design; what are the pros and cons?
    3. What about the difference between synchronous and asynchronous operation of a distributed file system? Although a synchronous distributed file system is preferable for reliability, it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this?
    I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

  • Sun Grid Engine : jobs are not well balanced

    - by GlinesMome
    I use Open Grid Scheduler (a fork/copy of Sun Grid Engine). I have tried this configuration from the master:

        # qconf -mattr exechost complex_values slots=8 slave2
        # qconf -mq all.q | grep slots
        slots                 100,[slave1=1],[slave2=8]

    slave1 is down. I then run 10 qsub jobs with a sleep example (so no CPU consumption), but only 4 jobs run at the same time on slave2 even though I have configured 8 slots. What did I miss? PS: my goal is to provide effectively infinite slots, to force SGE to schedule only via consumable resources.
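    One way to schedule purely on a consumable resource rather than on slots is to define a custom consumable complex, grant it to each execution host, and request it at submit time. A minimal sketch with a hypothetical resource name jobtokens (the name and the values are assumptions, not a confirmed fix for the 4-job limit):

        # add a consumable complex; 'qconf -mc' opens the complex list in an editor,
        # where a line like the following would be added:
        #   jobtokens   jt   INT   <=   YES   YES   0   0
        qconf -mc

        # grant 8 tokens to the execution host slave2
        qconf -aattr exechost complex_values jobtokens=8 slave2

        # submit jobs that each consume one token
        qsub -l jobtokens=1 sleep_job.sh

    It is also worth checking whether something else is capping slave2 at 4 jobs, e.g. a per-queue-instance slots value or a load threshold, via qstat -f and qconf -sq all.q.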

  • Not enough storage is available to process this command

    - by Mohit
    I am getting this error on almost all operations on a Windows 7 Pro 32-bit machine. By operations I mean anything I do: update a repo from Subversion, access a local IIS site, copy a big folder, run an installer. And sometimes if I try again, it gets resolved. I think there is something wrong with Windows 7. I searched around and found posts suggesting to increase the IRPStackSize value in the registry; I did that, no luck. I am using Microsoft Security Essentials Version 1.0.1961.0 as my antivirus package. Once this error starts popping up I have to restart, and then after some random time it starts showing up again. Any help is appreciated. I am losing a lot of my time restarting my system or retrying again and again.
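    For reference, the IRPStackSize value those posts refer to lives under the LanmanServer parameters key; a sketch of checking and raising it from an elevated command prompt (the value 32 is an assumption; the documented decimal range is roughly 11 to 50, and the parameter only affects the Server/SMB side, so it may well not explain purely local failures):

        reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v IRPStackSize

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v IRPStackSize /t REG_DWORD /d 32 /f

        rem restart the Server service (or reboot) for the change to take effect
        net stop lanmanserver /y && net start lanmanserver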

  • Data transfer speed to USB storage connected to wifi router very slow

    - by RonakG
    Here is my setup: a Linksys Cisco E3200 wifi router, a MacBook Pro running OS X Lion 10.7.4, and a Seagate GoFlex 1TB hard drive connected to the wifi router via the USB port. When I try to transfer data from my MBP to the HDD, the data transfer rate is very low. I'm getting around 3MB/s write speed. This is very slow compared to the speed I get when the HDD is directly connected to the MBP. The HDD is NTFS formatted, and the router provides access to it as a Samba share, so I connect to the HDD using smb://. What is the limiting factor here affecting the data transfer rate?
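    One way to narrow down the bottleneck is to measure each piece separately: the same copy over wired Ethernet to the router, the drive attached directly over USB, and a raw streaming write to the share. A rough sketch of the last test from the Mac side (the volume name is an assumption):

        # write 1 GB of zeros to the mounted SMB share and note the reported throughput
        dd if=/dev/zero of=/Volumes/GoFlex/ddtest bs=1m count=1024
        rm /Volumes/GoFlex/ddtest

    If the wired figure is no better than the wireless one, the limit is most likely the router itself; on consumer routers the USB/Samba/NTFS path is commonly constrained by the router's CPU rather than by the network.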

  • migrating storage to a different controller

    - by bellocarico
    Hello, I've just purchased a couple of Adaptec controllers (2405/5405) for my ESXi 4.0 U1 servers. Currently ESXi and a couple of VMs are hosted on a single SATA boot disk connected to an onboard NVIDIA non-RAID controller. I know that it's possible to migrate from a single disk to RAID 1 with Adaptec and I'm pleased with that, but I'm not sure if ESXi already has the right drivers installed/loaded for this controller. Is there any way I can check this? Is ESXi clever enough to recognize the new hardware and load the right module? Thanks
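    A quick way to check, from the ESXi console (Tech Support Mode) or over the remote CLI, is to list the storage adapters ESXi has claimed and the modules it has loaded. Adaptec's 2405/5405 family is normally handled by the aacraid driver, but treat that name as an assumption and verify the exact controller against the VMware HCL for ESXi 4.0:

        # list detected storage adapters and the driver bound to each one
        esxcfg-scsidevs -a

        # check whether the expected module is loaded
        vmkload_mod -l | grep -i aac

    If the new controller shows up in the adapter list after installation, ESXi already has a working driver for it; if not, the matching driver bundle from VMware or Adaptec would need to be added first.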

  • Storage of various linux config files

    - by stantona
    I'm using git to track/store all my various config files required for Linux. They're organized as if they live in my home directory, e.g.:

        .Xresources
        .config/
            Awesome/
                rc.lua
        .xmodmap
        .zshrc
        vim/    <- submodule
        emacs/  <- submodule
        etc.

    I use git submodules for other things like the vim/emacs configuration (since I also want to keep those as separate repos). I'm thinking of creating a shell script to create the various links to these files. The goal is to make it easier to set up another Linux machine painlessly. Is this a reasonable idea? Is there a preferred approach? I'm mostly interested in hearing how other people store their configs.
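    A minimal sketch of the kind of bootstrap script this could be, assuming the repo is checked out at ~/dotfiles and mirrors the home-directory layout (all paths here are hypothetical):

        #!/bin/sh
        # link every top-level entry of the dotfiles repo into $HOME
        DOTFILES="$HOME/dotfiles"
        for f in "$DOTFILES"/.[!.]* "$DOTFILES"/*; do
            [ -e "$f" ] || continue          # skip glob patterns that matched nothing
            name=$(basename "$f")
            [ "$name" = ".git" ] && continue # don't link the repo metadata itself
            ln -sfn "$f" "$HOME/$name"       # -n: replace an existing dir symlink instead of descending into it
        done

    For the vim/emacs submodules, running 'git submodule update --init' before the script keeps them populated.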

  • Installing windows xp from a removable storage

    - by objectiveME
    Hello, I want to install Windows XP from my flash disk, but upon running DISKPART and LIST PART my flash disk is not being listed. I soon found out on another forum that it's not supported. I tried UNetbootin, but I don't want to go that route. I have learned that I can make my flash disk appear as a hard drive, but I am not sure how, since I have never done this before. I am following this tutorial: http://www.intowindows.com/how-to-install-windows-7-rc-on-acer-aspire-one-netbook/ If I present a USB stick as a hard drive, will I be able to install XP then?

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything that we would like to, and that we still have a lot of vulnerabilities. To get to where we want to be, we're going to have to back up a lot more servers than we're currently doing. All of our internal servers have some sort of directly attached drive (i.e. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want to do is get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back up anything to tape it has to be funneled through the file server. With the increase that we have planned, I don't think that funneling everything through the file server is the right course of action, and I'm thinking that maybe a second backup device would be more appropriate. I would like your input on a couple of ideas. 1) Doing HDD instead of tape. Tape is hard to deal with. We have a regular rotation cycle, so the media don't need years and years of shelf life, so I'm wondering if something HDD-based would be better. 2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our file server is a 12-disk RAID 6 setup; I was thinking of something like that, but with no RAID involved, all disks standalone so they can be used/installed/removed on an individual basis. Does any such thing exist? Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

  • Running RAID on an internal storage drive

    - by Johnny W
    I am running Windows 8 on an SSD, and it's all running swimmingly, but I want to put my documents on a "normal" HD running under RAID 1. I have four SATA 3Gb/s ports on my motherboard (my Windows 8 SSD is on a different 6Gb/s controller). All four are used (1 Blu-ray optical drive, 1 spare HD, and the 2 I wish to turn into a RAID 1 array). In my BIOS I can only change settings for the entire controller, not just individual ports. So my question is: if I turn these four ports into a RAID controller, will that negatively affect the non-RAID hardware plugged into it? I.e. will a HD or Blu-ray drive be slower or incompatible when plugged into a non-AHCI SATA port? Thanks.

  • Linux data storage and partitioning

    - by Rajeev
    In the following output of df -h you can see that I have added a new hard drive (/dev/sdb1) and have mounted it as /hdd1. My question is: if I start dumping data to /opt, will that data land on /hdd1 or on /? My goal is to utilise the new drive instead of the old disk (/dev/sda3). How can this be done?

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda3             442G  312G   12G  86% /
        tmpfs                 1.9G     0  1.9G   0% /dev/shm
        /dev/sda1             194M   57M  128M  31% /boot
        /dev/sdb1             1.7T  201M  2.6T   1% /hdd1
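    Data written to /opt lands on whatever filesystem /opt resides on, which in this layout is / (/dev/sda3); the new disk is only used for paths under /hdd1. A minimal sketch of moving /opt onto the new drive and mounting it back in place (devices and paths follow the df output above, adjust as needed, and stop anything writing to /opt first):

        # copy the existing contents onto the new disk, preserving ownership and permissions
        mkdir /hdd1/opt
        cp -a /opt/. /hdd1/opt/

        # keep the old copy until everything checks out
        mv /opt /opt.old && mkdir /opt

        # bind-mount the new location over /opt
        mount --bind /hdd1/opt /opt

        # make it permanent across reboots
        echo '/hdd1/opt  /opt  none  bind  0 0' >> /etc/fstab

    If the whole 2TB disk is meant for /opt, an alternative is simply to mount /dev/sdb1 at /opt instead of /hdd1.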

  • E2K7: Can't add journaling mailbox to storage group

    - by Agent
    We just added a new SG to a standalone E2K7 server, but Exchange pops an error when we try to enable journaling on it. The journal recipient mailbox was created a while back and as far as we can tell there are no issues with it. It's viewable in the GAL and accessible through OWA. The error is: Set-MailboxDatabase Error: Object "domain.company.com/Journaling/Journal5" could not be found. Please make sure that it was spelled correctly or specify a different object. This Journal5 account is in a child domain (like all the other journaling mailboxes we have attached to other mailbox server SGs). We also tried attaching one of the working journaling mailboxes to this SG, but that popped back the same error. The event log is not showing any errors at all. We are running SP1 on this server. I've tried dismounting/remounting the store and bouncing the IS and SA services, but that didn't help. Any suggestions?
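    Since the journal recipient sits in a child domain, one thing worth trying from the Exchange Management Shell is widening the recipient scope to the whole forest and passing the resolved mailbox object instead of a string; a hedged sketch (the database identity below is a placeholder):

        # Exchange 2007: make the shell search the entire forest, not just the local domain
        $AdminSessionADSettings.ViewEntireForest = $true

        # resolve the journaling mailbox, then hand the object to Set-MailboxDatabase
        $jm = Get-Mailbox "Journal5"
        Set-MailboxDatabase "SERVER\New Storage Group\Mailbox Database" -JournalRecipient $jm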

  • Password History Storage and Variability Comparison

    - by z3ke
    I believe this situation would be similar to many others out there, so maybe some of you can shed some light... Supposedly, when making password changes through MS Exchange every 90 days, you cannot use any simple variation of one of your old passwords, up to whatever limit the admins have set for the system. My question: if your previous passwords are only stored as hashes, how can the system check for the "just changed one letter" case? Wouldn't it have to have access to the old plain-text passwords in order to make those comparisons? The only other thing I can think of is that, upon original creation of a password, it also stores all the one-character permutations of it, so that they can be banned later?
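    The usual answer is that no plaintext history is needed: at change time the system has the new password in plaintext (and normally the old one too, since you have to type it), so it can generate close variants of what it was just given and compare their hashes against the stored history. A rough sketch of that idea, using unsalted SHA-256 purely for illustration (real password hashes are salted, which restricts this kind of check to hashes whose salts are known):

        #!/bin/bash
        # reject a candidate password if any one-character substitution of it
        # matches a hash already present in the stored history
        new="$1"                          # candidate password, available in plaintext at change time
        history_file="old_hashes.txt"     # one hex SHA-256 per line (illustrative format)
        chars='abcdefghijklmnopqrstuvwxyz0123456789'

        for ((i = 0; i < ${#new}; i++)); do
            for ((j = 0; j < ${#chars}; j++)); do
                variant="${new:0:i}${chars:j:1}${new:$((i+1))}"
                h=$(printf '%s' "$variant" | sha256sum | awk '{print $1}')
                if grep -qx "$h" "$history_file"; then
                    echo "rejected: too close to a previous password"
                    exit 1
                fi
            done
        done
        echo "accepted"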

  • Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

    - by user702295
    New 7.3.1.3 parameter: TargetTaskSize
    Old parameter: BranchID Multiple (deprecated from 7.3.1.3 onwards)
    Parameter Location: Parameters > System Parameters > Engine > Proport
    Default: 0
    Engine Mode: Both

    Details: Specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task. Allocation will be affected by forecast tree branch size. TargetTaskSize is automatically calculated. It holds the preferred branch size, in number of combinations at the lowest level. This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.

    - As the forecast is generated, the engine goes up the tree using max_fore_level and not top_level - 1. Max_fore_level has to be less than or equal to top_level - 1. Due to this requirement, combinations falling under the same top level - 1 member must be in the same task. A member of the top level - 1 of the forecast tree is known as a branch. An engine task is therefore comprised of one or more branches.

    - Revealing the current task size: go to Engine Administrator --> View --> Branch Information and run the application on your Demantra schema. This will be deprecated in 7.3.1.3 since there is no longer a means of adjusting the branch size directly. The focus is now on proper hierarchy / forecast design.

    - Control of tasks: the number of tasks created is the lowest of the number of branches (as defined by top level - 1 members in the forecast tree), the number of engine sessions, and the value of TargetTaskSize. You are used to using the branch multiplier in this calculation; as of 7.3.1.3, the branch ID multiple is deprecated.

    - Discovery of the current branch size: review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches. If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.

    - Control of forecast tree branch size:
      - Run the following SQL to determine how evenly the branches are being split by the engine:
            select count(*), branch_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by branch_id;
        This will give you an understanding of whether some of the individual branches have an unusually large number of rows, which might indicate that the engine is not efficiently dividing up the parallel tasks.
      - Based on the results of this SQL, we may want to adjust the branch ID multiplier and/or the number of engines (both of these settings are found in the Engine Administrator).
            select count(*), level_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by level_id;
        This will give us an understanding of the level of the forecast tree at which the forecast is being generated. Having a majority of combinations higher on the forecast tree might indicate either a poorly designed forecast tree and/or engine parameters that are too strict. Based on the results of this we would adjust the forecast tree to see if choosing a different hierarchy might produce a forecast, with more combinations, at a lower level.

    For example:
      - Review the 2nd highest level in the forecast tree, below highest/highest, as this is the level which determines the size of the branches.
      - If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.
      - For example, if the highest level of the forecast tree is set to Brand/All Locations, you have 10 brands, and 2 of the brands account for 67% and 29% of all combinations, there is a distinct possibility that the tasks resulting from these 2 branches will be too large for a single engine to process. Some possible solutions could be to remove the Brand level and instead use a different product grouping which has a more even distribution, possibly Product Group.
      - It is also possible to add a location dimension to this forecast tree level, for example Customer. This will also reduce forecast tree branch size and will deliver a balanced task allocation.
      - A correctly configured forecast tree is something that is done by the Implementation team and is not the responsibility of Oracle Support.

    Allocation will be affected by forecast tree branch size. When TargetTaskSize is set to 0, the default value, the system automatically calculates a value for 'TargetTaskSize' depending on the number of engines.

    - QUESTION: Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead of automatically calculating the size?
      ANSWER: DEV strongly recommends that the setting of TargetTaskSize remain at the DEFAULT of ZERO (0).

    - How to control the number of engines? Determine how many CPUs are on the machine(s) that is (are) running the engine. As mentioned earlier, the general rule is that you should designate 2 engines per CPU that is available. So for example, if you are running the engine on a machine that has 4 CPUs then you can have up to 8 engines designated in the Engine Administrator. In this type of architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost' repeated eight times in this field.

    - Where do I set the number of engines? To add multiple computers where the engine will run, please make a back-up of the Settings.xml file under the Analytical Engines\bin\ folder, then edit it and add the selected machines there. For example, this will allow 3 engines to start:
        <Entry>
          <Key argument="ComputerNames" />
          <Value type="string" argument="localhost,localhost,localhost" />
        </Entry>

    Otherwise, if there are no additional engines defined, the calculated value of 'TargetTaskSize' is used. (Oracle does not recommend changing the default value.) The TargetTaskSize holds the engine's preferred branch size, in number of level 1 combinations (level 1 combinations are known as the group size). The engine manager will use this parameter to attempt to create branches of similar size.
    * The engine manager will not create engines that do not have a branch.

    The engine divider algorithm uses the value of 'TargetTaskSize' as a system-preferred branch size to create branches that are more equal in size, which improves engine performance. The engine divider will try to add as many tasks as possible to an existing branch, up to the limit of 'TargetTaskSize' level 1 combinations, before adding new branches.

    Coming up next:
    - The engine divider
    - Group size
    - Level 1 combinations
    - MAX_FORE_LEVEL
    - Engine Parameters

  • Java Meta Search Engine API

    - by Loki
    I'm currently researching Java libraries to help in building a meta type search engine in the sense of being able to replace any given search engine in the back-end of the application or to simultaneously search using multiple search engines. I'm not interested in the GUI part here, just the generalization of search engine APIs and usage. I'd like to know about the common libraries used to achieve this task and if there are any common patterns used in this case. I imagined that this problem is common enough to be able to find plenty of stuff on Google, but it seems like search is a very proprietary domain and not much information is fed back to the community.

  • Extracting a Rails application into a plugin or engine

    - by Globalkeith
    I have a Rails 2.3 application which I would like to extract into a plugin, or engine. The application has user authentication, and basic CMS capabilities supported by the ancestry plugin. I want to extract the logic of the application into a plugin/engine so that I can use this code for future projects, with a different "skin" or "theme" if required. I'm not entirely sure I actually understand the difference between the plugin and engine concepts, so that would be a good first point. What is the best approach, and are there any good starting points, links, explanations or examples that I should follow? Also, with the release of Rails 3 to consider, is there anything that I should be aware of with regard to plugins etc.? I am going to start off by watching Ryan's http://railscasts.com/episodes/149-rails-engines but obviously that's over a year old now, so one of the challenges I'm faced with is finding the most up-to-date and relevant information on this subject. All tips and help gratefully received.

  • Templating Engine to Generate Simple Reports in .NET

    - by dr. evil
    I'm looking for a free templating engine to generate simple reports. I want some basic features, such as:

        Ability to write loops (with any IEnumerable)
        Passing variables
        Passing template files (main template, footer, header)

    I'll use this to generate reports in HTML and XML. I'm not looking for an ASP.NET template engine; this is for a WinForms application. I've seen this question http://stackoverflow.com/questions/340095/can-you-recommend-a-net-template-engine, however all of those template engines are total overkill for me and focused on ASP.NET. Please only recommend free libraries. // I'm still looking at NVelocity, but it doesn't look very promising for .NET: overly complicated, and when you download it it's a bunch of files with no clear instructions, no tutorial, no startup document, etc.

  • Should I try to write simple key-value storage by myself?

    - by shabunc
    I need a key-value storage in the simplest form we can think of. Keys should be fixed-length strings, values should be some texts. This key-value storage should have an HTTP-backed API. That's basically it. As you can see, there is no big difference between such a storage and a web application with some upload functionality. The thing is, it'll take a few hours (including tests and coffee drinking) to write something like this. "Something like this" will be fully under my control and can be tuned on demand. Should I, in this specific case, not try to reinvent bicycles? Is it better to use one of the existing NoSQL solutions? If yes, which one exactly? If, say, I needed something SQL-like, I wouldn't ask and wouldn't try to write something myself. But with NoSQL I just don't know what is adequate and what is not.
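    For scale, several off-the-shelf stores already expose more or less exactly this kind of HTTP API, so a few curl calls are enough to evaluate one. A quick sketch against CouchDB, whose native interface is plain HTTP (the database name and key are made up):

        # create a database
        curl -X PUT http://127.0.0.1:5984/notes

        # store a text value under a fixed-length key
        curl -X PUT http://127.0.0.1:5984/notes/abc123 \
             -H "Content-Type: application/json" \
             -d '{"text": "some longer text value"}'

        # read it back
        curl http://127.0.0.1:5984/notes/abc123

    Riak offers a similar HTTP key/value interface, and even nginx with its DAV module can serve as a crude PUT/GET store, so the build-versus-reuse question is mostly about operational control rather than effort.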

  • java.lang.Error: "Not enough storage is available to process this command" when generating images

    - by jhericks
    I am running a web application on BEA Weblogic 9.2. Until recently, we were using JDK 1.5.0_04, with JAI 1.1.2_01 and Image IO 1.1. In some circumstances (we never figured out exactly why), when we were processing large images (but not that large - a few MB), the JVM would crash without any error message or stack trace or anything. This didn't happen much in production, but enough to be a nuisance and eventually we were able to reproduce it. We decided to switch to JRockit90 1.5.0_04 and we were no longer able to reproduce the problem in our test environment, so we thought we had it licked. Now, however, after the application server has been up for a while, we start getting the error message, "Not enough storage is available to process this command" during image operations. For example:

        java.lang.Error: Error starting thread: Not enough storage is available to process this command.
            at java.lang.Thread.start()V(Unknown Source)
            at sun.awt.image.ImageFetcher$1.run(ImageFetcher.java:279)
            at sun.awt.image.ImageFetcher.createFetchers(ImageFetcher.java:272)
            at sun.awt.image.ImageFetcher.add(ImageFetcher.java:55)
            at sun.awt.image.InputStreamImageSource.startProduction(InputStreamImageSource.java:149)
            at sun.awt.image.InputStreamImageSource.addConsumer(InputStreamImageSource.java:106)
            at sun.awt.image.InputStreamImageSource.startProduction(InputStreamImageSource.java:144)
            at sun.awt.image.ImageRepresentation.startProduction(ImageRepresentation.java:647)
            at sun.awt.image.ImageRepresentation.prepare(ImageRepresentation.java:684)
            at sun.awt.SunToolkit.prepareImage(SunToolkit.java:734)
            at java.awt.Component.prepareImage(Component.java:3073)
            at java.awt.ImageMediaEntry.startLoad(MediaTracker.java:906)
            at java.awt.MediaEntry.getStatus(MediaTracker.java:851)
            at java.awt.ImageMediaEntry.getStatus(MediaTracker.java:902)
            at java.awt.MediaTracker.statusAll(MediaTracker.java:454)
            at java.awt.MediaTracker.waitForAll(MediaTracker.java:405)
            at java.awt.MediaTracker.waitForAll(MediaTracker.java:375)
            at SfxNET.System.Drawing.ImageLoader.loadImage(Ljava.awt.Image;)Ljava.awt.image.BufferedImage;(Unknown Source)
            at SfxNET.System.Drawing.ImageLoader.loadImage(Ljava.net.URL;)Ljava.awt.image.BufferedImage;(Unknown Source)
            at Resources.Tools.Commands.W$zw(Ljava.lang.ClassLoader;)V(Unknown Source)
            at Resources.Tools.Commands.getContents()[[Ljava.lang.Object;(Unknown Source)
            at SfxNET.sfxUtils.SfxResourceBundle.handleGetObject(Ljava.lang.String;)Ljava.lang.Object;(Unknown Source)
            at java.util.ResourceBundle.getObject(ResourceBundle.java:320)
            at SoftwareFX.internal.ChartFX.wxvw.yxWW(Ljava.lang.String;Z)Ljava.lang.Object;(Unknown Source)
            at SoftwareFX.internal.ChartFX.wxvw.vxWW(Ljava.lang.String;)Ljava.lang.Object;(Unknown Source)
            at SoftwareFX.internal.ChartFX.CommandBar.YWww(LSoftwareFX.internal.ChartFX.wxvw;IIII)V(Unknown Source)
            at SoftwareFX.internal.ChartFX.Internet.Server.xxvw.YzzW(LSoftwareFX.internal.ChartFX.Internet.Server.ChartCore;Z)LSoftwareFX.internal.ChartFX.CommandBar;(Unknown Source)
            at SoftwareFX.internal.ChartFX.Internet.Server.xxvw.XzzW(LSoftwareFX.internal.ChartFX.Internet.Server.ChartCore;)V(Unknown Source)
            at SoftwareFX.internal.ChartFX.Internet.Server.ChartCore.OnDeserialization(Ljava.lang.Object;)V(Unknown Source)
            at SoftwareFX.internal.ChartFX.Internet.Server.ChartCore.Zvvz(LSoftwareFX.internal.ChartFX.Base.wzzy;)V(Unknown Source)

    Has anyone seen something like this before? Any clue what might be happening?
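    For what it's worth, when a 32-bit JVM on Windows throws this error from Thread.start, it usually means the OS could not allocate another native thread stack because the process address space is close to exhausted; the usual checks are the live thread count and how much of that address space the Java heap is taking. A hedged sketch of where to look (flag values are illustrative only, not a recommended configuration):

        rem dump all thread stacks from the running JRockit process (jrcmd ships with JRockit)
        jrcmd <pid> print_threads

        rem in the WebLogic start script, trimming the heap and per-thread stack size
        rem leaves more address space for native thread stacks, e.g.:
        set JAVA_OPTIONS=%JAVA_OPTIONS% -Xmx1024m -Xss256k

    The image-fetcher frames in the trace suggest it is also worth checking whether image-loading threads pile up over time before the error appears.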
