Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 25/1338 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Why is that SQL Server Instance under stress?

    There are several reliable indications, using SQL queries, of what is causing SQL Server performance problems. Some of these are fairly obvious, but others aren't. Grant shows how you can get clues from any SQL Server as to the cause of stress.

    Read the article

  • Storage Technology for the Home User

    Linux Magazine: "Sometimes you just have to get excited about what you can buy, hold in your hand, and use in your home machines. Let's look at some cool storage technology that the average desktop user can tackle."

    Read the article

  • Oracle Automatic Storage Management (Japanese-language seminar)

    - by Yusuke.Yamamoto
    Seminar date: 2010/03/01. An overview of Oracle Automatic Storage Management (ASM), available since Oracle Database 10g, covering how ASM manages Oracle Database storage, plus an introduction to the ASM Storage Reclamation Utility (ASRU). Slides (Japanese): http://www.oracle.com/technetwork/jp/database/1005200-oracle-asm-and-tr-321865-ja.pdf

    Read the article

  • Using singleton instead of a global static instance

    - by Farstucker
    I ran into a problem today and a friend recommended I use a global static instance or, more elegantly, a singleton pattern. I spent a few hours reading about singletons but a few things still escape me.

    Background: what I'm trying to accomplish is creating an instance of an API and using this one instance in all my classes (as opposed to making a new connection, etc.). There seem to be about 100 ways of creating a singleton, but with some help from yoda I found some thread-safe examples. So, given the following code:

        public sealed class Singleton
        {
            public static Singleton Instance { get; private set; }

            private Singleton()
            {
                APIClass api = new APIClass(); // Can this be done?
            }

            static Singleton()
            {
                Instance = new Singleton();
            }
        }

    How/where would you instantiate this new class, and how should it be called from a separate class?

    EDIT: I realize the Singleton class can be called with something like Singleton obj1 = Singleton.Instance; but would I be able to access the methods within the API class (i.e. obj1.Start)? (Not that I need to, just asking.)

    EDIT #2: I might have been a bit premature in checking the answer, but I do have one small thing that is still causing me problems. The API is launching just fine; unfortunately I'm able to launch two instances. New code:

        public sealed class SingletonAPI
        {
            public static SingletonAPI Instance { get; private set; }

            private SingletonAPI() {}

            static SingletonAPI()
            {
                Instance = new SingletonAPI();
            }

            // API method:
            public void Start() { API myAPI = new API(); }
        }

    But if I try to do something like this:

        SingletonAPI api = SingletonAPI.Instance;
        api.Start();
        SingletonAPI api2 = SingletonAPI.Instance; // This was just for testing.
        api2.Start();

    I get an error saying that I cannot start more than one instance.
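
    For what it's worth, a minimal sketch of one way to get the behaviour being asked about, assuming a hypothetical third-party APIClass with a Start() method: the singleton constructs the API exactly once and delegates to it, so every caller shares the same connection and a second Start() cannot spawn a second API.

        public sealed class SingletonAPI
        {
            public static SingletonAPI Instance { get; private set; }

            private readonly APIClass api;  // the one and only API instance

            private SingletonAPI()
            {
                api = new APIClass();       // created once, when the type is first used
            }

            static SingletonAPI()
            {
                Instance = new SingletonAPI();
            }

            // Delegate to the shared instance instead of new-ing up an API each call:
            public void Start() { api.Start(); }
        }

    Calling code would then use SingletonAPI.Instance.Start() from any class; the static constructor guarantees thread-safe, one-time initialisation.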

    Read the article

  • How to terminate a particular Azure worker role instance

    - by Oliver Bock
    Background: I am trying to work out the best structure for an Azure application. Each of my worker roles will spin up multiple long-running jobs. Over time I can transfer jobs from one instance to another by switching them to a read-only mode on the source instance, spinning them up on the target instance, and then spinning the originals down on the source instance. If I have too many jobs then I can tell Azure to spin up extra role instances and use them for new jobs. Conversely, if my load drops (e.g. during the night) then I can consolidate outstanding jobs onto a few machines and tell Azure to give me fewer instances. The trouble is that (as I understand it) Azure provides no mechanism to let me decide which instance to stop. Thus I cannot know which servers to consolidate onto, and some of my jobs will die when their instance stops, causing delays for users while I restart those jobs on surviving instances.

    Idea 1: I decide which instance to stop, and return from its Run(). I then tell Azure to reduce my instance count by one, and hope it concludes that the broken instance is a good candidate. Has anyone tried anything like this?

    Idea 2: I predefine a whole bunch of different worker roles with identical contents. I can individually stop and start them by switching their instance count from zero to one, and back again. I think this idea would work, but I don't like it because it seems to go against the natural Azure way of doing things, and because it involves me in a lot of extra bookkeeping to manage the extra worker roles.

    Idea 3: Live with it.

    Any better ideas?
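
    As a sketch of what Idea 1 might look like with the classic worker-role SDK (Microsoft.WindowsAzure.ServiceRuntime; the drain-signalling mechanism is an assumption here), with one caveat worth knowing: when Run() returns, Azure recycles the instance rather than leaving it stopped, so returning from Run() alone does not mark the instance for removal.

        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class JobWorkerRole : RoleEntryPoint
        {
            private volatile bool drain;  // set via some inter-role signal to retire this instance

            public override void Run()
            {
                while (!drain)
                {
                    // ... pick up and service long-running jobs ...
                    Thread.Sleep(1000);
                }
                // Returning ends Run(); Azure then recycles (restarts) the instance,
                // so the subsequent scale-down request still chooses its own victim.
            }

            public override void OnStop()
            {
                drain = true;  // called by the fabric when this instance is being shut down
            }
        }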

    Read the article

  • Setting up a low-cost image storage server with a 24x SSD array to get high IOPS?

    - by Nenad
    I want to build (let's name it) a low-cost Ra*san to host the images for our social site (many millions of them). We have 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for the photo storage. To have HA I can add a second one and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I think to go with consumer MLC SSDs. I have read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think about my idea?

    - I'm not sure between RAID 6 and RAID 10 (more IOPS, cost of SSDs).
    - Is ext4 OK for the filesystem?
    - Would you use 1 or 2 RAID controllers, with an extender backplane?

    If anyone has realized something similar I would be happy to get real-world numbers.

    UPDATE: I have bought 12 (plus some spares) OCZ Talos 480 GB SAS SSD drives; they will be placed in a 12-bay DAS and attached to a PERC H800 controller (1 GB NV cache, manufactured by LSI, with FastPath). I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.
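
    As a back-of-envelope check on that sizing (a sketch only; the ~10,000 4K random-read IOPS per consumer SSD is an assumed figure, not a measurement):

        using System;

        class ArraySizing
        {
            static void Main()
            {
                int drives = 24;
                double driveGB = 240;
                int parityDrives = 2;  // RAID 6 spends two drives' capacity on parity

                double usableTB = (drives - parityDrives) * driveGB / 1000;
                Console.WriteLine("Usable capacity: {0:F2} TB", usableTB);  // ~5.28 TB

                double perDriveReadIops = 10000;  // assumption for a consumer MLC SSD
                Console.WriteLine("Aggregate 4K random reads: {0:N0} IOPS",
                                  drives * perDriveReadIops);  // reads scale across all members

                // The write side is where RAID 6 vs RAID 10 differs: roughly 6 back-end
                // I/Os per random write for RAID 6 against 2 for RAID 10.
            }
        }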

    Read the article

  • How can I remove all drivers and other files related to a USB Mass Storage device?

    - by Bob
    I have a flash drive here that does not work with one OS on one computer - let's call it desktop Windows 7. It works fine on another computer - laptop Windows 7. It also works fine under Windows 8 on the same desktop computer. Other flash drives work fine under desktop Windows 7. So it is not a hardware issue, and not a generic USB Mass Storage driver issue; it's something specific to this drive.

    On desktop Windows 7, I can connect the drive but no volume comes up under Windows Explorer. Ditto for Disk Management. With diskpart, loading hangs until I unplug the drive; if I replug it and try list disk, it hangs again. If I unplug the drive at this point, list disk prints out all attached drives - including the just-removed flash drive. The drive consistently appears under Device Manager, but uninstalling the drivers, restarting and reinstalling the drivers (by inserting the drive) only works for the first insertion. After that it fails again. I get the feeling that the driver files are not actually removed and are corrupted, meaning on every reinstall it's the same corrupted drivers being installed.

    Is there any way to remove these drivers completely? Or perhaps some other setting Windows 7 retains? Formatting the drive through another computer/OS does not help. I've also tried a complete wipe and rebuild of the MBR and single partition. The allocation unit size makes no difference; neither does an NTFS format. This is a relatively small matter, and I would not like to reinstall the entire OS!

    Read the article

  • Why doesn't my UIViewController class keep track of an NSArray instance variable?

    - by TaoStoner
    Hey, I am new to Objective-C 2.0 and Xcode, so forgive me if I am missing something elementary here. Anyway, I am trying to make my own UIViewController class called GameView to display a new view. To work the game I need to keep track of an NSArray that I want to load from a plist file. I have made a method loadDefault which I want to load the correct NSArray into an instance variable. However, it appears that after the method executes, the instance variable loses track of the array. It's easier if I just show you the code:

        @interface GameView : UIViewController {
            IBOutlet UIView *view;
            IBOutlet UILabel *label;
            NSArray *currentGame;
        }
        - (IBOutlet)next;
        - (void)loadDefault;
        ...

        @implementation GameView

        - (IBOutlet)next {
            int numElements = [currentGame count];
            int r = rand() % numElements;
            NSString *myString = [currentGame objectAtIndex:(NSUInteger)r];
            [label setText:myString];
        }

        - (void)loadDefault {
            NSDictionary *games;
            NSString *path = [[NSBundle mainBundle] bundlePath];
            NSString *finalPath = [path stringByAppendingPathComponent:@"Games.plist"];
            games = [NSDictionary dictionaryWithContentsOfFile:finalPath];
            currentGame = [games objectForKey:@"Default"];
        }

    When loadDefault gets called, everything runs perfectly, but when I try to use the currentGame NSArray later, in the method call to next, currentGame appears to be nil. I am also aware of the memory management issues with this code. Any help with this problem would be appreciated.

    Read the article

  • Noob with git repository on Windows Storage Server 2008?

    - by HibbyHoo
    I have a Western Digital Sentinel at home running Windows Storage Server 2008 R2 Essentials. I have several git repositories on it for my own personal projects, and have no problem pushing and pulling over my local network. I want to be able to access those repos remotely, from anywhere. I am able to log in and remotely access folders and files on it, but I cannot clone repos using the same address. It hangs for a REALLY long time before finally failing with an error:

        git.exe clone --progress -v "https://myIpAddressHere/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git" "D:\repo"

        Cloning into 'D:\repo'...
        error: Failed connect to myIpAddress:443; No error
        while accessing https://myIpAddress/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git/info/refs
        fatal: HTTP request failed
        git did not exit cleanly (exit code 128)

    I'm not too privy to networking or web development, and I have only a rudimentary understanding of how to use git (with TortoiseGit). I'm having a hard time finding search results for this specific problem, and a hard time interpreting generic tutorials for the general scope of this problem. TortoiseGit version: 1.7.13.0. git version: 1.7.10.msysgit.1.

    Read the article

  • New Project Starting. Got Gas?

    - by merrillaldrich
    “Storage is just like gasoline,” said a fellow DBA at the office the other day. This DBA, Mike is his name, is one of the smartest people I know, so I pressed him, in my subtle and erudite way, to elaborate. “Um, whut?” I said. “Yeah. Now that everything is shared – VMs or consolidated SQL Servers and shared storage – if you want to do a big project, like, say, drive to Vegas, you better fill the car with gas. Drive back and forth to work every day? Gas. Same for storage.” This was a light-bulb-above-my-head...(read more)

    Read the article

  • Attachments in Oracle BPM 11g – Create a BPM Process Instance by passing an Attachment

    - by Venugopal Mangipudi
    Problem Statement: On a recent engagement I had a requirement where we needed to create BPM instances using a message start event. The challenge was that the instance needed to be created after polling a file location and attaching the picked-up file (PDF) as an attachment to the instance.

    Proposed Solution: I was contemplating using the process API to accomplish this, but came up with a solution that involves a BPEL process to pick up the file and send a notification to the BPM process, passing the attachment as a payload. The following are some of the brief steps that were used to build the solution.

    BPM process to receive an attachment as part of the payload: The BPM process is a very simple process which has a Message Start event that accepts the attachment as an argument, and a simple User Task that the user can use to view the attachment (as part of the OOTB attachment panel). The input payload is based on AttachmentPayload.xsd. The 3 key elements of the payload are:

        <xsd:element name="filename" type="xsd:string"/>
        <xsd:element name="mimetype" type="xsd:string"/>
        <xsd:element name="content" type="xsd:base64Binary"/>

    A screenshot of the Human Task data assignment that needs to be performed to attach the file is provided here. Once the process and the UI project (default generated UI) are deployed to the SOA server, copy the WSDL location of the process service (from EM). This WSDL is used in the BPEL project to create the instances in the BPM process after a file is polled.

    BPEL process to poll for the file and create instances in the BPM process: For the BPEL process, a File adapter was configured as a Read service (File Streaming option, keeping the schema as Opaque). Once a location and the file pattern to poll are provided, the Read service partner link is wired to invoke the BPEL process. Also, using the BPM process WSDL, we can create the web service reference and invoke the start operation. Before we do the assignment for the Invoke operation, a global variable should be created to hold the value of the fileName of the file. The mapping to the global variable can be done in the Receive activity properties (jca.file.FileName). So, in the Assign operation before we invoke the BPM process service, we can get the content of the file from the Receive input variable and the fileName from the jca.file.FileName property. The mimetype needs to be hard-coded to the MIME type of the file: application/pdf (I am still researching ways to derive the MIME type, as it is not available as part of the jca.file properties). The screenshot of the BPEL process can be found here and the Assign activity can be found here. The project source can be found at the following location. A sample PDF file to test the project and a screenshot of the BPM Human Task screen after the successful creation of the instance can be found here.

    References:
    [1] https://blogs.oracle.com/fmwinaction/entry/oracle_bpm_adding_an_attachment
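
    For concreteness, an instance of that payload might look like the following (the root element name is an assumption here, since only the three child elements are defined in AttachmentPayload.xsd above):

        <attachmentPayload>
            <filename>invoice.pdf</filename>
            <mimetype>application/pdf</mimetype>
            <!-- base64Binary holds the raw PDF bytes; "JVBERi0xLjQ" decodes to "%PDF-1.4" -->
            <content>JVBERi0xLjQK...</content>
        </attachmentPayload>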

    Read the article

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage that they be put on a file server, so that others (with appropriate permissions) can use them and so that the files are backed up properly. The result of this is that most users have large hard drives that are sitting mostly empty. It's 2010 now. Surely there is a system out there that lets you turn that empty space into a virtual SAN or document library?

    What I envision is a client program that is pushed out to users' PCs and coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as being smart enough to cache recent files locally. For redundancy the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations, so that an instance of the entire repository lives in each group to protect against a disaster in one building taking down everything else.

    Obviously you wouldn't point your database server here, but for simpler things I see several advantages:

    - Files can often be transferred from a nearer machine.
    - Disk space grows automatically as your company does.
    - It should ultimately be cheaper, as you don't need to keep a separate set of disks.

    I can see a few downsides as well:

    - Occasional degradation of user PC performance, if the machine has to serve or accept a large file transfer during a busy period.
    - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing).
    - You still need a way to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials.

    Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN. I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system.

    Unfortunately, the closest thing I can find is Dienst, and it's just a paper that dates back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?

    Read the article

  • Mounting a Mail Store that is in a Recovery Storage group On Exchange 2003

    - by Kyle Brandt
    If I have a production server with the mail store Foo in both the storage group companyName and the Recovery Storage Group, is it okay to mount Foo in the RSG while it is mounted in companyName, so I can extract some mailboxes from the Recovery Storage Group? Basically, I am wondering whether it is okay to mount it in both the production and recovery storage groups while the mail server is in production and that particular mail store is in production.

    Reference: "Once an RSG is restored into and mounted up you can connect to it with ExMerge and read out mailboxes into PST files for merging back into a 'live' store" -- http://serverfault.com/questions/49728/test-restore-of-exchange-dbs-with-the-ms-exchange-plugin-of-netbackup-6

    Read the article

  • Unable to connect to EC2 instance after "reboot"

    - by KPL
    I am not able to connect to my m1.small instance after rebooting it. I have already associated the public IP with this instance. Upon checking the system log, this seems to be the issue:

        cloud-init-nonet[11.84]: waiting 10 seconds for network device
        cloud-init-nonet[21.85]: waiting 120 seconds for network device
        cloud-init-nonet[141.85]: gave up waiting for a network device.
        Cloud-init v. 0.7.3 running 'init' at Sun, 18 May 2014 07:02:55 +0000. Up 142.54 seconds.
        ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: | Device |   Up  |  Address  |    Mask   |     Hw-Address    |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: |   lo   |  True | 127.0.0.1 | 255.0.0.0 |         .         |
        ci-info: |  eth0  | False |     .     |     .     | 02:43:xx:xx:xx:xx |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    A bunch of these follow the above message:

        2014-05-18 07:02:56,178 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by [Errno 101] Network is unreachable)]

    This is obviously related to the network interface not working correctly. I have tried this so far:

    - Relaunch a new instance from the custom AMI (created from EBS) of the failing instance. The same error shows up in the logs.
    - Attach a new network interface to the EC2 instance. The error still persists. eth1 shows up in the list, but its "Up" column is False.

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

        D0: 500 GB (System, Boot)
        D1: 1 TB
        D2: 500 GB
        D3: 250 GB

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

        D0 P1: 100 MB: System-Reserved Boot
        D0 P2: 50 GB: Virtual machine VMDKs for OS
        D0 P3: 350 GB: Data
        D1 P1: 100 MB: System-Reserved Boot
        D1 P2: 50 GB: Virtual machine VMDKs for OS
        D1 P3: 800 GB: Data
        D2 P1: 450 GB: Data
        D3 P1: 200 GB: Data

    This will result in: a mirrored boot partition, a mirrored operating system, mirrored virtual machine OS disks, and four partitions for data. In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

        SYSTEM: 100 MB: the small boot partition created by the Windows 7 installer (RAID 1)
        HOST: 50 GB: the Windows 7 partition (RAID 1)
        GUESTS: 50 GB: virtual machine guest VMDKs (RAID 1)
        VG1: 900 GB: volume group consisting of a RAID 5 and two RAID 1s
        VG2: 300 GB: volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for my data that is not critical and is easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • Should these concerns be separated into separate objects?

    - by Lewis Bassett
    I have objects which implement the interface BroadcastInterface, representing a message that is to be broadcast to all users of a particular group. It has setter and getter methods for the Subject and Body properties, and an addRecipientRole() method, which takes a given role, finds the contact token (e.g., an email address) for each user in the role, and stores it. It then has a getContactTokens() method.

    BroadcastInterface objects are passed to an object that implements BroadcasterInterface. These objects are responsible for broadcasting a passed BroadcastInterface object. For example, an EmailBroadcaster implementation of BroadcasterInterface will take EmailBroadcast objects and use the mailer services to email them out. So, depending on which BroadcasterInterface implementation is used to broadcast, a different implementation of BroadcastInterface is used by client code.

    The Single Responsibility Principle seems to suggest that I should have a separate BroadcastFactory object for creating BroadcastInterface objects, depending on which BroadcasterInterface implementation is used, since creating the BroadcastInterface object is a different responsibility from broadcasting them. But the class used for creating BroadcastInterface objects depends on which implementation of BroadcasterInterface is used to broadcast them. Because the knowledge of which method is used to send the broadcasts should only be configured once, I think the BroadcasterInterface object should be responsible for providing new BroadcastInterface objects.

    Does the responsibility of "creating and broadcasting objects that implement the BroadcastInterface interface" violate the Single Responsibility Principle? (Because the contact token for sending the broadcast out to the users will differ depending on how it is broadcast, I need different broadcast classes - though client code will not be able to tell the difference.)
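
    A minimal sketch of the arrangement being described, transposed to C# for illustration (the names follow the ones in the question): each broadcaster doubles as the factory for its own broadcast type, so client code configures the delivery mechanism exactly once and never learns which concrete classes are in play.

        using System.Collections.Generic;

        // The broadcast: a message plus the contact tokens it will be delivered to.
        public interface IBroadcast
        {
            string Subject { get; set; }
            string Body { get; set; }
            void AddRecipientRole(string role);      // resolves the role to contact tokens
            IEnumerable<string> GetContactTokens();
        }

        // The broadcaster acts as the factory for its own broadcast type, so the
        // choice of delivery mechanism lives in a single place.
        public interface IBroadcaster
        {
            IBroadcast CreateBroadcast();
            void Send(IBroadcast broadcast);
        }

        // Client code depends only on the interfaces:
        //   IBroadcast b = broadcaster.CreateBroadcast();
        //   b.Subject = "Maintenance window";
        //   b.AddRecipientRole("editors");
        //   broadcaster.Send(b);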

    Read the article

  • Web application and remote storage of files

    - by Matt
    I have a web application that can store lots and lots of files on the server, i.e. users upload data to it. The files are stored below a particular storage path. The web host will be an IBM xSeries 345. However, the disks are really expensive, so we would like to put the files onto a less expensive server. Now here is the question: should I NFS-mount a path on the storage server from the IBM server, or should I write some scripts to upload the files to the storage server instead? Both the storage server and the web host are on the same network. Only the web server is visible to the world. Is NFS performance suitable for an expected low-to-moderately loaded server?

    Read the article

  • Hosting several HTTP servers on single domain name

    - by Nakilon
    Several people have got a single domain name, server.company.com, for a server where they are now supposed to host their infrastructure or temporary projects, written in different ways and even in different programming languages. How do they divide the domain?

    1. Split into subdomains: john.server.company.com, kate.server.company.com, etc. This would need a lot of the admins' assistance, time, etc. - there would be no way for John and Kate to do it themselves.

    2. Split into URL namespaces: server.company.com/john/, server.company.com/kate/, etc.
       Pro: they can now make a single welcome page at the root with any additional info (if they need it).
       Con: each server would need to know its namespace string constant, and hrefs like / would need patching.

    3. Split into ports: server.company.com:8080, server.company.com:8081, etc., and make a single :80 welcome page.
       Pro: they can still make a single welcome page at :80.
       Con: ???

    I would like to know more pros and cons for solutions 2 and 3.

    Read the article
