Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

  • Mounting a Mail Store that is in a Recovery Storage Group on Exchange 2003

    - by Kyle Brandt
    If I have a production server with the Mail Store Foo in both the storage group companyName and the Recovery Storage Group, is it okay to mount Foo in the RSG while it is mounted in companyName, so I can extract some mailboxes from the recovery storage group? Basically, I am wondering whether it is okay to mount it in both the production and recovery storage groups while the mail server and that particular mail store are in production. Reference: "Once an RSG is restored into and mounted up you can connect to it with ExMerge and read out mailboxes into PST files for merging back into a 'live' store" -- http://serverfault.com/questions/49728/test-restore-of-exchange-dbs-with-the-ms-exchange-plugin-of-netbackup-6

    Read the article

  • List of all states from COMPOSITE_INSTANCE, CUBE_INSTANCE, DLV_MESSAGE tables

    - by Deepak Arora
    In many of my engagements I get asked repeatedly about the states of the composites in 11g and how to decipher them, especially when we are troubleshooting issues around purging. I have compiled a list of all the states from the COMPOSITE_INSTANCE, CUBE_INSTANCE, and DLV_MESSAGE tables. These are the primary tables used by BPEL composites, and their rows are correlated through the ECID.

    COMPOSITE_INSTANCE states:

      State  Description
      0      Running
      1      Completed
      2      Running with faults
      3      Completed with faults
      4      Running with recovery required
      5      Completed with recovery required
      6      Running with faults and recovery required
      7      Completed with faults and recovery required
      8      Running with suspended
      9      Completed with suspended
      10     Running with faults and suspended
      11     Completed with faults and suspended
      12     Running with recovery required and suspended
      13     Completed with recovery required and suspended
      14     Running with faults, recovery required, and suspended
      15     Completed with faults, recovery required, and suspended
      16     Running with terminated
      17     Completed with terminated
      18     Running with faults and terminated
      19     Completed with faults and terminated
      20     Running with recovery required and terminated
      21     Completed with recovery required and terminated
      22     Running with faults, recovery required, and terminated
      23     Completed with faults, recovery required, and terminated
      24     Running with suspended and terminated
      25     Completed with suspended and terminated
      26     Running with faulted, suspended, and terminated
      27     Completed with faulted, suspended, and terminated
      28     Running with recovery required, suspended, and terminated
      29     Completed with recovery required, suspended, and terminated
      30     Running with faulted, recovery required, suspended, and terminated
      31     Completed with faulted, recovery required, suspended, and terminated
      32     Unknown
      64     -

    CUBE_INSTANCE states:

      State  Description
      0      STATE_INITIATED
      1      STATE_OPEN_RUNNING
      2      STATE_OPEN_SUSPENDED
      3      STATE_OPEN_FAULTED
      4      STATE_CLOSED_PENDING_CANCEL
      5      STATE_CLOSED_COMPLETED
      6      STATE_CLOSED_FAULTED
      7      STATE_CLOSED_CANCELLED
      8      STATE_CLOSED_ABORTED
      9      STATE_CLOSED_STALE
      10     STATE_CLOSED_ROLLED_BACK

    DLV_MESSAGE states:

      State  Description
      0      STATE_UNRESOLVED
      1      STATE_RESOLVED
      2      STATE_HANDLED
      3      STATE_CANCELLED
      4      STATE_MAX_RECOVERED

    In 11g the INVOKE_MESSAGES table no longer exists, so to distinguish between a new message (Invoke) and a callback (DLV) there is a DLV_TYPE column that defines the type of message:

      DLV_TYPE  Description
      1         Invoke Message
      2         DLV Message

    MEDIATOR_INSTANCE states:

      State          Description
      0              No faults, but there still might be running instances
      1              At least one case is aborted by user
      2              At least one case is faulted (non-recoverable)
      3              At least one case is faulted and one case is aborted
      4              At least one case is in recovery required state
      5              At least one case is in recovery required state and at least one is aborted
      6              At least one case is in recovery required state and at least one is faulted
      7              At least one case is in recovery required state, one faulted and one aborted
      >= 8 and < 16  Running
      >= 16          Stale

    In my next blog posting I will walk through the lifecycle of a BPEL process using the above states for the following use cases: a new BPEL process (initial Receive activity), and a callback BPEL process (mid-level Receive activity). As always, comments and questions are welcome! Deepak
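
    For quick triage, a query along these lines can decode the raw state column while investigating purge candidates. This is my own sketch, not from the original post: it assumes the usual 11g SOAINFRA schema and that COMPOSITE_INSTANCE exposes STATE, ECID, and CREATED_TIME columns as described above.

      -- Label the most common COMPOSITE_INSTANCE states (extend the CASE
      -- from the full table above as needed)
      SELECT ci.ecid,
             ci.state,
             CASE ci.state
               WHEN 0  THEN 'Running'
               WHEN 1  THEN 'Completed'
               WHEN 2  THEN 'Running with faults'
               WHEN 3  THEN 'Completed with faults'
               WHEN 4  THEN 'Running with recovery required'
               WHEN 32 THEN 'Unknown'
               ELSE 'Other (see state table)'
             END AS state_label
        FROM composite_instance ci
       WHERE ci.state IN (4, 6, 12, 14)  -- 'running with recovery required' variants
       ORDER BY ci.created_time;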

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks:

      D0: 500GB (System, Boot)
      D1: 1TB
      D2: 500GB
      D3: 250GB

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

      D0 P1: 100MB: System-Reserved Boot
      D0 P2: 50GB: Virtual machine VMDKs for OS
      D0 P3: 350GB: Data
      D1 P1: 100MB: System-Reserved Boot
      D1 P2: 50GB: Virtual machine VMDKs for OS
      D1 P3: 800GB: Data
      D2 P1: 450GB: Data
      D3 P1: 200GB: Data

    This will result in a mirrored boot partition, a mirrored operating system, mirrored virtual-machine OS disks, and four partitions for data. In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

      SYSTEM (100MB): the small boot partition created by the Windows 7 installer (RAID-1)
      HOST (50GB): the Windows 7 partition (RAID-1)
      GUESTS (50GB): virtual machine guest VMDKs (RAID-1)
      VG1 (900GB): volume group consisting of a RAID-5 array and two RAID-1 arrays
      VG2 (300GB): volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for data that is not critical and is easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • Dependency analysis from C# code through to database tables/columns

    - by fpdave
    I'm looking for a tool to do system-wide dependency analysis across C# code and SQL Server databases. It's looking like the only tool available that does this might be CAST (CAST Software), which is expensive, and it does lots more besides that I don't really need. C# code through to database column dependency would be hugely useful for many reasons, including:

      - determining the effects of database changes throughout the system
      - seeing hot spots in the database schema
      - finding dead stored procedures/tables/etc.
      - understanding the existing code base

    Does anyone know of any such tools?
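
    In the absence of such a tool, the database half of the chain can be approximated from SQL Server's own metadata. A sketch of the idea (my own, and only a partial answer: it sees references made inside the database, not from C# code); sys.sql_expression_dependencies requires SQL Server 2008+, and 'Customers' is a hypothetical table name:

      -- Which database objects (procs, views, functions) reference a given table?
      SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
             OBJECT_NAME(d.referencing_id)        AS referencing_object,
             d.referenced_schema_name,
             d.referenced_entity_name
      FROM   sys.sql_expression_dependencies AS d
      WHERE  d.referenced_entity_name = 'Customers';  -- hypothetical table name

    Pairing this with a plain-text scan of the C# code for stored procedure names gives a crude end-to-end picture, though nothing like what CAST produces.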

    Read the article

  • DB2 Integrity Checks and Exception Tables

    - by imthefirestartr
    I am working on planning a migration of a DB2 8.1 database from a horrible IBM encoding to UTF-8 to support further languages, etc. I am encountering an issue that I am stuck on. A few notes on this migration: we are using db2move to export and load the data, and db2look to get the details of the database (tablespaces, tables, keys, etc.). We found the loading process worked nicely with db2move import; however, the data takes 7 hours to load, and this was unacceptable downtime for when we actually complete the conversion on the main database. We are now using db2move load, which is much faster as it seems to simply throw the data in without integrity checks. Which leads to my current issue. After completing the db2move load process, several tables are in a check-pending state and require integrity checks. Integrity checks are done via the following:

      SET INTEGRITY FOR <schema>.<table> IMMEDIATE CHECKED

    This works for most tables; however, some tables give an error:

      DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL3603N Check data processing through the SET INTEGRITY statement has found integrity violation involving a constraint with name "blah.SQL120124110232400". SQLSTATE=23514

    The internets tell me that the solution to this issue is to create an exception table based on the actual table and tell the SET INTEGRITY command to send any exceptions to that table (as below):

      db2 create table blah_EXCEPTION like blah
      db2 SET INTEGRITY FOR blah IMMEDIATE CHECKED FOR EXCEPTION IN blah USE blah_EXCEPTION

    NOW, here is the specific issue I am having! The above forces all the rows with issues into the specified exception table. Well, that's just super, buuuuuut I cannot lose data in this conversion; it's simply unacceptable. The internets and IBM have only a vague description of sending the violations to the exception tables and then "dealing with the data" that ends up in them. Unfortunately, I am not clear on what this means, and I was hoping that some wise individual knows and could help me out and let me know how I can retrieve this data from these tables and place it back in the original/proper table rather than leave it in these exception tables. Let me know if you have any questions. Thanks!
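
    For what it's worth, the usual shape of "dealing with the data" is: repair the offending values while they sit in the exception table, then insert the rows back into the base table. A sketch under stated assumptions: the exception table was created LIKE the base table (so SELECT * lines up column for column), and PARENT / PARENT_ID are hypothetical names standing in for whatever the constraint "blah.SQL120124110232400" actually enforces.

      -- 1. Create the exception table and route violations into it
      CREATE TABLE blah_exception LIKE blah;
      SET INTEGRITY FOR blah IMMEDIATE CHECKED
          FOR EXCEPTION IN blah USE blah_exception;

      -- 2. Repair the violating rows in place (hypothetical fix for an
      --    orphaned foreign key; substitute whatever your constraint needs)
      UPDATE blah_exception
         SET parent_id = NULL
       WHERE parent_id NOT IN (SELECT id FROM parent);

      -- 3. Move the repaired rows back and clean up
      INSERT INTO blah SELECT * FROM blah_exception;
      DROP TABLE blah_exception;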

    Read the article

  • Windows Azure: Caching

    - by xamlnotes
    I was poking around today and found this great article on caching: http://www.cloudcomputingdevelopment.net/cache-management-with-windows-azure/ Caching is a great way to boost application performance and keep down the overhead on a database or file system. It's also great when you have, say, three web roles (as shown in the article's Figure 2) that can share the same cache: if one of the roles goes offline, the cache is still there and can be used. You can change out your ASP.NET caching to use this pretty easily. It's pretty cool. There's a sample mentioned in the article that shows how to use this. You can download the cache here.

    Read the article

  • Comparing Table Variables with Temporary Tables

    This article brings a comparison of temporary tables with table variables from SQL Server author Wayne Sheffield. It includes an in-depth look at the differences between them.
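
    For readers who want the thirty-second version before the full article, a minimal T-SQL sketch of the two constructs side by side (names are illustrative):

      -- Table variable: batch-scoped, no statistics, cleaned up automatically
      DECLARE @tv TABLE (id INT PRIMARY KEY, name VARCHAR(50));
      INSERT INTO @tv VALUES (1, 'alpha');
      SELECT * FROM @tv;

      -- Temporary table: session-scoped, has statistics, visible to nested procs
      CREATE TABLE #tt (id INT PRIMARY KEY, name VARCHAR(50));
      INSERT INTO #tt VALUES (1, 'alpha');
      SELECT * FROM #tt;
      DROP TABLE #tt;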

    Read the article

  • Installing the Windows Azure Tools for Visual Studio March 2011 and SDK 1.4

    - by Enrique Lima
    Coming from the joys and new features that SDK version 1.3 gave us back in November/December, we are now at the doors of another update: version 1.4. To get it, go to the Windows Azure website, then click on the Develop menu option. Once there, click on the Get Tools & SDK button. This will start the download and activate the Web Platform Installer; review the information it presents, click Install, and accept the EULA. Installation starts at that point, and when it completes you are finished. More to come on the changes this update addresses.

    Read the article

  • SQL SERVER – Resetting Identity Values for All Tables

    - by pinaldave
    Sometimes an email requesting help generates more questions than the motivation to answer them. Let us go over one such example. I have converted the complete email conversation to chat format for easy consumption. I almost got a headache after around 20 email exchanges. I am sure that if you read it, you will feel my pain.

      DBA: "I deleted all of the data from my database and now it contains the table structure only. However, when I tried to insert new data in my tables I noticed that my identity values start from the same number where they actually were before I deleted the data."
      Pinal: "How did you delete the data?"
      DBA: "Running DELETE in a loop."
      Pinal: "What was the need for that?"
      DBA: "It was my development server and I needed to repopulate the database."
      Pinal: "Oh, so why did you not use TRUNCATE, which would have reset the identity of your table to the original value when the data got deleted? This works only if you want your table to reset to the original value; if you want to set any other value, this may not work."
      DBA: (silence for 2 days)
      DBA: "I did not realize it. Meanwhile I regenerated every table's schema and dropped the table and re-created it."
      Pinal: "Oh no, that would be an extremely long and incorrect way. Very bad solution."
      DBA: "I understand. Should I just take a backup of the database before I insert the data, and when I need it, use the original backup to restore the database? This way I will have identity beginning with 1."
      Pinal: "This is going totally downhill. It is wrong to do so on multiple levels. Did you even read my earlier email about TRUNCATE?"
      DBA: "Yeah. I found it in the spam folder."
      Pinal: (I decided to stay silent)
      DBA: (After 2 days) "Can you provide me a script to reseed the identity for all of my tables to the value 1, without asking further questions?"
      Pinal:

        USE DATABASE;
        EXEC sp_MSForEachTable '
        IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1
        DBCC CHECKIDENT (''?'', RESEED, 1)'
        GO

    Our conversation ended here. If you have directly jumped to this statement, I encourage you to read the conversation one time. There is a difference between reseeding the identity value to 1 and reseeding it to the original value – I will write another blog post on this subject in the future. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
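
    As a footnote to the script, the difference Pinal is teasing for the follow-up post can be demonstrated in a few lines: RESEED sets the current identity value, so on a table that has already held rows the next insert gets that value plus one. A quick repro of my own (not from the original post):

      CREATE TABLE dbo.Demo (id INT IDENTITY(1,1), val VARCHAR(10));
      INSERT INTO dbo.Demo (val) VALUES ('a'), ('b');  -- ids 1 and 2
      DELETE FROM dbo.Demo;                            -- DELETE keeps the identity counter
      DBCC CHECKIDENT ('dbo.Demo', RESEED, 1);         -- current identity is now 1...
      INSERT INTO dbo.Demo (val) VALUES ('c');
      SELECT id, val FROM dbo.Demo;                    -- ...so this row gets id = 2
      DROP TABLE dbo.Demo;

    Reseeding to 0 (or using TRUNCATE TABLE) is what makes the next row start at 1.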

    Read the article

  • Web application and remote storage of files

    - by Matt
    I have a web application that can store lots and lots of files on the server, i.e. users upload data to it. The files are stored below a particular storage path. The web host will be an IBM xSeries 345. However, the disks are really expensive, so we would like to put the files onto a less expensive server. Now here is the question: should I NFS-mount a path on the storage server from the IBM server, or should I write some scripts to upload the files to the storage server instead? Both the storage server and the web host are on the same network; only the web server is visible to the world. Is NFS performance suitable for an expected low to moderately loaded server?

    Read the article

  • Visual Studio 2010 & Windows Azure Launch

    If you're involved in any capacity with software development, or want to understand more about cloud computing, this is a half-day event not to be missed. Come along to the official New Zealand launch of Visual Studio 2010 and Windows Azure. We've lined up two international experts, Sam Guckenheimer and David Chappell, to deliver our two keynote sessions. Plus, to mark the occasion, we're producing a very cool retro t-shirt for all attendees...

    Read the article

  • "Bad response to Storage command" when scheduling job with Bacula

    - by Joril
    I have a Bacula setup with 9 clients, and it's working happily. Today I had to add another client, so I went and copied+adapted the existing configuration files from another client, but when I schedule a job for the new client, I get these errors:

      20-Mar 17:50 tools-dir JobId 39: Start Backup JobId 39, Job=BackupPresenze2.2012-03-20_17.50.49_04
      20-Mar 17:50 tools-dir JobId 39: Using Device "FileStorage"
      20-Mar 17:50 presenze2-fd JobId 39: Fatal error: Failed to connect to Storage daemon: bacula.mylan.local:9103
      20-Mar 17:50 tools-dir JobId 39: Fatal error: Bad response to Storage command: wanted 2000 OK storage , got 2902 Bad storage

    From the client I can telnet to bacula.mylan.local:9103 just fine, and jobs for other clients work successfully... What could I check? (Server and client run Ubuntu 10.04, if it's relevant)

    Read the article

  • Windows Azure RoleEntryPoint Method Call Order

    - by kaleidoscope
    Worker Role call order:

      1. The WaWorkerHost process is started.
      2. The worker role assembly is loaded and searched for a class that derives from RoleEntryPoint.
      3. This class is instantiated.
      4. RoleEntryPoint.OnStart() is called.
      5. RoleEntryPoint.Run() is called.
      6. If the RoleEntryPoint.Run() method exits, the RoleEntryPoint.OnStop() method is called.
      7. The WaWorkerHost process is stopped. The role will recycle and start up again.

    Web Role call order:

      1. The WaWebHost process is started.
      2. Hostable Web Core is activated.
      3. The web role assembly is loaded and RoleEntryPoint.OnStart() is called.
      4. Global.Application_Start() is called.
      5. The web application runs.
      6. Global.Application_End() is called.
      7. RoleEntryPoint.OnStop() is called.
      8. Hostable Web Core is deactivated.
      9. The WaWebHost process is stopped.

    For further reference: http://blogs.msdn.com/jnak/archive/2010/02/11/windows-azure-roleentrypoint-method-call-order.aspx

    Tinu, O

    Read the article

  • Look-up Tables in SQL

    Lookup tables can be a force for good in a relational database. Whereas the 'One True Lookup Table' remains a classic of bad database design, an auxiliary table that holds static data, and is used to look up values, still has powerful magic. Joe Celko explains.
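
    By way of contrast with the One True Lookup Table, a minimal sketch of the kind of single-purpose lookup table the article endorses (table and column names are my own, purely illustrative):

      CREATE TABLE OrderStatus (
          status_code CHAR(3)     NOT NULL PRIMARY KEY,
          description VARCHAR(50) NOT NULL UNIQUE
      );

      INSERT INTO OrderStatus VALUES ('NEW', 'Newly placed'),
                                     ('SHP', 'Shipped'),
                                     ('CAN', 'Cancelled');

      CREATE TABLE Orders (
          order_id    INT     NOT NULL PRIMARY KEY,
          status_code CHAR(3) NOT NULL
              REFERENCES OrderStatus (status_code)  -- the lookup table constrains the data
      );

    One narrow table per domain of values; the foreign key, not application code, keeps the data honest.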

    Read the article

  • Alter Index All Tables

    - by Derek Dieter
    This script comes in handy when you need to rebuild all the indexes in a database. It will only work on SQL Server 2005+. It utilizes the ALL keyword in the ALTER INDEX statement to rebuild all the indexes for a particular table. The script retrieves all base tables and stores [...]
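
    The statement at the heart of such a script is simple. A sketch of the per-table form plus a set-based loop over all base tables (my own illustration, using SQL Server 2008+ syntax for the inline DECLARE; dbo.MyTable is a placeholder):

      -- Rebuild every index on a single table
      ALTER INDEX ALL ON dbo.MyTable REBUILD;

      -- Generate and run the statement for every base table
      DECLARE @sql NVARCHAR(MAX) = N'';
      SELECT @sql += N'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + N'.'
                   + QUOTENAME(t.name) + N' REBUILD;' + CHAR(10)
      FROM   sys.tables AS t
             JOIN sys.schemas AS s ON s.schema_id = t.schema_id;
      EXEC   sys.sp_executesql @sql;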

    Read the article

  • Dev Connections Azure Tutorial

    I am more than a little tardy with this blog post but the link for the tutorial code can be found here: http://www.dasblonde.net/downloads/windowsazureessentialslaunch042010.zip If you had already downloaded the code from the link specified in my tutorial slides, that link (and this one) are both updated with some new stuff. If you attended my similar tutorial in Norway, there are updates to the scripts here that you might be interested in. I created some PowerShell scripts to delete all Windows Azure...

    Read the article

  • Microsoft Azure Diagnostics Part 1: Introduction

    Having a well thought-out plan for diagnostic data is important for on-premises applications, but it is arguably more important for distributed, highly scalable cloud applications. Michael Collier has provided a clear introduction to Microsoft Azure Diagnostics, including the Diagnostics Agent and how to extract the data.

    Read the article

  • Microsoft Sync. Framework with Azure on iOS

    - by Richard Jones
    A bit of a revelation this evening. I discovered something obvious, but missing from my understanding of the brilliant iOS example that ships with the Sync Framework 4.0 CTP. It seems that on the server side, if a record is edited, only the fields that were modified get sent down to your device (in my case an iPad), and correctly so. I was previously just blindly assuming that I'd get all fields down. I modified my Xcode population code (based on the iOS sample) as follows:

      + (void)populateQCItems:(id)dict withMetadata:(id)metadata withContext:(NSManagedObjectContext *)context
      {
          QCItems *item = (QCItems *)[Utils populateOfflineEntity:dict withMetadata:metadata withContext:context];
          if (item != nil) // modify new or existing live item
          {
              if ([dict valueForKey:@"Identifier"]) // new bit
                  item.Identifier = [dict valueForKey:@"Identifier"];
              if ([dict valueForKey:@"InspectionTypeID"]) // new bit
                  item.InspectionTypeID = [dict valueForKey:@"InspectionTypeID"];
              [item logEntity];
          }
      }

    I hope this helps someone else, as I learnt this the hard way. Technorati Tags: Xcode, iOS, Azure, Sync Framework, Cloud

    Read the article

  • How can I set up a dual-site Storage Daemon in Bacula (mirror the backup)

    - by Andy
    On Site A, I have successfully set up a Bacula director on one host, several File Daemons on the hosts I want to back up, and finally one Storage Daemon where the backup is actually stored. If disaster struck Site A, I want a second Storage Daemon on another site, Site B. The Filesets, Director, etc. would be the same, except that jobs would be stored on the other Storage Daemon as well. Are there any best practices for this?

    Read the article

  • Temporary Tables in Oracle and SQL Server

    Jonathan Lewis (Oracle Ace Director, OakTable Network) and Grant Fritchey (Microsoft SQL Server MVP) will host a live discussion on Oracle and SQL Server, this time in relation to temporary tables.

    Read the article

  • Webinar: Temporary Tables in Oracle and SQL Server

    Once again Jonathan Lewis (Oracle Ace Director, OakTable Network) and Grant Fritchey (Microsoft SQL Server MVP) will host a live discussion on Oracle and SQL Server, this time in relation to temporary tables. Will they agree on some common ground? Or will it be an out and out argument? Either way, be prepared for a lively exchange that will not only entertain, but will teach you key concepts on Oracle and SQL Server.
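
    For orientation before the webinar, the two products spell the concept quite differently; a minimal side-by-side sketch (object names are illustrative):

      -- SQL Server: created on the fly, private to the session, dropped with it
      CREATE TABLE #work (id INT, payload VARCHAR(100));
      INSERT INTO #work VALUES (1, 'row');
      DROP TABLE #work;

      -- Oracle: the definition is permanent; only the rows are session-private
      CREATE GLOBAL TEMPORARY TABLE work_gtt (id NUMBER, payload VARCHAR2(100))
          ON COMMIT PRESERVE ROWS;
      INSERT INTO work_gtt VALUES (1, 'row');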

    Read the article
