Search Results

Search found 45875 results on 1835 pages for 'george entenman name'.


  • Adding attachments to HumanTasks *beforehand*

    - by ccasares
    For a demo I'm preparing along with a partner, we need to add some attachments to a HumanTask beforehand, that is, the attachment must already be associated with the Task by the time the user opens its Form. How do we achieve this? Indeed it's quite simple and just a matter of some mappings to the Task's input execData structure. Oracle BPM supports "default" attachments (which use BPM tables) or UCM-based ones. The way to insert attachments is pretty similar for both methods.

    With default attachments

    When using default attachments, we first need to have the attachment payload as part of the BPM process, that is, it must be contained in a variable. Normally the attachment content is binary, so we'll first need to convert it to a base64 string (not covered in this blog entry; see the short sketch at the end of this post). What we need to do is just map the following execData parameters as part of the input of the HumanTask:

    execData.attachment[n].content            <-- the base64 payload data
    execData.attachment[n].mimeType           <-- depends on your attachment (e.g.: "application/pdf")
    execData.attachment[n].name               <-- attachment name (just the name you want to use; it does not need to be the original filename)
    execData.attachment[n].attachmentScope    <-- BPM or TASK (depending on your needs)
    execData.attachment[n].storageType        <-- TASK
    execData.attachment[n].doesBelongToParent <-- false (not sure if this one is really needed, but it definitely doesn't hurt)
    execData.attachment[n].updatedBy          <-- username who is attaching it
    execData.attachment[n].updatedDate        <-- dateTime of when this attachment is attached

    Bear in mind that the attachment structure is a repeating one, so if you need to add more than one attachment you'll need to use XSLT mapping. If not, the Assign mapper automatically adds [1] for the iteration.

    With UCM-based attachments

    With UCM-based attachments the procedure is basically the same: we'll need to map some extra fields and leave others unmapped. The tricky part with UCM-based attachments is what we need to know beforehand about the attachment itself. Of course we don't need to have the payload, but we do need a few pieces of information about the document, which must already be checked in to UCM. First, let's see the mappings:

    execData.attachment[n].mimeType                 <-- Document's dFormat attribute (1)
    execData.attachment[n].name                     <-- attachment name (just the name you want to use; it does not need to be the original filename)
    execData.attachment[n].attachmentScope          <-- BPM or TASK (depending on your needs)
    execData.attachment[n].storageType              <-- UCM
    execData.attachment[n].doesBelongToParent       <-- false (not sure if this one is really needed, but it definitely doesn't hurt)
    execData.attachment[n].updatedBy                <-- username who is attaching it
    execData.attachment[n].updatedDate              <-- dateTime of when this attachment is attached
    execData.attachment[n].uri                      <-- "ecm://<dID>" where dID is the document's dID attribute (2)
    execData.attachment[n].ucmDocType               <-- Document's dDocType attribute (3)
    execData.attachment[n].securityGroup            <-- Document's dSecurityGroup attribute (4)
    execData.attachment[n].revision                 <-- Document's dRevisionID attribute (5)
    execData.attachment[n].ucmMetadataItem[1].name  <-- "DocUrl"
    execData.attachment[n].ucmMetadataItem[1].type  <-- STRING
    execData.attachment[n].ucmMetadataItem[1].value <-- Document's url attribute (6)

    Where do we get the values for the numbered fields? In my case I get them from a Search call to UCM (not covered in this blog entry). As I mentioned above, we must know which UCM document we're going to attach. We may know its ID, its name... whatever we need to uniquely identify it when calling the IDC Search method. This method returns ALL the info we need to fill in the fields labeled with a number above.

    The only tricky one is (6). The UCM Search service returns the url attribute as a context root without hostname:port, e.g.: /cs/groups/public/documents/document/dgvs/mdaw/~edisp/ccasareswcptel000239.pdf. However, we do need to include the fully qualified URL when mapping (6). Where do we get the http://<hostname>:<port> value? Honestly, I have no clue. What I usually do is use a BPM property that can always be modified at runtime if needed. There are some other fields that might be needed in the execData.attachment structure, like account (if UCM is using Accounts). But for demos I've never needed to use them, so I'm not sure whether they're necessary or not. Feel free to add some comments to this entry if you know ;-)

    That's all folks. Should you need help with the UCM Search service, let me know and I can write a quick entry on that topic.
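
    Since the base64 conversion and the fully qualified DocUrl are the two pieces left out above, here is a minimal Python sketch (purely illustrative, outside of BPM/XSLT) of how those values could be prepared before mapping them into execData.attachment[n]. The file path, host name and port are hypothetical examples.

        # Illustrative only (not part of Oracle BPM): prepare the values described above.
        import base64
        from datetime import datetime, timezone

        def default_attachment_fields(path, mime_type, user):
            """Build the field values for a 'default' (BPM-table) attachment."""
            with open(path, "rb") as f:
                payload_b64 = base64.b64encode(f.read()).decode("ascii")  # binary -> base64 string
            return {
                "content": payload_b64,
                "mimeType": mime_type,                  # e.g. "application/pdf"
                "name": "MyAttachment",                 # any display name, not necessarily the filename
                "attachmentScope": "TASK",              # or "BPM"
                "storageType": "TASK",
                "doesBelongToParent": "false",
                "updatedBy": user,
                "updatedDate": datetime.now(timezone.utc).isoformat(),
            }

        def ucm_doc_url(context_root, host="myucmhost", port=16200):
            """UCM search returns only the context root; prepend the host:port yourself."""
            return "http://%s:%s%s" % (host, port, context_root)

        print(ucm_doc_url("/cs/groups/public/documents/document/dgvs/mdaw/~edisp/ccasareswcptel000239.pdf"))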

    Read the article

  • Running OpenStack Icehouse with ZFS Storage Appliance

    - by Ronen Kofman
    A couple of months ago Oracle announced support for the OpenStack Cinder plugin with the ZFS Storage Appliance (aka ZFSSA). With our recent release of the Icehouse tech preview, I thought it was a good opportunity to demonstrate the ZFSSA plugin working with Icehouse.

    One thing that helps a lot in getting started with ZFSSA is that it has a VirtualBox simulator. This simulator allows users to try out the appliance's features before getting a real box. Users can test the functionality and design an environment even before they have a real appliance, which makes the deployment process much more efficient. With OpenStack this is especially nice because having a simulator on the other end allows us to test the complete Cinder plugin and check the entire integration on a single server or even a laptop. Let's see how this works.

    Installing and Configuring the Simulator

    To get started we first need to download the simulator; the simulator is available here. Unzip it and it is ready to be imported into VirtualBox. If you do not already have VirtualBox installed you can download it from here according to your platform of choice. To import the simulator go to the VirtualBox console, File -> Import Appliance, navigate to the location of the simulator and import the virtual machine. When opening the virtual machine you will need to make the following changes:

    - Network – by default the network is "Host Only"; change it to "Bridged" so the VM can connect to the network and be accessible.
    - Memory (optional) – the VM comes with a default of 2560MB, which may be fine, but more memory cannot hurt; in my case I decided to give it 8192.
    - vCPU (optional) – by default the VM comes with 1 vCPU; I decided to change it to two, and you are welcome to do so too.

    And here is what the VM looks like: Start the VM; when the boot process completes we will need to change the root password, and then the simulator is running and ready to go. Now that the simulator is up and running we can access the simulated appliance using the URL https://<IP or DNS name>:215/; the IP is shown on the virtual machine console. At this stage we will need to configure the appliance; in my case I did not change any of the defaults (in other words, I pressed 'commit' several times) and the simulated appliance was configured and ready to go. We will need to enable REST access, otherwise Cinder will not be able to call the appliance; we do that in Configuration -> Services, and at the end of the page there is a 'REST' button – enable it. If you are a more advanced user you can set additional features in the appliance, but for the purpose of this demo this is sufficient. One final step is to create a pool: go to Configuration -> Storage and add a pool as shown below; the pool is named "default". The simulator is now running, configured and ready for action.

    Configuring Cinder

    Back to OpenStack. I have a multi node deployment which we created according to the "Getting Started with Oracle VM, Oracle Linux and OpenStack" guide using the Icehouse tech preview release. Now we need to install and configure the ZFSSA Cinder plugin using the README file. In short the steps are as follows:

    1. Copy the files from here to the control node and place them at: /usr/lib/python2.6/site-packages/cinder/volume/drivers/zfssa
    2.
Configure the plugin, editing /etc/cinder/cinder.conf # Driver to use for volume creation (string value) #volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver volume_driver=cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver zfssa_host = <HOST IP> zfssa_auth_user = root zfssa_auth_password = <ROOT PASSWORD> zfssa_pool = default zfssa_target_portal = <HOST IP>:3260 zfssa_project = test zfssa_initiator_group = default zfssa_target_interfaces = e1000g0 3. Restart the cinder-volume service: service openstack-cinder-volume restart 4. Look into the log file, this will tell us if everything works well so far. If you see any errors fix them before continuing. 5. Install iscsi-initiator-utils package, this is important since the plugin uses iscsi commands from this package: yum install -y iscsi-initiator-utils The installation and configuration are very simple, we do not need to have a “project” in the ZFSSA but we do need to define a pool. Creating and Using Volumes in OpenStack We are now ready to work, to get started lets create a volume in OpenStack and see it showing up on the simulator: #  cinder create 2 --display-name my-volume-1 +---------------------+--------------------------------------+ |       Property      |                Value                 | +---------------------+--------------------------------------+ |     attachments     |                  []                  | |  availability_zone  |                 nova                 | |       bootable      |                false                 | |      created_at     |      2014-08-12T04:24:37.806752      | | display_description |                 None                 | |     display_name    |             my-volume-1              | |      encrypted      |                False                 | |          id         | df67c447-9a36-4887-a8ff-74178d5d06ee | |       metadata      |                  {}                  | |         size        |                  2                   | |     snapshot_id     |                 None                 | |     source_volid    |                 None                 | |        status       |               creating               | |     volume_type     |                 None                 | +---------------------+--------------------------------------+ In the simulator: Extending the volume to 5G: # cinder extend df67c447-9a36-4887-a8ff-74178d5d06ee 5 In the simulator: Creating templates using Cinder Volumes By default OpenStack supports ephemeral storage where an image is copied into the run area during instance launch and deleted when the instance is terminated. With Cinder we can create persistent storage and launch instances from a Cinder volume. Booting from volume has several advantages, one of the main advantages of booting from volumes is speed. No matter how large the volume is the launch operation is immediate there is no copying of an image to a run areas, an operation which can take a long time when using ephemeral storage (depending on image size). In this deployment we have a Glance image of Oracle Linux 6.5, I would like to make it into a volume which I can boot from. 
When creating a volume from an image we actually “download” the image into the volume and making the volume bootable, this process can take some time depending on the image size, during the download we will see the following status: # cinder create --image-id 487a0731-599a-499e-b0e2-5d9b20201f0f --display-name ol65 2 # cinder list +--------------------------------------+-------------+--------------+------+-------------+ |                  ID                  |    Status   | Display Name | Size | Volume Type | … +--------------------------------------+-------------+--------------+------+------------- | df67c447-9a36-4887-a8ff-74178d5d06ee |  available  | my-volume-1  |  5   |     None    | … | f61702b6-4204-4f10-8bdf-7da792f15c28 | downloading |     ol65     |  2   |     None    | … +--------------------------------------+-------------+--------------+------+-------------+ After the download is complete we will see that the volume status changed to “available” and that the bootable state is “true”. We can use this new volume to boot an instance from or we can use it as a template. Cinder can create a volume from another volume and ZFSSA can replicate volumes instantly in the back end. The result is an efficient template model where users can spawn an instance from a “template” instantly even if the template is very large in size. Let’s try replicating the bootable volume with the Oracle Linux 6.5 on it creating additional 3 bootable volumes: # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-1 # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-2 # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-3 # cinder list +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ |                  ID                  |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | 9bfe0deb-b9c7-4d97-8522-1354fc533c26 | available | ol65-bootable-2 |  2   |     None    |   true   |             | | a311a855-6fb8-472d-b091-4d9703ef6b9a | available | ol65-bootable-1 |  2   |     None    |   true   |             | | df67c447-9a36-4887-a8ff-74178d5d06ee | available |   my-volume-1   |  5   |     None    |  false   |             | | e7fbd2eb-e726-452b-9a88-b5eee0736175 | available | ol65-bootable-3 |  2   |     None    |   true   |             | | f61702b6-4204-4f10-8bdf-7da792f15c28 | available |       ol65      |  2   |     None    |   true   |             | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ Note that the creation of those 3 volume was almost immediate, no need to download or copy, ZFSSA takes care of the volume copy for us. Start 3 instances: # nova boot --boot-volume a311a855-6fb8-472d-b091-4d9703ef6b9a --flavor m1.tiny ol65-instance-1 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca # nova boot --boot-volume 9bfe0deb-b9c7-4d97-8522-1354fc533c26 --flavor m1.tiny ol65-instance-2 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca # nova boot --boot-volume e7fbd2eb-e726-452b-9a88-b5eee0736175 --flavor m1.tiny ol65-instance-3 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca Instantly replicating volumes is a very powerful feature, especially for large templates. 
    The ZFSSA Cinder plugin allows us to take advantage of this feature of ZFSSA. By offloading some of the operations to the array, OpenStack creates a highly efficient environment where persistent volumes can be instantly created from a template. That's all for now. With this environment you can continue to test ZFSSA with OpenStack, and when you are ready for the real appliance the operations will look the same. @RonenKofman
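
    The template-cloning step above is also easy to script. Below is a small, hedged Python sketch that simply wraps the same cinder and nova CLI calls shown in the post to spawn several bootable volumes from one template volume; the volume and network IDs are the example values from this post, and it assumes the OpenStack credentials are already set in the environment.

        # Sketch only: clone N bootable volumes from a template volume using the same
        # "cinder create --source-volid" command shown above.
        import subprocess

        TEMPLATE_VOLUME = "f61702b6-4204-4f10-8bdf-7da792f15c28"   # the 'ol65' template volume
        NET_ID = "25b19746-3aea-4236-8193-4c6284e76eca"
        SIZE_GB = "2"

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        for i in range(1, 4):
            run(["cinder", "create", SIZE_GB,
                 "--source-volid", TEMPLATE_VOLUME,
                 "--display-name", "ol65-bootable-%d" % i])

        # Once the volumes show as 'available' in `cinder list`, instances can be booted
        # from them (a real script would look the volume IDs up first):
        # run(["nova", "boot", "--boot-volume", "<volume-id>", "--flavor", "m1.tiny",
        #      "ol65-instance-1", "--nic", "net-id=" + NET_ID])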

    Read the article

  • Connecting SceneBuilder edited FXML to Java code

    - by daniel
    Recently I had to answer several questions regarding how to connect an UI built with the JavaFX SceneBuilder 1.0 Developer Preview to Java Code. So I figured out that a short overview might be helpful. But first, let me state the obvious. What is FXML? To make it short, FXML is an XML based declaration format for JavaFX. JavaFX provides an FXML loader which will parse FXML files and from that construct a graph of Java object. It may sound complex when stated like that but it is actually quite simple. Here is an example of FXML file, which instantiate a StackPane and puts a Button inside it: -- <?xml version="1.0" encoding="UTF-8"?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <?import javafx.scene.paint.*?> <StackPane prefHeight="150.0" prefWidth="200.0" xmlns:fx="http://javafx.com/fxml"> <children> <Button mnemonicParsing="false" text="Button" /> </children> </StackPane> ... and here is the code I would have had to write if I had chosen to do the same thing programatically: import javafx.scene.control.*; import javafx.scene.layout.*; ... final Button button = new Button("Button"); button.setMnemonicParsing(false); final StackPane stackPane = new StackPane(); stackPane.setPrefWidth(200.0); stackPane.setPrefHeight(150.0); stacPane.getChildren().add(button); As you can see - FXML is rather simple to understand - as it is quite close to the JavaFX API. So OK FXML is simple, but why would I use it?Well, there are several answers to that - but my own favorite is: because you can make it with SceneBuilder. What is SceneBuilder? In short SceneBuilder is a layout tool that will let you graphically build JavaFX user interfaces by dragging and dropping JavaFX components from a library, and save it as an FXML file. SceneBuilder can also be used to load and modify JavaFX scenegraphs declared in FXML. Here is how I made the small FXML file above: Start the JavaFX SceneBuilder 1.0 Developer Preview In the Library on the left hand side, click on 'StackPane' and drag it on the content view (the white rectangle) In the Library, select a Button and drag it onto the StackPane on the content view. In the Hierarchy Panel on the left hand side - select the StackPane component, then invoke 'Edit > Trim To Selected' from the menubar That's it - you can now save, and you will obtain the small FXML file shown above. Of course this is only a trivial sample, made for the sake of the example - and SceneBuilder will let you create much more complex UIs. So, I have now an FXML file. But what do I do with it? How do I include it in my program? How do I write my main class? Loading an FXML file with JavaFX Well, that's the easy part - because the piece of code you need to write never changes. You can download and look at the SceneBuilder samples if you need to get convinced, but here is the short version: Create a Java class (let's call it 'Main.java') which extends javafx.application.Application In the same directory copy/save the FXML file you just created using SceneBuilder. Let's name it "simple.fxml" Now here is the Java code for the Main class, which simply loads the FXML file and puts it as root in a stage's scene. /* * Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved. 
*/ package simple; import java.util.logging.Level; import java.util.logging.Logger; import javafx.application.Application; import javafx.fxml.FXMLLoader; import javafx.scene.Scene; import javafx.scene.layout.StackPane; import javafx.stage.Stage; public class Main extends Application { /** * @param args the command line arguments */ public static void main(String[] args) { Application.launch(Main.class, (java.lang.String[])null); } @Override public void start(Stage primaryStage) { try { StackPane page = (StackPane) FXMLLoader.load(Main.class.getResource("simple.fxml")); Scene scene = new Scene(page); primaryStage.setScene(scene); primaryStage.setTitle("FXML is Simple"); primaryStage.show(); } catch (Exception ex) { Logger.getLogger(Main.class.getName()).log(Level.SEVERE, null, ex); } } } Great! Now I only have to use my favorite IDE to compile the class and run it. But... wait... what does it do? Well nothing. It just displays a button in the middle of a window. There's no logic attached to it. So how do we do that? How can I connect this button to my application logic? Here is how: Connection to code First let's define our application logic. Since this post is only intended to give a very brief overview - let's keep things simple. Let's say that the only thing I want to do is print a message on System.out when the user clicks on my button. To do that, I'll need to register an action handler with my button. And to do that, I'll need to somehow get a handle on my button. I'll need some kind of controller logic that will get my button and add my action handler to it. So how do I get a handle to my button and pass it to my controller? Once again - this is easy: I just need to write a controller class for my FXML. With each FXML file, it is possible to associate a controller class defined for that FXML. That controller class will make the link between the UI (the objects defined in the FXML) and the application logic. To each object defined in FXML we can associate an fx:id. The value of the id must be unique within the scope of the FXML, and is the name of an instance variable inside the controller class, in which the object will be injected. Since I want to have access to my button, I will need to add an fx:id to my button in FXML, and declare an @FXML variable in my controller class with the same name. In other words - I will need to add fx:id="myButton" to my button in FXML: -- <Button fx:id="myButton" mnemonicParsing="false" text="Button" /> and declare @FXML private Button myButton in my controller class @FXML private Button myButton; // value will be injected by the FXMLLoader Let's see how to do this. Add an fx:id to the Button object Load "simple.fxml" in SceneBuilder - if not already done In the hierarchy panel (bottom left), or directly on the content view, select the Button object. Open the Properties sections of the inspector (right panel) for the button object At the top of the section, you will see a text field labelled fx:id. Enter myButton in that field and validate. Associate a controller class with the FXML file Still in SceneBuilder, select the top root object (in our case, that's the StackPane), and open the Code section of the inspector (right hand side) At the top of the section you should see a text field labelled Controller Class. In the field, type simple.SimpleController. This is the name of the class we're going to create manually. 
If you save at this point, the FXML will look like this: -- <?xml version="1.0" encoding="UTF-8"?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <?import javafx.scene.paint.*?> <StackPane prefHeight="150.0" prefWidth="200.0" xmlns:fx="http://javafx.com/fxml" fx:controller="simple.SimpleController"> <children> <Button fx:id="myButton" mnemonicParsing="false" text="Button" /> </children> </StackPane> As you can see, the name of the controller class has been added to the root object: fx:controller="simple.SimpleController" Coding the controller class In your favorite IDE, create an empty SimpleController.java class. Now what does a controller class looks like? What should we put inside? Well - SceneBuilder will help you there: it will show you an example of controller skeleton tailored for your FXML. In the menu bar, invoke View > Show Sample Controller Skeleton. A popup appears, displaying a suggestion for the controller skeleton: copy the code displayed there, and paste it into your SimpleController.java: /** * Sample Skeleton for "simple.fxml" Controller Class * Use copy/paste to copy paste this code into your favorite IDE **/ package simple; import java.net.URL; import java.util.ResourceBundle; import javafx.fxml.FXML; import javafx.fxml.Initializable; import javafx.scene.control.Button; public class SimpleController implements Initializable { @FXML // fx:id="myButton" private Button myButton; // Value injected by FXMLLoader @Override // This method is called by the FXMLLoader when initialization is complete public void initialize(URL fxmlFileLocation, ResourceBundle resources) { assert myButton != null : "fx:id=\"myButton\" was not injected: check your FXML file 'simple.fxml'."; // initialize your logic here: all @FXML variables will have been injected } } Note that the code displayed by SceneBuilder is there only for educational purpose: SceneBuilder does not create and does not modify Java files. This is simply a hint of what you can use, given the fx:id present in your FXML file. You are free to copy all or part of the displayed code and paste it into your own Java class. Now at this point, there only remains to add our logic to the controller class. Quite easy: in the initialize method, I will register an action handler with my button: () { @Override public void handle(ActionEvent event) { System.out.println("That was easy, wasn't it?"); } }); ... -- ... // initialize your logic here: all @FXML variables will have been injected myButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { System.out.println("That was easy, wasn't it?"); } }); ... That's it - if you now compile everything in your IDE, and run your application, clicking on the button should print a message on the console! Summary What happens is that in Main.java, the FXMLLoader will load simple.fxml from the jar/classpath, as specified by 'FXMLLoader.load(Main.class.getResource("simple.fxml"))'. When loading simple.fxml, the loader will find the name of the controller class, as specified by 'fx:controller="simple.SimpleController"' in the FXML. Upon finding the name of the controller class, the loader will create an instance of that class, in which it will try to inject all the objects that have an fx:id in the FXML. Thus, after having created '<Button fx:id="myButton" ... 
/>', the FXMLLoader will inject the button instance into the '@FXML private Button myButton;' instance variable found on the controller instance. This is because The instance variable has an @FXML annotation, The name of the variable exactly matches the value of the fx:id Finally, when the whole FXML has been loaded, the FXMLLoader will call the controller's initialize method, and our code that registers an action handler with the button will be executed. For a complete example, take a look at the HelloWorld SceneBuilder sample. Also make sure to follow the SceneBuilder Get Started guide, which will guide you through a much more complete example. Of course, there are more elegant ways to set up an Event Handler using FXML and SceneBuilder. There are also many different ways to work with the FXMLLoader. But since it's starting to be very late here, I think it will have to wait for another post. I hope you have enjoyed the tour! --daniel

    Read the article

  • CreationName for SSIS 2008 and adding components programmatically

    If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSS 2005. We start with a very simple snippet for adding a component: // Add the Data Flow Task package.Executables.Add("STOCK:PipelineTask"); // Get the task host wrapper, and the Data Flow task TaskHost taskHost = package.Executables[0] as TaskHost; MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject; // Add OLE-DB source component - ** This is where we need the creation name ** IDTSComponentMetaData90 componentSource = dataFlowTask.ComponentMetaDataCollection.New(); componentSource.Name = "OLEDBSource"; componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2"; So as you can see the creation name for a OLE-DB Source is DTSAdapter.OLEDBSource.2. CreationName Reference  ADO NET Destination Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 ADO NET Source Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Aggregate DTSTransform.Aggregate.2 Audit DTSTransform.Lineage.2 Cache Transform DTSTransform.Cache.1 Character Map DTSTransform.CharacterMap.2 Checksum Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b Conditional Split DTSTransform.ConditionalSplit.2 Copy Column DTSTransform.CopyMap.2 Data Conversion DTSTransform.DataConvert.2 Data Mining Model Training MSMDPP.PXPipelineProcessDM.2 Data Mining Query MSMDPP.PXPipelineDMQuery.2 DataReader Destination Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Derived Column DTSTransform.DerivedColumn.2 Dimension Processing MSMDPP.PXPipelineProcessDimension.2 Excel Destination DTSAdapter.ExcelDestination.2 Excel Source DTSAdapter.ExcelSource.2 Export Column TxFileExtractor.Extractor.2 Flat File Destination DTSAdapter.FlatFileDestination.2 Flat File Source DTSAdapter.FlatFileSource.2 Fuzzy Grouping DTSTransform.GroupDups.2 Fuzzy Lookup DTSTransform.BestMatch.2 Import Column TxFileInserter.Inserter.2 Lookup DTSTransform.Lookup.2 Merge DTSTransform.Merge.2 Merge Join DTSTransform.MergeJoin.2 Multicast DTSTransform.Multicast.2 OLE DB Command DTSTransform.OLEDBCommand.2 OLE DB Destination DTSAdapter.OLEDBDestination.2 OLE DB Source DTSAdapter.OLEDBSource.2 Partition Processing MSMDPP.PXPipelineProcessPartition.2 Percentage Sampling DTSTransform.PctSampling.2 Performance Counters Source DataCollectorTransform.TxPerfCounters.1 Pivot DTSTransform.Pivot.2 Raw File Destination DTSAdapter.RawDestination.2 Raw File Source DTSAdapter.RawSource.2 Recordset Destination DTSAdapter.RecordsetDestination.2 RegexClean Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e Row Count DTSTransform.RowCount.2 Row Count Plus Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b Row Number Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, 
Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b Row Sampling DTSTransform.RowSampling.2 Script Component Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Slowly Changing Dimension DTSTransform.SCD.2 Sort DTSTransform.Sort.2 SQL Server Compact Destination Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 SQL Server Destination DTSAdapter.SQLServerDestination.2 Term Extraction DTSTransform.TermExtraction.2 Term Lookup DTSTransform.TermLookup.2 Trash Destination Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc TxTopQueries DataCollectorTransform.TxTopQueries.1 Union All DTSTransform.UnionAll.2 Unpivot DTSTransform.UnPivot.2 XML Source Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Here is a simple console program that can be used to enumerate the pipeline components installed on your machine, and dumps out a list of all components like that above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly. using System; using System.Diagnostics; using Microsoft.SqlServer.Dts.Runtime; public class Program { static void Main(string[] args) { Application application = new Application(); PipelineComponentInfos componentInfos = application.PipelineComponentInfos; foreach (PipelineComponentInfo componentInfo in componentInfos) { Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName); } Console.Read(); } }

    Read the article

  • Executing legacy MSBuild scripts in TFS 2010 Build

    - by Jakob Ehn
    When upgrading from TFS 2008 to TFS 2010, all builds are "upgraded" in the sense that a build definition with the same name is created, and it uses the UpgradeTemplate build process template to execute the build. This template basically just runs MSBuild on the existing TFSBuild.proj file. The build definition contains a property called ConfigurationFolderPath that points to the TFSBuild.proj file. So, existing builds will run just fine after upgrade. But what if you want to use the new workflow functionality in TFS 2010 Build, but still have a lot of MSBuild scripts that maybe call custom MSBuild tasks that you don't have the time to rewrite? Then one option is to keep these MSBuild scripts and call them from a TFS 2010 Build workflow. This can be done using the MSBuild workflow activity that is available in the toolbox in the Team Foundation Build Activities section. This activity wraps the call to MSBuild.exe and has the following parameters. Most of these properties are only relevant when actually compiling projects, for example C# project files. When calling custom MSBuild project files, you should focus on these properties (property – meaning – example):

    - CommandLineArguments – Use this to send in/override MSBuild properties in your project – "/p:MyProperty=SomeValue" or MSBuildArguments (this will let you define the arguments in the build definition or when queuing the build)
    - LogFile – Name of the log file where MSBuild will log the output – "MyBuild.log"
    - LogFileDropLocation – Location of the log file – BuildDetail.DropLocation + "\log"
    - Project – The project to execute – SourcesDirectory + "\BuildExtensions.targets"
    - ResponseFile – The name of the MSBuild response file – SourcesDirectory + "\BuildExtensions.rsp"
    - Targets – The target(s) to execute – New String() {"Target1", "Target2"}
    - Verbosity – Logging verbosity – Microsoft.TeamFoundation.Build.Workflow.BuildVerbosity.Normal

    Integrating with Team Build

    If your MSBuild scripts try to use Team Build tasks, they will most likely fail with the above approach. For example, the following MSBuild project file tries to add a build step using the BuildStep task:

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" />
          <Target Name="MyTarget">
            <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Name="MyBuildStep" Message="My build step executed" Status="Succeeded"></BuildStep>
          </Target>
        </Project>

    When executing this file using the MSBuild activity, calling the MyTarget target, it will fail with the following message:

        The "Microsoft.TeamFoundation.Build.Tasks.BuildStep" task could not be loaded from the assembly \PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll. Could not load file or assembly 'file:///D:\PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask.

    You can see that the path to the ProcessComponents.dll is incomplete. This is because in the Microsoft.TeamFoundation.Build.targets file the task is referenced using the $(TeamBuildRegPath) property. Also note that the task needs the TeamFoundationServerUrl and BuildUri properties.
    One solution here is to pass these properties in using the CommandLineArguments parameter. Here we pass in the parameters with the corresponding values from the current build. The build log shows that the build step has in fact been inserted. The problem, as you probably spotted, is that the build step is inserted at the top of the build log instead of next to the MSBuild activity call. This is because we are using a legacy team build task (BuildStep), and that is how these are handled in TFS 2010. You can see the same behaviour when running builds that use the UpgradeTemplate: custom build steps show up at the top of the build log.

    Read the article

  • The Microsoft Ajax Library and Visual Studio Beta 2

    - by Stephen Walther
    Visual Studio 2010 Beta 2 was released this week and one of the first things that I hope you notice is that it no longer contains the latest version of ASP.NET AJAX. What happened? Where did AJAX go? Just like Sting and The Police, just like Phil Collins and Genesis, just like Greg Page and the Wiggles, AJAX has gone out of band! We are starting a solo career. A Name Change First things first. In previous releases, our Ajax framework was named ASP.NET AJAX. We now have changed the name of the framework to the Microsoft Ajax Library. There are two reasons behind this name change. First, the members of the Ajax team got tired of explaining to everyone that our Ajax framework is not tied to the server-side ASP.NET framework. You can use the Microsoft Ajax Library with ASP.NET Web Forms, ASP.NET MVC, PHP, Ruby on RAILS, and even pure HTML applications. Our framework can be used as a client-only framework and having the word ASP.NET in our name was confusing people. Second, it was time to start spelling the word Ajax like everyone else. Notice that the name is the Microsoft Ajax Library and not the Microsoft AJAX library. Originally, Microsoft used upper case AJAX because AJAX originally was an acronym for Asynchronous JavaScript and XML. And, according to Strunk and Wagnell, acronyms should be all uppercase. However, Ajax is one of those words that have migrated from acronym status to “just a word” status. So whenever you hear one of your co-workers talk about ASP.NET AJAX, gently correct your co-worker and say “It is now called the Microsoft Ajax Library.” Why OOB? But why move out-of-band (OOB)? The short answer is that we have had approximately 6 preview releases of the Microsoft Ajax Library over the last year. That’s a lot. We pride ourselves on being agile. Client-side technology evolves quickly. We want to be able to get a preview version of the Microsoft Ajax Library out to our customers, get feedback, and make changes to the library quickly. Shipping the Microsoft Ajax Library out-of-band keeps us agile and enables us to continue to ship new versions of the library even after ASP.NET 4 ships. Showing Love for JavaScript Developers One area in which we have received a lot of feedback is around making the Microsoft Ajax Library easier to use for developers who are comfortable with JavaScript. We also wanted to make it easy for jQuery developers to take advantage of the innovative features of the Microsoft Ajax Library. To achieve these goals, we’ve added the following features to the Microsoft Ajax Library (these features are included in the latest preview release that you can download right now): A simplified imperative syntax – We wanted to make it brain-dead simple to create client-side Ajax controls when writing JavaScript. A client script loader – We wanted the Microsoft Ajax Library to load all of the scripts required by a component or control automatically. jQuery integration – We love the jQuery selector syntax. We wanted to make it easy for jQuery developers to use the Microsoft Ajax Library without changing their programming style. 
If you are interested in learning about these new features of the Microsoft Ajax Library, I recommend that you read the following blog post by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2009/10/15/announcing-microsoft-ajax-library-preview-6-and-the-microsoft-ajax-minifier.aspx Downloading the Latest Version of the Microsoft Ajax Library Currently, the best place to download the latest version of the Microsoft Ajax Library is directly from the ASP.NET CodePlex project: http://aspnet.codeplex.com/ As I write this, the current version is Preview 6. The next version is coming out at the PDC. Summary I’m really excited about the future of the Microsoft Ajax Library. Moving outside of the ASP.NET framework provides us the flexibility to remain agile and continue to innovate aggressively. The latest preview release of the Microsoft Ajax Library includes several major new features including a client script loader, jQuery integration, and a simplified client control creation syntax.

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Installation

    - by StuartBrierley
    As previously detailed, I have completed a single server installation of BizTalk Server 2009 Standard on my development laptop; a MacBook Pro Core2Duo running at 2.16GHz with 2GB of RAM. Following this I also posted on my use of the BizTalk Server Best Practices Analyser and how to configure the BizTalk SQL Server Jobs. All of which means that I should have some confidence that I have a decent working BizTalk Server 2009 environment.

    Next I thought that it would be a good idea to try and get some idea of how this setup performs by carrying out some baseline tests that can then be replicated on the test and live servers. The aim of this would be to allow confident predictions to be made of how any solutions developed on a single "server" installation may be expected to perform when deployed to these multi-server BizTalk Server 2009 Standard installations. The BizTalk Benchmark Wizard would seem to be the perfect tool for the job.

    The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. This utility should be used after BizTalk Server has been installed and before any solutions are deployed to the environment. This will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment: "The executed scenarios may or may not be relative to any realistic scenario, and is only intended for testing. The BizTalk Benchmark Wizard has been developed in relation to the BizTalk Server 2009 Scale Out Testing Study. More information about the study can be found here: http://msdn.microsoft.com/en-us/library/ee377068(BTS.10).aspx"

    After downloading and installing the wizard you will need to set up the Hosts, Instances and Adapter handlers. This is done by running a script file using "cscript", as detailed below. To do this you will need to open a command prompt window and navigate to the script folder; assuming the default installation location this would be C:\Program Files\Blogical\BizTalk Benchmark Wizard\Artefacts\BizTalk. In this folder you should find an InstallHosts.vbs file which can be executed using the following parameters:

    - NTGroupName – The name of the Windows NT group.
    - UserName – The name of the user account running the service instances.
    - Password – The password of the user account running the service instances.
    - Receive Host – The name of the server where you want to run the receive host instance.
    - Send Host – The name of the server where you want to run the send host instance.
    - Processing Host – The name of the server where you want to run the process host instance.
By default the script is set up for 64 bit hosts, so if you are running in 32 bit environment make sure that you change the following line in the script before continuing: from:   objHS.IsHost32BitOnly = False to:    objHS.IsHost32BitOnly = True If you have a single box installation, your script command might look like this: cscript InstallHosts.vbs "BizTalk Application Users" “\MyUser” “MyPassword” “BtsServer1” “BtsServer1” “BtsServer1” If you have a multi server installation, your script command might look like this: cscript InstallHosts.vbs "MyDomain\BizTalk Application Users" “MyDomain\MyUser” “MyPassword” “BtsServer1” “BtsServer2” “BtsServer2” Running this script will create: Three hosts (BBW_RxHost, BBW_TxHost and BBW_PxHost) Three host instances One send and one receive adapter handler for the WCF NetTcp adapter. You will then need to import the BizTalk MSI via the BizTalk Administration Console.  Open the BizTalk Administration Console, point to the “Applications” node and import the BizTalk Benchmark Wizard.msi found in the same folder as the script above. This will create a “BizTalk Benchmark Wizard” application along with all ports and orchestrations needed. To finish the installation you will need to run the BizTalk Benchmark Wizard.msi on all BizTalk servers to add the assemblies to the Global Assembly Cache (GAC). Next I will look at running the BizTalk Benchmark Wizard.
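
    If you need to run the host installation against more than one environment, the call is easy to parameterise. The following is a rough Python sketch (not part of the Benchmark Wizard itself) that just assembles and runs the same cscript InstallHosts.vbs command line shown above; the group, account and server names are the placeholder values from this post.

        # Sketch: build and run the InstallHosts.vbs command shown above for a given
        # environment. Values are placeholders - substitute your own group/user/servers.
        import subprocess

        SCRIPT_DIR = r"C:\Program Files\Blogical\BizTalk Benchmark Wizard\Artefacts\BizTalk"

        def install_hosts(nt_group, user, password, receive_host, send_host, processing_host):
            cmd = ["cscript", "InstallHosts.vbs",
                   nt_group, user, password,
                   receive_host, send_host, processing_host]
            subprocess.run(cmd, cwd=SCRIPT_DIR, check=True)

        # Single box installation (all three host instances on one server):
        install_hosts("BizTalk Application Users", r"MyDomain\MyUser", "MyPassword",
                      "BtsServer1", "BtsServer1", "BtsServer1")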

    Read the article

  • OWB 11gR2 - Find and Search Metadata in Designer

    - by David Allan
    Here are some tools and techniques for finding objects, specifically in the design repository. There are ways of navigating and collating objects that are useful for day to day development and build-time usage - this includes features out of the box and utilities constructed on top. There are a variety of techniques to navigate and find objects in the repository, the first 3 are out of the box, the 4th is an expert utility. Navigating by the tree, grouping by project and module - ok if you are aware of the exact module/folder that objects reside in. The structure panel is a useful way of finding parts of an object, especially when large rather than using the canvas. In large scale projects it helps to have accelerators (either find or collections below). Advanced find to search by name - 11gR2 included a find capability specifically for large scale projects. There were improvements in both the tree search and the object editors (including highlighting in mapping for example). So you can now do regular expression based search and quickly navigate to objects within a repository. Collections - logically organize your objects into virtual folders by shortcutting the actual objects. This is useful for a range of things since all the OWB services operate on collections too (export/import, validation, deployment). See the post here for new collection functionality in 11gR2. Reports for searching by type, updated on, updated by etc. Useful for activities such as periodic incremental actions (deploy all mappings changed in the past week). The report style view is useful since I can quickly see who changed what and when. You can see all the audit details for objects within each objects property inspector, but its useful to just get all objects changed today or example, all objects changed since my last build etc. This utility combines both UI extensions via experts and the public views on the repository. In the figure to the right you see the contextual option 'Object Search' which invokes the utility, you can see I have quite a number of modules within my project. Figure out all the potential objects which have been changed is not simple. The utility is an expert which provides this kind of search capability. The utility provides a report of the objects in the design repository which satisfy some filter criteria. The type of criteria includes; objects updated in the last n days optionally filter the objects updated by user filter the user by project and by type (table/mappings etc.) The search dialog appears with these options, you can multi-select the object types, so for example you can select TABLE and MAPPING. Its also possible to search across projects if need be. If you have multiple users using the repository you can define the OWB user name in the 'Updated by' property to restrict the report to just that user also. Finally there is a search name that will be used for some of the options such as building a collection - this name is used for the collection to be built. In the example I have done, I've just searched my project for all process flows and mappings that users have updated in the last 7 days. The results of the query are returned in a table containing the object names, types, full path and audit details. The columns are sort-able, you can sort the results by name, type, path etc. 
    One of the cool things here is that you can then perform operations on these objects - such as editing them, exporting a single selection or the entire results to MDL, creating a collection from the results (now you have a saved set of references in the repository, on which you can do deploy/export etc.), creating a deployment script from the results... or even adding in your own ideas! You can see from this that you can do bulk operations on sets of objects based on search results. So, for example, selecting the 'Build Collection' option creates a collection with all of the objects from my search; you can subsequently deploy/generate/maintain this collection of objects. Under the hood, the expert is just basic OMB commands from the product and the use of the public views on the design repository. You can see how easy it is to build up macro-like capabilities that will help you do day-to-day as well as build-time tasks on sets of objects.
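
    To make the filter criteria concrete, here is a tiny Python sketch (purely illustrative - the real expert uses OMB commands and the repository's public views, not an in-memory list) of the kind of filtering the utility performs over object audit records: updated within the last n days, optionally restricted by user and by object type. The sample records are made up.

        # Illustrative filter logic only.
        from datetime import datetime, timedelta

        objects = [
            # (name, type, path, updated_by, updated_on) - hypothetical sample records
            ("LOAD_SALES",  "MAPPING",      "MY_PROJECT/DW_MODULE", "DAVID", datetime(2011, 3, 1)),
            ("CUSTOMERS",   "TABLE",        "MY_PROJECT/DW_MODULE", "SCOTT", datetime(2011, 2, 20)),
            ("DAILY_BATCH", "PROCESS_FLOW", "MY_PROJECT/PF_MODULE", "DAVID", datetime(2011, 3, 2)),
        ]

        def search(records, days, types, updated_by=None, now=None):
            now = now or datetime.now()
            cutoff = now - timedelta(days=days)
            return [r for r in records
                    if r[4] >= cutoff
                    and r[1] in types
                    and (updated_by is None or r[3] == updated_by)]

        # e.g. everything changed in the last 7 days, mappings and process flows only
        for rec in search(objects, days=7, types={"MAPPING", "PROCESS_FLOW"},
                          now=datetime(2011, 3, 3)):
            print(rec[0], rec[1], rec[2])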

    Read the article

  • Juju stuck in "pending" state when using LXC

    - by Andre
    So I'm trying to get started with Juju, and tried to do this locally using LXC. I followed the instructions here: How do I configure juju for local usage? Unfortunately this doesn't seem to work for me. status shows the following: $ juju status machines: 0: agent-state: running dns-name: localhost instance-id: local instance-state: running services: mysql: charm: cs:precise/mysql-1 relations: db: - wordpress units: mysql/0: agent-state: pending machine: 0 public-address: null wordpress: charm: cs:precise/wordpress-0 exposed: true relations: db: - mysql units: wordpress/0: agent-state: pending machine: 0 open-ports: [] public-address: null 2012-05-10 14:09:38,155 INFO 'status' command finished successfully As you can see the agent-state is 'pending' and there is no public address where I'm able to access the newly created site. Am I missing something here? UPDATE: Tried destroying the environment an doing everything again (multiple times). This is the output for debug-log: ~$ juju debug-log 2012-05-11 08:50:23,790 INFO Enabling distributed debug log. 2012-05-11 08:50:23,806 INFO Tailing logs - Ctrl-C to stop. 2012-05-11 08:50:42,338 Machine:0: juju.agents.machine DEBUG: Units changed old:set([]) new:set(['mysql/0']) 2012-05-11 08:50:42,339 Machine:0: juju.agents.machine DEBUG: Starting service unit: mysql/0 ... 2012-05-11 08:50:42,459 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/mysql-1 to /home/andre/.juju/data/andre-local/charms 2012-05-11 08:50:42,620 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c54b6c> for mysql/0 in /home/andre/.juju/data/andre-local 2012-05-11 08:50:42,648 Machine:0: unit.deploy DEBUG: Starting service unit mysql/0... 2012-05-11 08:50:42,649 Machine:0: unit.deploy DEBUG: Creating master container... 2012-05-11 08:54:33,992 Machine:0: unit.deploy DEBUG: Created master container andre-local-0-template 2012-05-11 08:54:33,993 Machine:0: unit.deploy INFO: Creating container mysql-0... 2012-05-11 08:56:18,760 Machine:0: unit.deploy INFO: Container created for mysql/0 2012-05-11 08:56:19,466 Machine:0: unit.deploy DEBUG: Charm extracted into container 2012-05-11 08:56:19,569 Machine:0: unit.deploy DEBUG: Starting container... 2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started container for mysql/0 2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started service unit mysql/0 2012-05-11 08:56:23,012 Machine:0: juju.agents.machine DEBUG: Units changed old:set(['mysql/0']) new:set(['wordpress/0', 'mysql/0']) 2012-05-11 08:56:23,039 Machine:0: juju.agents.machine DEBUG: Starting service unit: wordpress/0 ... 2012-05-11 08:56:23,154 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/wordpress-0 to /home/andre/.juju/data/andre-local/charms 2012-05-11 08:56:23,396 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c519cc> for wordpress/0 in /home/andre/.juju/data/andre-local 2012-05-11 08:56:23,620 Machine:0: unit.deploy DEBUG: Starting service unit wordpress/0... 2012-05-11 08:56:23,621 Machine:0: unit.deploy INFO: Creating container wordpress-0... 2012-05-11 08:58:24,739 Machine:0: unit.deploy INFO: Container created for wordpress/0 2012-05-11 08:58:25,163 Machine:0: unit.deploy DEBUG: Charm extracted into container 2012-05-11 08:58:25,397 Machine:0: unit.deploy DEBUG: Starting container... 
2012-05-11 08:58:27,982 Machine:0: unit.deploy INFO: Started container for wordpress/0 2012-05-11 08:58:27,983 Machine:0: unit.deploy INFO: Started service unit wordpress/0 This is the result for the status command (with verbose flag): ~$ juju -v status 2012-05-11 08:51:53,464 DEBUG Initializing juju status runtime 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@662: Client environment:host.name=andre-ufo 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@669: Client environment:os.name=Linux 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-24-generic-pae 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@671: Client environment:os.version=#37-Ubuntu SMP Wed Apr 25 10:47:59 UTC 2012 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@679: Client environment:user.name=andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@687: Client environment:user.home=/home/andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@699: Client environment:user.dir=/home/andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=192.168.122.1:41779 sessionTimeout=10000 watcher=0xb7780620 sessionId=0 sessionPasswd=<null> context=0x9242ee8 flags=0 2012-05-11 08:51:53,627:4030(0xb6b90b40):ZOO_INFO@check_events@1585: initiated connection to server [192.168.122.1:41779] 2012-05-11 08:51:53,649:4030(0xb6b90b40):ZOO_INFO@check_events@1632: session establishment complete on server [192.168.122.1:41779], sessionId=0x1373ae057d90007, negotiated timeout=10000 2012-05-11 08:51:53,651 DEBUG Environment is initialized. machines: 0: agent-state: running dns-name: localhost instance-id: local instance-state: running services: mysql: charm: cs:precise/mysql-1 relations: db: - wordpress units: mysql/0: agent-state: pending machine: 0 public-address: null wordpress: charm: cs:precise/wordpress-0 relations: db: - mysql units: wordpress/0: agent-state: pending machine: 0 public-address: null
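
    While waiting on the LXC containers, it can help to watch the unit states programmatically rather than re-running juju status by hand. Below is a rough Python sketch, not a fix for the problem: it assumes PyYAML is installed and that `juju status` emits the YAML layout shown above, and it simply polls until no unit reports agent-state: pending.

        # Rough helper: poll `juju status` until no unit is left in 'pending'.
        import subprocess
        import time
        import yaml

        def pending_units():
            out = subprocess.check_output(["juju", "status"]).decode("utf-8")
            status = yaml.safe_load(out)
            pending = []
            for svc_name, svc in (status.get("services") or {}).items():
                for unit_name, unit in (svc.get("units") or {}).items():
                    if unit.get("agent-state") == "pending":
                        pending.append(unit_name)
            return pending

        while True:
            units = pending_units()
            if not units:
                print("all units started")
                break
            print("still pending:", ", ".join(units))
            time.sleep(30)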

    Read the article

  • Dynamic Class Inheritance For PHP

    - by VirtuosiMedia
    I have a situation where I think I might need dynamic class inheritance in PHP 5.3, but the idea doesn't sit well and I'm looking for a different design pattern to solve my problem if it's possible.
    Use Case: I have a set of DB abstraction layer classes that dynamically compile SQL queries, with one DAL class for each DB type (MySQL, MsSQL, Oracle, etc.). Each table in the database has its own class that extends the appropriate DAL class. The idea is that you interact with the table classes, but never directly use the DAL class. If you want to support a different DB type for your app, you don't need to rewrite any queries or even any code; you simply change a setting that swaps one DAL class out for another...and that's it. To give you a better idea of how this is used, you can take a look at the DAL class, the table classes, and how they are used on this StackExchange Code Review page. To really understand what I'm trying to do, please take a look at my implementation first before suggesting a solution.
    Issues: The strategy that I had used previously was to have all of the DAL classes share the same class name. This eliminated autoloading, so I had to manually load the appropriate DAL class in a switch statement. However, this approach presents some problems for testing and documentation purposes, so I'd like to find a more elegant way to load the correct DAL class.
    Update to clarify the issue: The problem basically boils down to inconsistencies in the class name (pre-PHP 5.3) or class namespace (PHP 5.3) and its location in the directory structure. At this point, all of my DAL classes have the same name, DBObject, but reside in different folders, MySQL, Oracle, etc. My table classes all extend DBObject, but which DBObject they extend varies depending on which one has been loaded. Basically, I'm trying to have my cake and eat it too. The table classes act as a stable API and extend a dynamic backend, the DAL (DBObject) classes. It works great, but I outsmarted myself: because of the inconsistencies with the class names and their locations, I can't autoload the DBObject, which makes running unit tests and generating API docs impossible for the DBObject classes because the tests and docs rely on auto-loading. Just loading the appropriate DBObject into memory using a factory method won't work because there will be times when I need to load multiple DBObjects for testing. Because the classes currently share a name, this causes a "class is already defined" error. I can make exceptions for the DBObjects in my test code, obviously, but I'm looking for something a little less hacky, as there may be future instances where something similar would need to be done.
    Solutions? Worst case scenario, I can continue my current strategy, but I don't like it very much, especially as I'll soon be converting my code to PHP 5.3. I suspect that I can use some sort of dynamic inheritance via either namespaces (preferred) or a dynamic class extension, but I haven't been able to find good examples of this implemented in the wild. In your answers, please suggest either an alternate pattern that would work for this use case or an example of dynamic inheritance done right. Please assume PHP 5.3 with namespaced code. Any code examples are greatly encouraged. The preferred constraints for the solution are:
    - The DAL class can be autoloaded.
    - The DAL classes don't share the exact same namespace, but they do share the same class name. As an example, I would prefer to use classes named DbObject that use namespaces like Vm\Db\MySql and Vm\Db\Oracle.
    - Table classes don't have to be rewritten with a change in DB type.
    - The appropriate DB type is determined via a single setting only. That setting is the only thing that should need to change to interchange DB types. Ideally, the setting check should occur only once per page load, but I'm flexible on that.
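    One possible approach (not from the original question, just a sketch under the stated constraints) is to keep one autoloadable DbObject class per namespace and resolve the choice once at bootstrap from the single DB-type setting, aliasing it to a fixed parent name with PHP 5.3's class_alias(). The Vm\Db\* names below come from the question; the $dbType setting and the UserTable class are illustrative assumptions.
    <?php
    // bootstrap.php (hypothetical) - runs once per page load
    $dbType = 'MySql'; // the single configuration setting: 'MySql', 'Oracle', ...
    // Assumes autoloadable classes such as Vm\Db\MySql\DbObject and Vm\Db\Oracle\DbObject.
    // class_alias() maps the chosen DAL class onto one stable name (autoloading it on demand),
    // so every table class can extend the same parent regardless of DB type.
    class_alias('Vm\\Db\\' . $dbType . '\\DbObject', 'Vm\\Db\\DbObject');
    // A table class then stays stable across DB types (hypothetical example):
    // class UserTable extends \Vm\Db\DbObject { /* table-specific query methods */ }
    Because each concrete DbObject keeps its own namespace, unit tests and documentation generators can still autoload Vm\Db\MySql\DbObject and Vm\Db\Oracle\DbObject side by side; only the table classes go through the alias.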

    Read the article

  • Clustering for Mere Mortals (Pt2)

    - by Geoff N. Hiten
    Planning. I could stop there and let that be the entirety of post #2 in this series.  Planning is the single most important element in building a cluster, and the Laptop Demo Cluster is no exception.  One of the more awkward parts of actually creating a cluster is coordinating information between Windows Clustering and SQL Clustering.  The dialog boxes show up hours apart, but still have to have matching and consistent information.  Excel seems to be a good tool for tracking these settings.  My workbook has four pages: Systems, Storage, Network, and Service Accounts.  The Systems page looks like this:
    Name | Role | Software | Location
    East | Physical Cluster Node 1 | Windows Server 2008 R2 Enterprise | Laptop VM
    West | Physical Cluster Node 2 | Windows Server 2008 R2 Enterprise | Laptop VM
    North | Physical Cluster Node 3 (Future Reserved) | Windows Server 2008 R2 Enterprise | Laptop VM
    MicroCluster | Cluster Management Interface | N/A | Laptop VM
    SQL01 | High-Performance High-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM
    SQL02 | High-Performance Standard-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM
    SQL03 | Standard-Performance High-Security Instance | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM
    Note that everything that has a computer name is listed here, whether physical or virtual.  Storage looks like this:
    Storage Name | Instance | Purpose | Volume | Path | Size (GB) | LUN ID | Speed
    Quorum | MicroCluster | Cluster Quorum | Quorum | Q: | 2 | |
    SQL01Anchor | SQL01 | Instance Anchor | SQL01Anchor | L: | 2 | |
    SQL02Anchor | SQL02 | Instance Anchor | SQL02Anchor | M: | 2 | |
    SQL01Data1 | SQL01 | SQL Data | SQL01Data1 | L:\MountPoints\SQL01Data1 | 2 | |
    SQL02Data1 | SQL02 | SQL Data | SQL02Data1 | M:\MountPoints\SQL02Data1 | | |
    Starting at the left is the name used in the storage array.  It is important to rename resources at each level, whether it is Storage, LUN, Volume, or disk folder.  Otherwise, troubleshooting gets complex and difficult.  You want to be able to glance at a resource at any level and see where it comes from and what it is connected to.  Networking is the same way:
    System | Network | VLAN | IP | Subnet Mask | Gateway | DNS1 | DNS2
    East | Public | Cluster1 | 10.97.230.x(DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
    East | Heartbeat | Cluster2 | | 255.255.255.0 | | |
    West | Public | Cluster1 | 10.97.230.x(DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
    West | Heartbeat | Cluster2 | | 255.255.255.0 | | |
    North | Public | Cluster1 | 10.97.230.x(DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1
    North | Heartbeat | Cluster2 | | 255.255.255.0 | | |
    SQL01 | Public | Cluster1 | 10.97.230.x(DHCP) | 255.255.255.0 | | |
    SQL02 | Public | Cluster1 | 10.97.230.x(DHCP) | 255.255.255.0 | | |
    One hallmark of a poorly planned and implemented cluster is a bunch of "Local Network Connection #n" entries in the network settings page.  That lets me know that somebody didn't care about the long-term supportability of the cluster.  This can be critically important with Hyper-V Clusters and their high NIC counts.  Final page:
    Instance | Service Name | Account | Password | Domain | OU
    SQL01 | SQL Server | SVCSQL01 | Baseline22 | MicroAD | Service Accounts
    SQL01 | SQL Agent | SVCSQL01 | Baseline22 | MicroAD | Service Accounts
    SQL02 | SQL Server | SVC_SQL02 | Baseline22 | MicroAD | Service Accounts
    SQL02 | SQL Agent | SVC_SQL02 | Baseline22 | MicroAD | Service Accounts
    SQL03 (Future) | SQL Server | SVC_SQL03 | Baseline22 | MicroAD | Service Accounts
    SQL03 (Future) | SQL Agent | SVC_SQL03 | Baseline22 | MicroAD | Service Accounts
    | Installation Account | administrator | | |
    Yes.  I write down the account information.  I secure the file via NTFS, but I don't want to fumble around looking for passwords when it comes time to rebuild a node.  Always fill out the workbook COMPLETELY before installing anything.  The whole point is to have everything you need at your fingertips before you begin.  The install experience is so much better and more productive with this information in place.

    Read the article

  • Deploying a SharePoint 2007 theme using Features

    - by Kelly Jones
    I recently had a requirement to update the branding on an existing Windows SharePoint Services (WSS version 3.0) site.  I needed to update the theme, along with the master page.  An additional requirement is that my client likes to have all changes bundled up in SharePoint solutions.  This makes it much easier to move code from dev to test to prod and, more importantly, makes it easier to undo code migrations if any issues arise (I agree with this approach). Updating the theme was easy enough.  I created a new theme, along with two new features.  The first feature, scoped at the farm level, deploys the theme, adding it to the spthemes.xml file (in the 12 hive –> \Template\layouts\1033 folder).  Here's the method that I call from the feature activated event: private static void AddThemeToSpThemes(string id, string name, string description, string thumbnail, string preview, SPFeatureReceiverProperties properties) { XmlDocument spThemes = new XmlDocument(); //use GetGenericSetupPath to find the 12 hive folder string spThemesPath = SPUtility.GetGenericSetupPath(@"TEMPLATE\LAYOUTS\1033\spThemes.xml"); //load the spthemes file into our xmldocument, since it is just xml spThemes.Load(spThemesPath); XmlNode root = spThemes.DocumentElement; //search the themes file to see if our theme is already added bool found = false; foreach (XmlNode node in root.ChildNodes) { foreach (XmlNode prop in node.ChildNodes) { if (prop.Name.Equals("TemplateID")) { if (prop.InnerText.Equals(id)) { found = true; break; } } } if (found) { break; } } if (!found) //theme not found, so add it { //This is what we need to add: // <Templates> // <TemplateID>ThemeName</TemplateID> // <DisplayName>Theme Display Name</DisplayName> // <Description>My theme description</Description> // <Thumbnail>images/mythemethumb.gif</Thumbnail> // <Preview>images/mythemepreview.gif</Preview> // </Templates> StringBuilder sb = new StringBuilder(); sb.Append("<Templates><TemplateID>"); sb.Append(id); sb.Append("</TemplateID><DisplayName>"); sb.Append(name); sb.Append("</DisplayName><Description>"); sb.Append(description); sb.Append("</Description><Thumbnail>"); sb.Append(thumbnail); sb.Append("</Thumbnail><Preview>"); sb.Append(preview); sb.Append("</Preview></Templates>"); root.CreateNavigator().AppendChild(sb.ToString()); spThemes.Save(spThemesPath); } } Just as important is the code that removes the theme when the feature is deactivated: private static void RemoveThemeFromSpThemes(string id) { XmlDocument spThemes = new XmlDocument(); string spThemesPath = HostingEnvironment.MapPath("/_layouts/") + @"1033\spThemes.xml"; spThemes.Load(spThemesPath); XmlNode root = spThemes.DocumentElement; foreach (XmlNode node in root.ChildNodes) { foreach (XmlNode prop in node.ChildNodes) { if (prop.Name.Equals("TemplateID")) { if (prop.InnerText.Equals(id)) { root.RemoveChild(node); spThemes.Save(spThemesPath); break; } } } } } So, that takes care of deploying the theme.  In order to apply the theme to the web, my activate feature method looks like this: public override void FeatureActivated(SPFeatureReceiverProperties properties) { using (SPWeb curweb = (SPWeb)properties.Feature.Parent) { curweb.ApplyTheme("myThemeName"); curweb.Update(); } } Deactivating is just as simple: public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { using (SPWeb curweb = (SPWeb)properties.Feature.Parent) { curweb.ApplyTheme("none"); curweb.Update(); } } Ok, that's the code necessary to deploy, apply, un-apply, and retract the theme.
    Also, the solution (WSP file) contains the actual theme files.  So, next is the master page, which I'll cover in my next blog post.

    Read the article

  • SQL SERVER – Another lesser known feature of SQL Server Management Studio 2012 – Guest Post by Balmukund Lakhani

    - by Pinal Dave
    This is a fantastic blog post from my dear friend Balmukund ( blog | twitter | facebook ). He presented a fantastic session at our last UG meeting, and there were lots of requests from attendees that he blog about it. Well, here is the blog post about that very popular UG session. Let us read the entire blog post in the voice of Balmukund himself. In one of my previous guest blogs on SQL Authority, I wrote about the "Additional Connection Parameter" tab of the login screen in SQL Server Management Studio (a.k.a. SSMS). Along similar lines, this blog is going to show a little-less-known new feature of the main login screen ("Connect to Server") of SSMS 2012. You might have seen the screen below countless times, and you might wonder what there is to blog about in such a simple screen. Well, continue reading and you will get the answer. Many times a DBA has to log in to a production server from a non-regular machine, maybe a developer's workstation. You log in to SQL, do your work and close Management Studio. Did you know that the server name is saved by Management Studio? Of course, it is a very useful feature, because you may not want to type the server name/IP address every time. Whatever servers you have connected to are stored by Management Studio. But sometimes it's annoying! What would you do if you wanted SQL Server Management Studio to forget "all" the servers listed in the Server name drop-down? To do that, you need to know how and where the list is stored. You can use one of my favorite tools from Sysinternals, Process Monitor (also known as ProcMon), and easily figure out that it is stored in a file under your Windows user profile. Below is the file for SQL 2008 R2 Management Studio: %appdata%\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin For SQL Server 2012, here is what we can see in ProcMon, so the path is %appdata%\Microsoft\Microsoft SQL Server\110\Tools\Shell\SqlStudio.bin So far, you might wonder, where is the new feature? I have been asked by many users how to delete entries from the SSMS "Connect to Server" server name list. Well, unofficially, you can directly delete the file we found via ProcMon. Note that deleting the file to get rid of the server list is not officially supported by Microsoft. A better way to achieve this is provided in SSMS 2012. To delete a server from the list, highlight the name you want to delete (via keyboard or mouse) and then press the Delete key. Multi-select is not possible, so it has to be done one entry at a time, but we can delete as many entries as we want. I have deleted a few from the first screenshot taken, and here is the modified version. This is not available in SQL 2008 R2 and its previous versions. This feature came from feedback given to the SQL Server product group. Hope you have learned something new today! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Running Multiple WebLogic and OSB Domains

    - by jeff.x.davies
    I have any number of OSB domains created on my machine at any point in time. For example, I have different domains for different versions of Oracle Service Bus and Oracle SOA Suite. I also have different domains for different purposes: I have a demo domain and another domain for the projects in my blog. Starting with OSB 11g and the Apache Derby server, there is a small "gotcha" if you want to create multiple domains on a development machine. When you create a new domain for OSB 11g it will use the same database info for every domain, and this will cause an error when starting the admin server of the second domain (the first domain doesn't have to be running for this error to occur). Here is an example of the error message in the server console: ####<Mar 8, 2011 2:58:48 PM PST> <Critical> <JTA> <jeff-laptop> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1299625128464> <BEA-110482> <A logging last resource failed during initialization. The server cannot boot unless all configured logging last resources (LLRs) initialize. Failing reason: weblogic.transaction.loggingresource.LoggingResourceException: java.sql.SQLException: JDBC LLR, table verify failed for table 'WL_LLR_ADMINSERVER', row 'JDBC LLR Domain//Server' record had unexpected value 'osb11gR1PS3//AdminServer' expected 'OSBCIM//AdminServer'*** ONLY the original domain and server that creates an LLR table may access it *** The solution is to create a database instance for each of your domains, and this is very simple to do. After you create a domain using the Configuration Wizard, locate the wlsbjmsrpDataSource-jdbc.xml file that is found under the DOMAIN_HOME/config/jdbc directory. Near the top of the file you will see the following entry: <url>jdbc:derby://localhost:1527/osbexamples;create=true;ServerName=localhost;databaseName=osbexamples</url> You need to modify this entry with a different and unique database name. The easiest way to do this is to substitute the name of your domain. For example: <url>jdbc:derby://localhost:1527/mydomain;create=true;ServerName=localhost;databaseName=mydomain</url> will create a database named mydomain. Now, when you restart the admin server for the domain, it will create the new database for you. Do this for each domain you create on your development machine and you'll have no troubles. The process is much simpler if you are creating a domain using the Configuration Wizard. Simply name the database when you get to the Configure JDBC Component Schema step of the Configuration Wizard: select the OSB JMS Reporting Provider and set the name in the DBMS/Service field to whatever name you like, as shown in Figure 1 below. Figure 1 – Configuring the JDBC Component Schema That is all there is to it. Now you can create as many domains on your laptop or development machine as you like and not have to worry about them conflicting with each other.

    Read the article

  • Two small issues with Windows Phone 7 ApplicationBar buttons (and workaround)

    - by Laurent Bugnion
    When you work with the ApplicationBar in Windows Phone 7, you notice very fast that it is not quite a component like the others. For example, the ApplicationBarIconButton element is not a dependency object, which causes issues because it is not possible to add attached properties to it. Here are two other issues I stumbled upon, and what workaround I used to make it work anyway. Finding a button by name returns null Since the ApplicationBar is not in the tree of the Silverlight page, finding an element by name fails. For example consider the following code: <phoneNavigation:PhoneApplicationPage.ApplicationBar> <shell:ApplicationBar> <shell:ApplicationBar.Buttons> <shell:ApplicationBarIconButton IconUri="/Resources/edit.png" Click="EditButtonClick" x:Name="EditButton"/> <shell:ApplicationBarIconButton IconUri="/Resources/cancel.png" Click="CancelButtonClick" x:Name="CancelButton"/> </shell:ApplicationBar.Buttons> </shell:ApplicationBar> </phoneNavigation:PhoneApplicationPage.ApplicationBar> with private void EditButtonClick( object sender, EventArgs e) { CancelButton.IsEnabled = false; // Fails, CancelButton is always null } The CancelButton, even though it is named through an x:Name attribute, and even though it appears in Intellisense in the code behind, is null when it is needed. To solve the issue, I use the following code: public enum IconButton { Edit = 0, Cancel = 1 } public ApplicationBarIconButton GetButton( IconButton which) { return ApplicationBar.Buttons[(int) which] as ApplicationBarIconButton; } private void EditButtonClick( object sender, EventArgs e) { GetButton(IconButton.Cancel).IsEnabled = false; } Updating a Binding when the icon button is clicked In Silverlight, a Binding on a TextBox’s Text property can only be updated in two circumstances: When the TextBox loses the focus. Explicitly by placing a call in code. In WPF, there is a third option, updating the Binding every time that the Text property changes (i.e. every time that the user types a character). Unfortunately this option is not available in Silverlight). To select option 1, 2 (and in WPF, 3), you use the Mode property of the Binding class. The issue here is that pressing a button on the ApplicationBar does not remove the focus from the TextBox where the user is currently typing. If the button is a Save button, this is super annoying: The Binding does not get updated on the data object, the object is saved anyway with the old state, and noone understands what just happened. In order to solve this, you can make sure that the Binding is updated explicitly when the button is pressed, with the following code: private void SaveButtonClick(object sender, EventArgs e) { // Force update binding first var binding = MessageTextBox.GetBindingExpression( TextBox.TextProperty); binding.UpdateSource(); // Property was updated for sure, now we can save var vm = DataContext as MainViewModel; vm.Save(); } Obviously this is less maintainable than the usual way to do things in Silverlight. So be careful when using the ApplicationBar and remember that it is not a Silverlight element like the others!! Happy coding! Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Data Connection Library in SharePoint Server 2010

    - by Wayne
    What is a Data Connection Library in SharePoint Server 2010? A Data Connection Library in Microsoft SharePoint Server 2010 is a library that can contain two kinds of data connections: an Office Data Connection (ODC) file or a Universal Data Connection (UDC) file. Microsoft InfoPath 2010 uses data connections that comply with the Universal Data Connection (UDC) file schema and typically have either a *.udcx or *.xml file name extension. Data sources described by these data connections are stored on the server and can be used in standard form templates and browser-enabled form templates. How to create a SharePoint Server Data Connection Library? 1. Browse to a SharePoint Server 2010 site on which you have at least Design permissions. If you are on the root site, create a new site before you continue with the next step. 2. On the Site Actions menu, click More Options. 3. On the Create page, click Library under Filter By, and then click Data Connection Library. 4. On the right side of the Create page, type a name for the library, and then click the Create button. 5. Copy the URL of the new data connection library. How to create a new data connection file in InfoPath? 1. Open InfoPath Designer 2010, click Blank Form, and then click Design Form. 2. On the Data tab, click Data Connections, and then click Add. 3. In the Data Connection Wizard, click Create a new connection to, click Receive data, and then click Next. 4. Click the kind of data source that you are connecting to, such as Database, Web service, or SharePoint library or list, and then click Next. 5. Complete the remaining steps in the Data Connection Wizard to configure your data connection, and then click Finish to return to the Data Connections dialog box. 6. In the Data Connections dialog box, click Convert to Connection File. 7. In the Convert Data Connection dialog box, enter the URL of the data connection library that you previously copied (delete "Forms/AllItems.aspx" and anything following it from the URL), enter a name for the data connection file at the end of the URL, and then click OK. It will take a few moments to convert and save the data connection file to the library. 8. Confirm that the data connection was converted successfully by examining the Details section of the Data Connections dialog box while the name of the converted data connection is selected. 9. Browse to the SharePoint data connection library, click the drop-down next to the name of the data connection, click Approve/Reject, click Approved, and then click OK. Take advantage of SharePoint Data Connection Library and other useful features of SharePoint family of products that include, SharePoint Foundation 2010, MOSS 2007 and free SharePoint templates or web parts.

    Read the article

  • How to center and scale Silverlight applications using ViewBox control

    - by Jacek Ciereszko
    There are many ways to make your application scalable in Web Browser window and align it in the center. Usually we use two Grid controls to align and panel control (like Canvas) to scale our apps. Not the best solution <UserControl … >     <Grid x:Name="LayoutRoot" Background="White">         <Grid HorizontalAlignment="Center" VerticalAlignment="Center">             <Canvas x:Name="scalePanel" VerticalAlignment="Top" HorizontalAlignment="Center">                 …             </Canvas>         </Grid>     </Grid> </UserControl>               The example above usually works but there are better ways. How? Use ViewBox. ViewBox control contains scale mechanisms with some stretching options. So ViewBox together with Grid control is all what we need to align and scale our applications. Good solution <UserControl … >     <Grid x:Name="LayoutRoot" Background="White">         <Viewbox>             ...         </Viewbox>     </Grid> </UserControl> How to find ViewBox control For those applications created in Silverlight 4, ViewBox is available in plug-in. For applications created in Silverlight 3 you can find it in Microsoft Silverlight Toolkit. Demo Let’s create a simple application that will contain: Button, TextBlock and red Rectangle. It will also have some Margin settings. This application won’t be in the center of window and it will not scale. <UserControl … >     <Grid x:Name="LayoutRoot">         <Grid Margin="100, 50, 100, 20">                 <StackPanel Orientation="Horizontal">                     <Button Width="100" Height="100" Content="test"/>                     <TextBlock Text="Button" Width="100" Height="100" />                     <Rectangle Width="100" Height="100" Fill="Red"/>                 </StackPanel>         </Grid> </Grid> </UserControl>   Run demo: RUN But If we use ViewBox control, we will got centered and always scaled application.    <Grid x:Name="LayoutRoot">         <Viewbox>             <Grid Margin="100, 50, 100, 20">                     <StackPanel Orientation="Horizontal">                         <Button Width="100" Height="100" Content="test"/>                         <TextBlock Text="bottom" Width="100" Height="100" />                         <Rectangle Width="100" Height="100" Fill="Red"/>                     </StackPanel>             </Grid>         </Viewbox>     </Grid> Link to application: RUN (try to resize application’s window) Link to source code: SilverlightCenterApplication.zip References ViewBox for Silverlight 3 http://silverlight.codeplex.com/    Polish version: http://jacekciereszko.pl/2010/05/jak-wysrodkowac-i-skalowac-aplikacje.html Jacek Ciereszko

    Read the article

  • Looking under the hood of SSRS

    - by Jim Giercyk
    SSRS is a powerful tool, but there is very little available to measure its performance or view the SSRS execution log or catalog in detail.  Here are a few simple queries that will give you insight into the system that you never had before.
    ACTIVE REPORTS:  Have you ever seen your SQL Server performance take a nose dive due to a long-running report?  If the SPID is executing under a generic Report ID, or it is a scheduled job, you may have no way to tell which report is killing your server.  Running this query will show you which reports are executing at a given time, and WHO is executing them.
    USE ReportServerNative SELECT runningjobs.computername, runningjobs.requestname, runningjobs.startdate, users.username, Datediff(s,runningjobs.startdate, Getdate()) / 60 AS 'Active Minutes' FROM runningjobs INNER JOIN users ON runningjobs.userid = users.userid ORDER BY runningjobs.startdate
    SSRS CATALOG:  We have all asked "What was the last thing that changed", or better yet, "Who in the world did that!".  Here is a query that will show all of the reports in your SSRS catalog, when they were created and changed, and by whom.
    USE ReportServerNative SELECT DISTINCT catalog.PATH, catalog.name, users.username AS [Created By], catalog.creationdate, users_1.username AS [Modified By], catalog.modifieddate FROM catalog INNER JOIN users ON catalog.createdbyid = users.userid INNER JOIN users AS users_1 ON catalog.modifiedbyid = users_1.userid INNER JOIN executionlogstorage ON catalog.itemid = executionlogstorage.reportid WHERE ( catalog.name <> '' )
    SSRS EXECUTION LOG:  Sometimes we need to know what was happening on the SSRS report server at a given time in the past.  This query will help you do just that.  You will need to set the timestart and timeend in the WHERE clause to suit your needs.
    USE ReportServerNative SELECT catalog.name AS report, e.username AS [User], e.timestart, e.timeend, Datediff(mi, e.timestart, e.timeend) AS 'Time In Minutes', catalog.modifieddate AS [Report Last Modified], users.username FROM catalog (nolock) INNER JOIN executionlogstorage e (nolock) ON catalog.itemid = e.reportid INNER JOIN users (nolock) ON catalog.modifiedbyid = users.userid WHERE e.timestart >= Dateadd(s, -1, '03/31/2012') AND e.timeend <= Dateadd(DAY, 1, '04/02/2012')
    LONG RUNNING REPORTS:  This query will show the longest running reports over a given time period.  Note that the ">5" in the WHERE clause sets the report threshold at 5 minutes, so anything that ran less than 5 minutes will not appear in the result set.  Adjust the threshold and start/end times to your liking.  With this information in hand, you can better optimize your system by tweaking the longest running reports first.
    USE ReportServerNative SELECT e.instancename, catalog.PATH, catalog.name, e.username, e.timestart, e.timeend, Datediff(mi, e.timestart, e.timeend) AS [Minutes], e.timedataretrieval, e.timeprocessing, e.timerendering, e.[RowCount], users_1.username AS createdby, CONVERT(VARCHAR(10), catalog.creationdate, 101) AS 'Creation Date', users.username AS modifiedby, CONVERT(VARCHAR(10), catalog.modifieddate, 101) AS 'Modified Date' FROM executionlogstorage e INNER JOIN catalog ON e.reportid = catalog.itemid INNER JOIN users ON catalog.modifiedbyid = users.userid INNER JOIN users AS users_1 ON catalog.createdbyid = users_1.userid WHERE ( e.timestart > '03/31/2012' ) AND ( e.timestart <= '04/02/2012' ) AND Datediff(mi, e.timestart, e.timeend) > 5 AND catalog.name <> '' ORDER BY [Minutes] DESC
    I have used these queries to build SSRS reports that I can refer to quickly, and export to Excel if I need to report or quantify my findings.  I encourage you to look at the data in the ReportServerNative database on your report server to understand the queries and create some of your own.  For instance, you may want a query to determine which reports are using which shared data sources.  Work smarter, not harder!
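    As a starting point for that shared data source question, something along these lines should be close (this sketch is not from the original post; the DataSource table, its Link column, and the Type = 2 filter are assumptions based on the standard ReportServer catalog schema, so verify the names on your own server):
    USE ReportServerNative
    SELECT report.PATH  AS ReportPath,
           report.name  AS ReportName,
           ds.name      AS DataSourceName,
           shared.PATH  AS SharedDataSourcePath
    FROM   catalog report
           INNER JOIN datasource ds ON report.itemid = ds.itemid
           LEFT JOIN catalog shared ON ds.link = shared.itemid  -- Link points at the shared data source item
    WHERE  report.type = 2  -- 2 = report in the Catalog table
    ORDER  BY report.PATH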

    Read the article

  • Generate a merge statement from table structure

    - by Nigel Rivett
    This code generates a merge statement joining on the natural key and checking all other columns to see if they have changed. The full version deals with type 2 processing and an audit trail but this version is useful. Just the insert or update part is handy too. Change the table at the top (spt_values in master in this version) and the join columns for the merge in @nk. The output generated is at the top and the code to run to generate it below.
    Output
    merge spt_values a using spt_values b on a.name = b.name and a.number = b.number and a.type = b.type when matched and (1=0 or (a.low <> b.low) or (a.low is null and b.low is not null) or (a.low is not null and b.low is null) or (a.high <> b.high) or (a.high is null and b.high is not null) or (a.high is not null and b.high is null) or (a.status <> b.status) or (a.status is null and b.status is not null) or (a.status is not null and b.status is null) ) then update set low = b.low , high = b.high , status = b.status when not matched by target then insert ( name , number , type , low , high , status ) values ( b.name , b.number , b.type , b.low , b.high , b.status );
    Generator
    set nocount on declare @t varchar(128) = 'spt_values' declare @i int = 0 -- this is the natural key on the table used for the merge statement join declare @nk table (ColName varchar(128)) insert @nk select 'Number' insert @nk select 'Name' insert @nk select 'Type' declare @cols table (seq int, nkseq int, type int, colname varchar(128)) ;with cte as ( select ordinal_position, type = case when columnproperty(object_id(@t), COLUMN_NAME,'IsIdentity') = 1 then 3 when nk.ColName is not null then 1 else 0 end, COLUMN_NAME from information_schema.columns c left join @nk nk on c.column_name = nk.ColName where table_name = @t ) insert @cols (seq, nkseq, type, colname) select ordinal_position, row_number() over (partition by type order by ordinal_position) , type, COLUMN_NAME from cte declare @result table (i int, j int, k int, data varchar(500)) select @i = @i + 1 insert @result (i, data) select @i, 'merge ' + @t + ' a' select @i = @i + 1 insert @result (i, data) select @i, ' using cte b' select @i = @i + 1 insert @result (i, j, data) select @i, nkseq, ' ' + case when nkseq = 1 then 'on' else 'and' end + ' a.' + ColName + ' = b.' + ColName from @cols where type = 1 select @i = @i + 1 insert @result (i, data) select @i, ' when matched and (1=0' select @i = @i + 1 insert @result (i, j, k, data) select @i, seq, 1, ' or (a.' + ColName + ' <> b.' + ColName + ')' + ' or (a.' + ColName + ' is null and b.' + ColName + ' is not null)' + ' or (a.' + ColName + ' is not null and b.' + ColName + ' is null)' from @cols where type <> 1 select @i = @i + 1 insert @result (i, data) select @i, ' )' select @i = @i + 1 insert @result (i, data) select @i, ' then update set' select @i = @i + 1 insert @result (i, j, data) select @i, nkseq, ' ' + case when nkseq = 1 then ' ' else ', ' end + colname + ' = b.'
    + colname from @cols where type = 0 select @i = @i + 1 insert @result (i, data) select @i, ' when not matched by target then insert' select @i = @i + 1 insert @result (i, data) select @i, ' (' select @i = @i + 1 insert @result (i, j, data) select @i, seq, ' ' + case when seq = 1 then ' ' else ', ' end + colname from @cols where type <> 3 select @i = @i + 1 insert @result (i, data) select @i, ' )' select @i = @i + 1 insert @result (i, data) select @i, ' values' select @i = @i + 1 insert @result (i, data) select @i, ' (' select @i = @i + 1 insert @result (i, j, data) select @i, seq, ' ' + case when seq = 1 then ' ' else ', ' end + 'b.' + colname from @cols where type <> 3 select @i = @i + 1 insert @result (i, data) select @i, ' );' select data from @result order by i,j,k,data

    Read the article

  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
    I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics.  We’ve been running sp_updatestats but apparently it wasn’t sampling enough of the table to get us what we needed.  I have a pretty limited window at night where I can hammer the disks while this runs.  The script below just calls UPDATE STATITICS on all tables that “need” updating.  It defines need as any table whose statistics are older than the number of days you specify (30 by default).  It also has a throttle so it breaks out of the loop after a set amount of time (60 minutes).  That means it won’t start processing a new table after this time but it might take longer than this to finish what it’s doing.  It always processes the oldest statistics first so it will eventually get to all of them.  It defaults to sample 25% of the table.  I’m not sure that’s a good default but it works for now.  I’ve tested this in SQL Server 2005 and SQL Server 2008.  I liked the way Michelle parameterized her re-index script and I took the same approach. CREATE PROCEDURE dbo.UpdateStatistics ( @timeLimit smallint = 60 ,@debug bit = 0 ,@executeSQL bit = 1 ,@samplePercent tinyint = 25 ,@printSQL bit = 1 ,@minDays tinyint = 30 )AS/******************************************************************* Copyright Bill Graziano 2010*******************************************************************/SET NOCOUNT ON;PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'IF OBJECT_ID('tempdb..#status') IS NOT NULL DROP TABLE #status;CREATE TABLE #status( databaseID INT , databaseName NVARCHAR(128) , objectID INT , page_count INT , schemaName NVARCHAR(128) Null , objectName NVARCHAR(128) Null , lastUpdateDate DATETIME , scanDate DATETIME CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED(databaseID, objectID));DECLARE @SQL NVARCHAR(MAX);DECLARE @dbName nvarchar(128);DECLARE @databaseID INT;DECLARE @objectID INT;DECLARE @schemaName NVARCHAR(128);DECLARE @objectName NVARCHAR(128);DECLARE @lastUpdateDate DATETIME;DECLARE @startTime DATETIME;SELECT @startTime = GETDATE();DECLARE cDB CURSORREAD_ONLYFOR select [name] from master.sys.databases where database_id > 4OPEN cDBFETCH NEXT FROM cDB INTO @dbNameWHILE (@@fetch_status <> -1)BEGIN IF (@@fetch_status <> -2) BEGIN SELECT @SQL = ' use ' + QUOTENAME(@dbName) + ' select DB_ID() as databaseID , DB_NAME() as databaseName ,t.object_id ,sum(used_page_count) as page_count ,s.[name] as schemaName ,t.[name] AS objectName , COALESCE(d.stats_date, ''1900-01-01'') , GETDATE() as scanDate from sys.dm_db_partition_stats ps join sys.tables t on t.object_id = ps.object_id join sys.schemas s on s.schema_id = t.schema_id join ( SELECT object_id, MIN(stats_date) as stats_date FROM ( select object_id, stats_date(object_id, stats_id) as stats_date from sys.stats) as d GROUP BY object_id ) as d ON d.object_id = t.object_id where ps.row_count > 0 group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'') ' SET ANSI_WARNINGS OFF; Insert #status EXEC ( @SQL); SET ANSI_WARNINGS ON; END FETCH NEXT FROM cDB INTO @dbNameENDCLOSE cDBDEALLOCATE cDBDECLARE cStats CURSORREAD_ONLYFOR SELECT databaseID , databaseName , objectID , schemaName , objectName , lastUpdateDate FROM #status WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays ORDER BY lastUpdateDate ASC, page_count desc, [objectName] ASC OPEN cStatsFETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDateWHILE (@@fetch_status <> -1)BEGIN IF 
(@@fetch_status <> -2) BEGIN IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit BEGIN PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***'; GOTO __DONE; END SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dBName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@ObjectName) + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;'; IF @printSQL = 1 PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')' IF @executeSQL = 1 BEGIN EXEC (@SQL); END END FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDateEND__DONE:CLOSE cStatsDEALLOCATE cStatsPRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'GO
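    Once the procedure is in place, a nightly Agent job step can simply call it; the parameter values below just restate the defaults defined above:
    -- Update statistics older than 30 days, sampling 25 percent of each table,
    -- and stop picking up new tables after 60 minutes.
    EXEC dbo.UpdateStatistics
         @timeLimit = 60,
         @minDays = 30,
         @samplePercent = 25,
         @printSQL = 1,
         @executeSQL = 1;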

    Read the article

  • BizTalk: Suspend shape and Convoy

    - by Leonid Ganeline
    Part 1: BizTalk: Instance Subscription and Convoys: Details This is Part 2. I am discussing the Suspend shape together with convoys, and I am going to show that using them together is undesirable. In the previous article we investigated Instance Subscriptions and how they can create situations with dangerous zones in processing.  Let's start with the Suspend shape. [See the BizTalk Help] "You can use the Suspend shape to make an orchestration instance stop running until an administrator explicitly intervenes, perhaps to reflect an error condition that requires attention beyond the scope of the orchestration. All of the state information for the orchestration instance is saved, and will be reinstated when the administrator resumes the orchestration instance. When an orchestration instance is suspended, an error is raised. You can specify a message string to accompany the error to help the administrator diagnose the situation."   On the Suspend shape the orchestration is stopped in the Suspended (Resumable) state. Next we have two choices: one is to resume and the other is to terminate the orchestration. Is the orchestration stopped or unenlisted? You won't find a note about it anywhere. The fact is that the orchestration is stopped and still enlisted. This is very important. So again, the suspended orchestration can be resumed or terminated, and the moment when the operator or an operations script resumes or terminates it can be far away. That is also important. Let's go back to the case from the previous article. Make sure you notice the convoy and the dangerous zone after the last Receive shape.  Now we have a Suspend shape inside the orchestration. The first orchestration instance is suspended. The next messages start a new orchestration instance and are consumed by that instance, right? Wrong! The orchestration is stopped on the Suspend shape but still enlisted. Now the dangerous zone, the "zombie zone", is expanded to the interval between the last receive and the moment of termination or the end of the orchestration. A new orchestration instance for this convoy will not start until that moment. How fast does an operator find this suspended orchestration? Maybe hours or days. All this time the orchestration is still enlisted and gathering the convoy messages. We can resume the orchestration, but we cannot resume these messages together with the orchestration. It seems the name Suspended for the orchestration is misleading. The orchestration can be in the Started (and Enlisted) / Stopped (and Enlisted) / Unenlisted state. The Suspend shape switches the orchestration to exactly the Stopped state. The name Stop would describe the shape clearly and unambiguously, and the Stopped state would describe the orchestration. Imagine we could change BizTalk. The Orchestration Editor could detect these situations and return a compile error; in a similar case the Orchestration Editor already forces us to use only ordered delivery ports with convoys. The run-time core could force an orchestration with a convoy to be suspended in the Unresumable state, which means the run-time unenlists the orchestration instance subscriptions. The Suspend shape name should be changed: the "Suspend" name is misleading, while the "Stop" name is clear and unambiguous. The same goes for the orchestration state: it should be "Stopped", not "Suspended (Resumable)".   Conclusion:  It is not recommended to use a Suspend shape together with convoy orchestrations.

    Read the article

  • dynamic? I'll never use that ... or then again, maybe it could ...

    - by adweigert
    So, I don't know about you, but I was highly skeptical of the dynamic keywork when it was announced. I thought to myself, oh great, just another move towards VB compliance. Well after seeing it being used in things like DynamicXml (which I use for this example) I then was working with a MVC controller and wanted to move some things like operation timeout of an action to a configuration file. Thinking big picture, it'd be really nice to have configuration for all my controllers like that. Ugh, I don't want to have to create all those ConfigurationElement objects... So, I started thinking self, use what you know and do something cool ... Well after a bit of zoning out, self came up with use a dynamic object duh! I was thinking of a config like this ...<controllers> <add type="MyApp.Web.Areas.ComputerManagement.Controllers.MyController, MyApp.Web"> <detail timeout="00:00:30" /> </add> </controllers> So, I ended up with a couple configuration classes like this ...blic abstract class DynamicConfigurationElement : ConfigurationElement { protected DynamicConfigurationElement() { this.DynamicObject = new DynamicConfiguration(); } public DynamicConfiguration DynamicObject { get; private set; } protected override bool OnDeserializeUnrecognizedAttribute(string name, string value) { this.DynamicObject.Add(name, value); return true; } protected override bool OnDeserializeUnrecognizedElement(string elementName, XmlReader reader) { this.DynamicObject.Add(elementName, new DynamicXml((XElement)XElement.ReadFrom(reader))); return true; } } public class ControllerConfigurationElement : DynamicConfigurationElement { [ConfigurationProperty("type", Options = ConfigurationPropertyOptions.IsRequired | ConfigurationPropertyOptions.IsKey)] public string TypeName { get { return (string)this["type"]; } } public Type Type { get { return Type.GetType(this.TypeName, true); } } } public class ControllerConfigurationElementCollection : ConfigurationElementCollection { protected override ConfigurationElement CreateNewElement() { return new ControllerConfigurationElement(); } protected override object GetElementKey(ConfigurationElement element) { return ((ControllerConfigurationElement)element).Type; } } And then had to create the meat of the DynamicConfiguration class which looks like this ...public class DynamicConfiguration : DynamicObject { private Dictionary<string, object> properties = new Dictionary<string, object>(StringComparer.CurrentCultureIgnoreCase); internal void Add<T>(string name, T value) { this.properties.Add(name, value); } public override bool TryGetMember(GetMemberBinder binder, out object result) { var propertyName = binder.Name; result = null; if (this.properties.ContainsKey(propertyName)) { result = this.properties[propertyName]; } return true; } } So all being said, I made a base controller class like a good little MVC-itizen ...public abstract class BaseController : Controller { protected BaseController() : base() { var configuration = ManagementConfigurationSection.GetInstance(); var controllerConfiguration = configuration.Controllers.ForType(this.GetType()); if (controllerConfiguration != null) { this.Configuration = controllerConfiguration.DynamicObject; } } public dynamic Configuration { get; private set; } } And used it like this ...public class MyController : BaseController { static readonly string DefaultDetailTimeout = TimeSpan.MaxValue.ToString(); public MyController() { this.DetailTimeout = TimeSpan.Parse(this.Configuration.Detail.Timeout ?? 
DefaultDetailTimeout); } public TimeSpan DetailTimeout { get; private set; } } And there I have an actual use for the dynamic keyword ... never thoguht I'd see the day when I first heard of it as I don't do much COM work ... oh dont' forget this little helper extension methods to find the controller configuration by the controller type.public static ControllerConfigurationElement ForType<T>(this ControllerConfigurationElementCollection collection) { Contract.Requires(collection != null); return ForType(collection, typeof(T)); } public static ControllerConfigurationElement ForType(this ControllerConfigurationElementCollection collection, Type type) { Contract.Requires(collection != null); Contract.Requires(type != null); return collection.Cast<ControllerConfigurationElement>().Where(element => element.Type == type).SingleOrDefault(); } Sure, it isn't perfect and I'm sure I can tweak it over time, but I thought it was a pretty cool way to take advantage of the dynamic keyword functionality. Just remember, it only validates you did it right at runtime, which isn't that bad ... is it? And yes, I did make it case-insensitive so my code didn't have to look like my XML objects, tweak it to your liking if you dare to use this creation.

    Read the article

  • VirtualBox Clone Root HD / Ubuntu / Network issue

    - by john.graves(at)oracle.com
    When you clone a root Ubuntu disk in VirtualBox, one thing that gets messed up is the network card definition.  This is because Ubuntu (as it should) uses UDEV IDs for the network device.  When you boot your new disk, the network device ID has changed, so it creates a new eth1 device.  Unfortunately, this conflicts with the VirtualBox network setup.  What to do? Boot the box (no network) Edit the /etc/udev/rules.d/70-persistent-net.rules Delete the eth0 line and modify the eth1 line to be eth0 --------- Example OLD ----------- # This file was automatically generated by the /lib/udev/write_net_rules # program, run by the persistent-net-generator.rules rules file. # # You can modify it, as long as you keep each rule on a single # line, and change only the value of the NAME= key. # PCI device 0x8086:0x100e (e1000) <-------------------- Delete these two lines SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:d8:8d:15", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0" # PCI device 0x8086:0x100e (e1000) ---Modify the next line and change eth1 to be eth0 SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:89:84:98", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1" ---------------------------------------- --------- Example NEW ----------- # This file was automatically generated by the /lib/udev/write_net_rules # program, run by the persistent-net-generator.rules rules file. # # You can modify it, as long as you keep each rule on a single # line, and change only the value of the NAME= key. # PCI device 0x8086:0x100e (e1000) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:89:84:98", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0" ----------------------------------------

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation

    - by Stuart Brierley
    Having previously detailed the installation of the Community ODBC Adapter for BizTalk 2009, the next thing I will be looking at is the generation of schemas using this ODBC adapter. Within your BizTalk 2009 project, right click the project and select Add Generated Items.  In the resultant window choose Add Adapter Metadata and click Add to open the Add Adapter Wizard. Check that the BizTalk Server and Database names are correct, select the ODBC adapter and click next. You must now set the connection string. To start with choose set, then new DSN (data source name). You now need to define the Data Source you will be connecting to.  On the User DSN tab select Add add then driver you want to use. In this case I am going to use the MySQL ODBC Driver.  A User DSN will only be visible on the current machine with you as a user. * Although I initially set up a User DSN and this was fine for creating schemas with, I later realised that you actually need a system DSN as the BizTalk host service needs this to be able connect to the database on a receive or send port. You will then be asked to Set up the MySQL ODBC Data Source.  In my case this is a local database making use of named pipes, so I had to make sure that I ticked the "Force use of named pipes" check box and removed the "# The Pipe the MySQL Server will use socket=mysql" line from the mysql.ini; with this is place the connection would fail as there is no apparent way to specify the pipe name in the ODBC driver configuration. This will then update the User DSN tab with the new Data Source.  Make sure that you select it and press OK. Select it again in the Choose Data Source window and press OK.  On the ODBC transport window select next. You will now be presented with the Schema Information window, where you must supply the namespace, type and root element names for your schema. Next choose the type of statement that you will be using to create your schema - in this case I am using a stored procedure. *I later discovered that this option is fine for MySQL stored procedures without input parameters, but failed for MySQL stored procedures with input parameters.  (I will be posting on the way to handle input parameters soon) Next you will need to specify the name of the stored procedure.  In this case I have a simple stored procedure to return all the data held by my TestTable in MySQL. Select * from TestTable; The table itself has three columns: Name, Sex and Married. Selecting finish should now hopefully create your schemas based on the input and output from your stored procedure. In my case I have:   An empty schema for the request; after all I have no parameters for the stored procedure.  A response schema comprised of a Table Record with Name, Sex and Married children. Next I will be looking at the use of the ODBC adpater with: Receive ports Send ports
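    For reference, a minimal MySQL sketch of the objects used in this walkthrough might look like the following (the TestTable columns come from the post itself; the procedure name GetTestTable and the column types are assumptions for illustration):
    -- Hypothetical MySQL definitions matching the walkthrough
    CREATE TABLE TestTable (
        Name    VARCHAR(50),
        Sex     VARCHAR(10),
        Married VARCHAR(5)
    );
    DELIMITER //
    CREATE PROCEDURE GetTestTable()
    BEGIN
        SELECT * FROM TestTable;
    END //
    DELIMITER ;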

    Read the article

  • Controlling soft errors and false alarms in SSIS

    - by Jim Giercyk
    If you are like me, you dread the 3AM wake-up call.  I would say that the majority of the pages I get are false alarms.  The alerts that require action often require me to open an SSIS package, see where the trouble is and try to identify the offending data.  That can be very time-consuming and can take quite a chunk out of my beauty sleep.  For those reasons, I have developed a simple error handling scenario for SSIS which allows me to rest a little easier.  Let me first say, this is a high level discussion; getting into the nuts and bolts of creating each shape is outside the scope of this document, but if you have an average understanding of SSIS, you should have no problem following along. In the Data Flow above you can see that there is a caution triangle.  For the purpose of this discussion I am creating a truncation error to demonstrate the process, so this is to be expected.  The first thing we need to do is to redirect the error output.  Double-clicking on the Query shape presents us with the properties window for the input.  Simply set the columns that you want to redirect to Redirect Row in the dropdown box and hit Apply. Without going into a dissertation on error handling, I will just note that you can decide which errors you want to redirect on Error and on Truncation.  Therefore, to override this process for a column or condition, simply do not redirect that column or condition. The next thing we want to do is to add some information about the error; specifically, the name of the package which encountered the error and which step in the package wrote the record to the error table.  REMEMBER: If you redirect the error output, your package will not fail, so you will not know where the error record was created without some additional information.    I added 3 columns to my error record; Severity, Package Name and Step Name.  Severity is just a free-form column that you can use to note whether an error is fatal, whether the package is part of a test job and should be ignored, etc.  Package Name and Step Name are system variables. In my package I have created a truncation situation, where the firstname column is 50 characters in the input, but only 4 characters in the output.  Some records will pass without truncation, others will be sent to the error output.  However, the package will not fail. We can see that of the 14 input rows, 8 were redirected to the error table. This information can be used by another step or another scheduled process or triggered to determine whether an error should be sent.  It can also be used as a historical record of the errors that are encountered over time.  There are other system variables that might make more sense in your infrastructure, so try different things.  Date and time seem like something you would want in your output for example.  In summary, we have redirected the error output from an input, added derived columns with information about the errors, and inserted the information and the offending data into an error table.  The error table information can be used by another step or process to determine, based on the error information, what level alert must be sent.  This will eliminate false alarms, and give you a head start when a genuine error occurs.
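    To make the three added columns concrete, the Derived Column transformation can be configured roughly as follows (a sketch, not a screenshot from the post: the Severity literal is free-form, while @[System::PackageName] and @[System::TaskName] are the SSIS system variables referred to above, with TaskName identifying the data flow task that wrote the error row):
    Derived Column    Expression
    Severity          "WARNING"                 (free-form literal; match it to your alerting rules)
    PackageName       @[System::PackageName]
    StepName          @[System::TaskName]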

    Read the article

< Previous Page | 334 335 336 337 338 339 340 341 342 343 344 345  | Next Page >