Search Results

Search found 1995 results on 80 pages for 'flow'.

Page 9/80

  • How can I rewrite this (cleanly) without gotos?

    - by Jared P
    How can I do this cleanly without gotos? loop: if(condition1){ something(); } else if (condition2) { somethingDifferent(); } else { mostOfTheWork(); goto loop; } I'd prefer not to use breaks as well. Furthermore, it is expected to loop several (avg. 40) times before doing something else, so the mostOfTheWork part would most likely be as high up as possible, even if just for readability. Thanks in advance.
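    One goto-free shape, as a hedged sketch in C# (Condition1, Condition2 and the action methods are hypothetical stand-ins for the real predicates and actions): invert the loop so that mostOfTheWork repeats while neither exit condition holds, then branch exactly once afterwards. This also puts mostOfTheWork at the top, as the question asks, but it assumes the conditions are cheap and side-effect free, since they get re-evaluated.

    using System;

    class Rewrite
    {
        // Hypothetical stand-ins for the original predicates and actions.
        static bool Condition1() { return false; }
        static bool Condition2() { return true; }
        static void Something() { Console.WriteLine("something"); }
        static void SomethingDifferent() { Console.WriteLine("somethingDifferent"); }
        static void MostOfTheWork() { Console.WriteLine("mostOfTheWork"); }

        static void Main()
        {
            // Keep doing the main work while neither exit condition holds...
            while (!Condition1() && !Condition2())
            {
                MostOfTheWork();
            }
            // ...then dispatch exactly once on whichever condition stopped the loop.
            if (Condition1())
                Something();
            else
                SomethingDifferent();
        }
    }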

    Read the article

  • Oracle Functional Testing Suite Advanced Pack for Oracle EBS Now Available

    - by Anne Carlson (Oracle Development)
    There’s news about automated testing of E-Business Suite using the Oracle Application Testing Suite, a.k.a. “OATS”. E-Business Suite Development is pleased to announce the availability of the new Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite. The new pack, available with the latest release of Oracle Application Testing Suite (12.4.0.2), provides pre-built test components and flows to automate the in-depth testing of Oracle E-Business Suite applications. Designed for use with the Oracle Application Testing Suite and its Oracle Flow Builder capability, these pre-built components and flows can help Oracle E-Business Suite customers to significantly reduce the time and effort needed to create and maintain automated test scripts. The Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite is available now for EBS 12.1.3, and availability for EBS 12.2 is planned. Some Background on Automating Testing with Oracle Application Testing Suite and Oracle Flow Builder: Testing complex packaged applications like Oracle E-Business Suite can be time-consuming and challenging for organizations, hampering their ability to upgrade to the latest releases or apply the latest patches. Oracle Application Testing Suite offers organizations a unique and powerful testing platform for Oracle E-Business Suite and other Oracle applications. With the 12.3.0.1 release of Oracle Application Testing Suite, we introduced the Oracle Flow Builder testing framework and accompanying starter pack of pre-built test components and flows. The starter pack, which contains over 2000 components and 200 flows, provides broad coverage of commonly-used base functionality and is designed to jump-start the test automation effort. Using Oracle Flow Builder, even non-technical testers can create working test scripts using the pre-built components that Oracle provides. Each component represents an atomic test operation such as “create an invoice batch” or “apply an invoice hold.” Testers can assemble the pre-built components into test flows, and combine test flows with spreadsheet data to drive the testing of multiple data conditions. The Oracle Flow Builder framework allows customers to add, modify and extend the pre-built components to address new functionality and customizations of the Oracle E-Business Suite. Using Oracle Flow Builder’s component-based test generation framework instead of a traditional record/playback approach has allowed the EBS Quality Assurance team to reduce their test automation effort by 60%. E-Business Suite customers can significantly reduce their test automation effort using Oracle Application Testing Suite with Oracle Flow Builder and the pre-built test components and flows that Oracle provides. Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite Improves Test Coverage: With the Oracle Application Testing Suite 12.4.0.2 and the new Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite, we are now delivering a significant number of additional test components and flows beyond those contained in the Oracle Flow Builder starter pack.
These additional test components and flows provide 70-80% test coverage and enable the automation of detailed and complex test flows across the following Oracle E-Business Suite products: Oracle Asset Lifecycle Management, Oracle Channel Revenue Management, Oracle Discrete Manufacturing, Oracle Incentive Compensation, Oracle Lease and Finance Management, Oracle Process Manufacturing, Oracle Procurement, Oracle Project Management, Oracle Property Manager, and Oracle Service. Downloads: You can download the Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite from the Oracle Technology Network. References: Oracle Applications Testing Suite; YouTube: Oracle Flow Builder Training; YouTube: Oracle Applications Testing Suite and Flow Builder Demonstration; Oracle Functional Testing Suite Advanced Pack Readme for E-Business Suite (Note 1905989.1). Related Articles: Automate Testing Using Oracle Application Testing Suite with Flow Builder for E-Business Suite; EBS 12.1.1 Test Starter Kit Now Available for Oracle Applications Testing Suite; Oracle Application Testing Suite 9.0 Supported with Oracle E-Business Suite; Using the Oracle Application Testing Suite with EBS: Interim Update #1

    Read the article

  • Does the concept of flow apply to TCP as well as UDP?

    - by liv2hak
    I have a very large network trace file which contains both TCP and UDP packets. I want to find the flows in the trace file. For that I have a hash function which takes the source IP address, destination IP address, source port, destination port, and protocol. In the case of TCP I can understand that a flow means all the packets which have the same 5 parameters. But what does it mean in the case of UDP? How does the concept of a flow apply to UDP? I am a novice in packet processing.
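    For what it's worth, the same five-tuple works for UDP: since UDP has no handshake or teardown, a UDP flow is conventionally just all datagrams sharing the five-tuple, usually bounded by an idle timeout (a maximum gap between packets) rather than by SYN/FIN markers. A minimal sketch of such a flow key in C#, with all names my own invention:

    using System;
    using System.Net;

    // One flow = one 5-tuple, for TCP and UDP alike.
    struct FlowKey : IEquatable<FlowKey>
    {
        public IPAddress SrcIp;
        public IPAddress DstIp;
        public ushort SrcPort;
        public ushort DstPort;
        public byte Protocol; // 6 = TCP, 17 = UDP

        public bool Equals(FlowKey other)
        {
            return SrcIp.Equals(other.SrcIp) && DstIp.Equals(other.DstIp)
                && SrcPort == other.SrcPort && DstPort == other.DstPort
                && Protocol == other.Protocol;
        }

        public override bool Equals(object obj)
        {
            return obj is FlowKey && Equals((FlowKey)obj);
        }

        public override int GetHashCode()
        {
            // Cheap combining hash over the five tuple fields.
            return SrcIp.GetHashCode() ^ DstIp.GetHashCode()
                ^ (SrcPort << 16) ^ DstPort ^ Protocol;
        }
    }

    Packets from the trace can then be bucketed with a Dictionary keyed on FlowKey; for UDP, a new bucket is typically started whenever the gap since the flow's last packet exceeds the chosen idle timeout.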

    Read the article

  • How to split and dispatch an async control-flow using Continuations?

    - by hotzen
    Hello, I have an asynchronous control-flow like the following: ActorA ! DoA(dataA, callback1, callbackOnErrorA) def callback1() = { ... ActorB ! DoB(dataB, callback2, callbackOnErrorB) } def callback2() = { ActorC ! DoC(dataC, callback3, callbackOnErrorC) } ... How would I divide this flow into several parts (continuations) and sequentially dispatch these to different actors (or threads/tasks) while maintaining the overall state? Any hint appreciated, Thanks
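    Not an actors answer, but the same idea sketched in C# for comparison (all names hypothetical): reify each callback as a delegate over an explicit state object, then let a tiny dispatcher chain them, so each stage can be handed to a different worker while the accumulated state is threaded through.

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    class FlowState { public string Log = ""; } // state carried across the whole flow

    class Dispatcher
    {
        static void Main()
        {
            // Each continuation takes the state so far and returns the updated state;
            // these stand in for the DoA/DoB/DoC actor messages.
            var stages = new List<Func<FlowState, FlowState>>
            {
                s => { s.Log += "A"; return s; },
                s => { s.Log += "B"; return s; },
                s => { s.Log += "C"; return s; },
            };

            // Chain the stages; each ContinueWith may run on a different worker thread.
            Task<FlowState> flow = Task.FromResult(new FlowState());
            foreach (var stage in stages)
                flow = flow.ContinueWith(t => stage(t.Result)); // error callbacks elided

            Console.WriteLine(flow.Result.Log); // prints "ABC"
        }
    }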

    Read the article

  • BAM Data Control in multiple ADF Faces Components

    - by [email protected]
    As we know, Oracle BAM data control instance sharing is not supported. When two or more ADF Faces components must display the same data and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow, and set the Data Control Scope to isolated. This blog will show a small sample to demonstrate this. In this sample we will create a Pie and a Bar chart using the same BAM data control, such that both components use the same data control but have isolated scope. This sample can be downloaded from Sample1.zip. Set-up: Create a BAM data control using the Employees DO (sample). Steps: Right click on the View Controller project and select "New->ADF Task Flow". Check "Create Bounded Task Flow", give some meaningful name (ex: EmpPieTF.xml) to the Task Flow (TF) and click on "OK". From the "Components Palette", drag and drop "View" into the task flow diagram. Give a meaningful name to the view. Double click and click "OK" for "Create New JSF Page Fragment". From "Data Controls" drag and drop "Employees->Query" into this jsff page as "Graph->Pie" (Pie: Sales_Number and Slices: Salesperson). Repeat steps 1 through 4 for another Task Flow (ex: EmpBarTF). From "Data Controls" drag and drop "Employees->Query" into this jsff page as "Graph->Bar" (Bars: Sales_Number and X-axis: Salesperson). Open the Task Flow created in step 2. In the Structure Pane, right click on "Task Flow Definition - EmpPieTF". Click "Insert inside Task Flow Definition - EmpPieTF -> ADF Task Flow -> Data Control Scope". Click "OK". For the "Data Control Scope", in the Property Inspector -> General section, change the data control scope from Shared to Isolated. Repeat steps 8 through 11 for the 2nd Task Flow created. Now create a new jspx page, example: Main.jspx. Drag and drop both Task Flows (ex: "EmpPieTF" and "EmpBarTF") as regions. Surround with panel components as needed. Run the page Main.jspx. Now when the page runs, although both components are created using the same data control, the bindings are not shared and each component will have a separate instance of the data control.

    Read the article

  • Download - Upload is too slow on CentOS

    - by Mehdi
    My download/upload speed both into and out of the server is too slow (around 50 KB/s)! Did I miss some configuration? Some information: CentOS release 6.3 uptime load average: 0.17, 0.32, 0.37 Memory free -m total used free shared buffers cached Mem: 24009 21988 2021 0 806 18098 -/+ buffers/cache: 3083 20926 Swap: 4095 28 4067 lshw -C network *-network description: Ethernet interface product: 82574L Gigabit Network Connection vendor: Intel Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 00 serial: 00:25:90:70:17:4a size: 100MB/s capacity: 1GB/s width: 32 bits clock: 33MHz capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=off broadcast=yes driver=e1000e driverversion=1.9.5-k duplex=full firmware=2.1-2 ip=108.175.8.123 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s resources: irq:16 memory:fb900000-fb91ffff ioport:e000(size=32) memory:fb920000-fb923fff ethtool eth0 Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supports auto-negotiation: Yes Advertised link modes: Not reported Advertised pause frame use: No Advertised auto-negotiation: No Speed: 100Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: off MDI-X: off Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000001 (1) Link detected: yes dmesg | grep e1000e e1000e: Intel(R) PRO/1000 Network Driver - 1.9.5-k e1000e: Copyright(c) 1999 - 2012 Intel Corporation. e1000e 0000:02:00.0: Disabling ASPM L0s e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 e1000e 0000:02:00.0: setting latency timer to 64 e1000e 0000:02:00.0: irq 33 for MSI/MSI-X e1000e 0000:02:00.0: irq 34 for MSI/MSI-X e1000e 0000:02:00.0: irq 35 for MSI/MSI-X e1000e 0000:02:00.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:25:90:70:17:4a e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection e1000e 0000:02:00.0: eth0: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e 0000:02:00.0: eth0: Unsupported Speed/Duplex configuration e1000e: eth0 NIC Link is Up 10 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e 0000:02:00.0: Disabling ASPM L1 e1000e 0000:02:00.0: eth0: changing MTU from 1500 to 9000 e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC
Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
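    One hedged observation from the output above: lshw reports a gigabit-capable NIC (capacity: 1GB/s) that has linked at only 100 Mb/s with autonegotiation switched off, and dmesg shows the link flapping, including one drop to 10 Mbps and an "Unsupported Speed/Duplex configuration" message. Forcing speed/duplex with autonegotiation off is the classic recipe for a duplex mismatch with the switch port, and a duplex mismatch typically shows up as exactly this kind of tens-of-KB/s throughput. Re-enabling autonegotiation with something like ethtool -s eth0 autoneg on, and checking the cable and switch port settings, would be a reasonable first experiment.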

    Read the article

  • Running Chromium Flow on a USB disk to boot on school computers?

    - by user38008
    So I am going to make a bootable USB drive of Hexxeh's ChromiumOS Flow. I want to be able to boot into it on my school computer. The computers use a Novell network and each student gets their own login. Can I use my USB drive to boot into ChromiumOS from the school's computers? I don't need to access the files on my Novell login. So can I even do this? Thanks.

    Read the article

  • Importing a CSV file to SQL Server...

    - by sam
    Hi guys, I am trying to import a CSV file into a SQL Server database with no success. I am still a newbie to SQL Server, thanks. Operation stopped... Initializing Data Flow Task (Success) Initializing Connections (Success) Setting SQL Command (Success) Setting Source Connection (Success) Setting Destination Connection (Success) Validating (Success) Messages Warning 0x80049304: Data Flow Task 1: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console. (SQL Server Import and Export Wizard) Prepare for Execute (Success) Pre-execute (Success) Messages Information 0x402090dc: Data Flow Task 1: The processing of file "D:\test.csv" has started. (SQL Server Import and Export Wizard) Executing (Error) Messages Error 0xc002f210: Drop table(s) SQL Task 1: Executing the query "drop table [dbo].[test] " failed with the following error: "Cannot drop the table 'dbo.test', because it does not exist or you do not have permission.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. (SQL Server Import and Export Wizard) Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column ""Code"" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard) Error 0xc020902a: Data Flow Task 1: The "output column ""Code"" (38)" failed because truncation occurred, and the truncation row disposition on "output column ""Code"" (38)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard) Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "D:\test.csv" on data row 21. (SQL Server Import and Export Wizard) Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Source - test_csv" (1) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. (SQL Server Import and Export Wizard) Copying to [dbo].[test] (Stopped) Post-execute (Success) Messages Information 0x402090dd: Data Flow Task 1: The processing of file "D:\test.csv" has ended. (SQL Server Import and Export Wizard) Information 0x402090df: Data Flow Task 1: The final commit for the data insertion in "component "Destination - test" (70)" has started. (SQL Server Import and Export Wizard) Information 0x402090e0: Data Flow Task 1: The final commit for the data insertion in "component "Destination - test" (70)" has ended. (SQL Server Import and Export Wizard) Information 0x4004300b: Data Flow Task 1: "component "Destination - test" (70)" wrote 0 rows. (SQL Server Import and Export Wizard)

    Read the article

  • How to implement login page using Spring Security so that it works with Spring web flow?

    - by simon
    I have a web application using Spring 2.5.6 and Spring Security 2.0.4. I have implemented a working login page, which authenticates the user against a web service. The authentication is done by defining a custom authentication manager, like this: <beans:bean id="customizedFormLoginFilter" class="org.springframework.security.ui.webapp.AuthenticationProcessingFilter"> <custom-filter position="AUTHENTICATION_PROCESSING_FILTER" /> <beans:property name="defaultTargetUrl" value="/index.do" /> <beans:property name="authenticationFailureUrl" value="/login.do?error=true" /> <beans:property name="authenticationManager" ref="customAuthenticationManager" /> <beans:property name="allowSessionCreation" value="true" /> </beans:bean> <beans:bean id="customAuthenticationManager" class="com.sevenp.mobile.samplemgmt.web.security.CustomAuthenticationManager"> <beans:property name="authenticateUrlWs" value="${WS_ENDPOINT_ADDRESS}" /> </beans:bean> The authentication manager class: public class CustomAuthenticationManager implements AuthenticationManager, ApplicationContextAware { @Transactional @Override public Authentication authenticate(Authentication authentication) throws AuthenticationException { //authentication logic return new UsernamePasswordAuthenticationToken(principal, authentication.getCredentials(), grantedAuthorityArray); } } The essential part of the login JSP looks like this: <c:url value="/j_spring_security_check" var="formUrlSecurityCheck"/> <form method="post" action="${formUrlSecurityCheck}"> <div id="errorArea" class="errorBox"> <c:if test="${not empty param.error}"> ${sessionScope["SPRING_SECURITY_LAST_EXCEPTION"].message} </c:if> </div> <label for="loginName"> Username: <input style="width:125px;" tabindex="1" id="login" name="j_username" /> </label> <label for="password"> Password: <input style="width:125px;" tabindex="2" id="password" name="j_password" type="password" /> </label> <input type="submit" tabindex="3" name="login" class="formButton" value="Login" /> </form> Now the problem is that the application should use Spring Web Flow. After the application was configured to use Spring Web Flow, the login does not work anymore - the form action to "/j_spring_security_check" results in a blank page without an error message. What is the best way to adapt the existing login process so that it works with Spring Web Flow?

    Read the article

  • Is there a free (as in beer) flow chart generator for COBOL code?

    - by btelles
    Hi, I've never read COBOL in my life and have been tasked with rewriting the old COBOL code in a new language. Are there any free or free-to-try software packages out there that will generate a flow chart for a COBOL program? I've looked at "Visustin" and "Code Visual to Flowchart". Visustin blanks out part of the code and does random rotations in the demo version, which causes the demo to be less accurate. I couldn't get Code Visual to Flowchart to work correctly with our code. Know of any other packages I might try?

    Read the article

  • When using TCP load balancing with HAProxy, does all outbound traffic flow through the LB?

    - by user122875
    I am setting up an app to be hosted using VMs (probably Amazon, but that is not set in stone) which will require both HTTP load balancing and load balancing a large number (50k or so if possible) of persistent TCP connections. The amount of data is not all that high, but updates are frequent. Right now I am evaluating load balancers and am a bit confused about the architecture of HAProxy. If I use HAProxy to balance the TCP connections, will all the resulting traffic have to flow through the load balancer? If so, would another solution (such as LVS or even nginx_tcp_proxy_module) be a better fit?

    Read the article

  • How do you reach a "flow" state while programming?

    - by acrosman
    I'm not talking about program flow, but as in the state of working called flow, the state where you can get great work done the most effectively. I find that my current work environment, while good in many ways, does not allow me to get into a good state of mind for writing code most of the time (my job includes many other functions). If it's critical to get something done I'll often put on headphones with classical music and try to drown out the office noise around me (and discourage co-workers from asking me questions). I am best able to get work done late in the evening when the house is quiet and I've been thinking about the project for most of the day. What tricks have you found when working in less than perfect office environments?

    Read the article

  • Is it possible to transfer data between HTML pages driven by Spring Web Flow?

    - by Easwaramoorthy Kanagaraj
    I am aware of passing data between JSPs in Spring Web Flow. Is it possible to transfer data between HTML pages driven by Spring Web Flow? I don't want to use the HTML5 local storage capabilities. Example: Page 1: Search box for an employee id. Page 2: Search result for the employee details. Two ways that I can think of: Get the employee details in page 1 by ajax and pass the result to page two. Pass the employee id to page 2 and get the result by ajax in onload. In both cases I need to pass a variable/data. I am confused about how to do this. Is there anything in Spring Web Flow that I could use to do this? Thanks in advance, Easwar

    Read the article

  • Export Import error 'SSIS Data Flow Task could not be created' ... registering DTSPipeline.dll, cannot create task "STOCK:PipelineTask"

    - by Moin Zaman
    I'm about to throw in the towel on this one. Running SQL Server 2008 Enterprise on Windows 7 x64. Can't get past this issue. When I try to Import / Export Data from databases through SQL Server Management Studio I get the following error: TITLE: SQL Server Import and Export Wizard ------------------------------ The SSIS Data Flow Task could not be created. Verify that DTSPipeline.dll is available and registered. The wizard cannot continue and it will terminate. ------------------------------ ADDITIONAL INFORMATION: Cannot create a task with the name "STOCK:PipelineTask". Verify that the name is correct. ({0194F10C-9860-4A4F-AF8B-DE7EFD89859F}) I have tried many solutions found via Google, but none of them have worked. A side issue that may be related is when I try to create an Integration Services Project in Business Intelligence Studio I get a 'project creation failed' error.
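    A hedged suggestion, given that the wizard is complaining specifically about DTSPipeline.dll registration: re-registering the DLL from an elevated command prompt with something like regsvr32 "C:\Program Files\Microsoft SQL Server\100\DTS\Binn\DTSPipeline.dll" (path assumed for a default SQL Server 2008 install) is a commonly reported fix, as is checking that the client tools and Integration Services components are the same build and bitness on Windows 7 x64.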

    Read the article

  • How can I include a .eps figure within a TikZ simple flow chart?

    - by Jan
    Hi, I would like to create a simple flow chart in LaTeX with the TikZ package, similar to the following example: http://www.texample.net/tikz/examples/simple-flow-chart/ However, I would like to include figures (a time series plot created in R, as EPS or something else) within the flowchart (for example, within a {block})? \documentclass{article} \usepackage[latin1]{inputenc} \usepackage{tikz} \usetikzlibrary{shapes,arrows} \begin{document} \pagestyle{empty} % Define block styles \tikzstyle{decision} = [diamond, draw, fill=blue!20, text width=4.5em, text badly centered, node distance=3cm, inner sep=0pt] \tikzstyle{block} = [rectangle, draw, fill=blue!20, text width=5em, text centered, rounded corners, minimum height=4em] \tikzstyle{line} = [draw, -latex'] \tikzstyle{cloud} = [draw, ellipse,fill=red!20, node distance=3cm, minimum height=2em] \begin{tikzpicture}[node distance = 2cm, auto] % Place nodes \node [block] (init) {initialize model}; \node [cloud, left of=init] (expert) {expert}; \node [cloud, right of=init] (system) {system}; \node [block, below of=init] (identify) {identify candidate models}; \node [block, below of=identify] (evaluate) {evaluate candidate models}; \node [block, left of=evaluate, node distance=3cm] (update) {update model}; \node [decision, below of=evaluate] (decide) {is best candidate better?}; \node [block, below of=decide, node distance=3cm] (stop) {stop}; % Draw edges \path [line] (init) -- (identify); \path [line] (identify) -- (evaluate); \path [line] (evaluate) -- (decide); \path [line] (decide) -| node [near start] {yes} (update); \path [line] (update) |- (identify); \path [line] (decide) -- node {no}(stop); \path [line,dashed] (expert) -- (init); \path [line,dashed] (system) -- (init); \path [line,dashed] (system) |- (evaluate); \end{tikzpicture} \end{document} Thanks, Jan
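    For the record, a node's contents can simply be an \includegraphics call, so one hedged way to get the plot into the chart (assuming the graphicx package is added to the preamble above, and using a hypothetical file name timeseries.eps; plain latex+dvips handles EPS natively, while pdflatex needs the figure converted, e.g. via epstopdf) is a sketch like:

    \usepackage{graphicx}  % added to the preamble

    % inside the tikzpicture, a block whose body is the figure:
    \node [block, right of=decide, node distance=4cm] (plot)
        {\includegraphics[width=5em]{timeseries}};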

    Read the article

  • InDesign and XML - how to auto flow into multiple pages with differing styles?

    - by MetaDan
    Hi guys, I've got a bit of a problem at the moment. I'm trying to work with InDesign (CS3) and XML. Basically I have a template which has 1 master DPS; both pages have the same data (fields 1-5) but one is left aligned, one right - hence mildly different paragraph styles. What I want to be able to do is import XML and have InDesign flow the data from the individual nodes into many pages. e.g. XML format: root day field1 field2 field3 field4 field5 day field1 field2 field3 field4 field5 day ... I can almost make this work by tagging the frames on the master pages, then creating pages and importing the XML; however it only flows the first 2 nodes into the pages repetitively for the total count of all the nodes. I can also almost make it work by creating a page from the untagged masters and then tagging the frames with the field1-5 tags and importing the XML. This populates the first page, however I then can't find a way to make the rest of the data flow into new pages... Am I missing something? Am I being a complete dumbass? If anyone can offer any help it will be greatly appreciated...

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
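As a hedged illustration using the AdventureWorks query described above, the hint simply hangs off the end of the source statement: SELECT DISTINCT Weight FROM Production.Product OPTION (FAST 10000);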
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You look for the numbers on the data flow, and watch operators go Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn’t the case at all – but it felt like it to SSIS.

    Read the article

  • Exploring packages in code

    In my previous post Searching for tasks with code you can see how to explore the control flow side of packages, drilling down through containers, tasks, and event handlers, but it didn’t cover the data flow. I recently saw a post on the MSDN forum asking how to edit an existing package programmatically, and the sticking point was how to find the data flow and the components inside. This post builds on some of the previous code and shows how you can explore all objects inside a package. I took the sample Task Search application I’d written previously, and came up with a totally pointless little console application that just walks through the package and writes out the basic type and name of every object it finds, starting with the package itself e.g. Package – MyPackage. The sample package we used last time showed nested objects as well as an event handler; an OnPreExecute event tucked away on the task SQL In FEL. The output of this sample tool would look like this: PackageObjects v1.0.0.0 (1.0.0.26627) Copyright (C) 2009 Konesans Ltd Processing File - Z:\Users\Darren Green\Documents\Visual Studio 2005\Projects\SSISTestProject\EventsAndContainersWithExecSQLForSearch.dtsx Package - EventsAndContainersWithExecSQLForSearch For Loop - FOR Counter Loop Task - SQL In Counter Loop Sequence Container - SEQ For Each Loop Wrapper For Each Loop - FEL Simple Loop Task - SQL In FEL Task - SQL On Pre Execute for FEL SQL Task Sequence Container - SEQ Top Level Sequence Container - SEQ Nested Lvl 1 Sequence Container - SEQ Nested Lvl 2 Task - SQL In Nested Lvl 2 Task - SQL In Nested Lvl 1 #1 Task - SQL In Nested Lvl 1 #2 Connection Manager – LocalHost The code is very similar to what we had previously, but there are a couple of extra bits to deal with connections and to look more closely at a task and see if it is a Data Flow task. For connections you just examine the package's Connections collection as shown in the abridged snippets below. First you can see the call to the ProcessConnections method, followed by the method itself. // Load the package file Application application = new Application(); using (Package package = application.LoadPackage(filename, null)) { // Write out the package name Console.WriteLine("Package - {0}", package.Name); ... More ... // Look at the connections ProcessConnections(package.Connections); } private static void ProcessConnections(Connections connections) { foreach (ConnectionManager connectionManager in connections) { Console.WriteLine("Connection Manager - {0}", connectionManager.Name); } } What we didn’t see in the sample output above was anything to do with the Data Flow, but rest assured the code now handles it too. The following snippet shows how each task is examined to see if it is a Data Flow task, and if so we can then loop through all of the components inside the data flow. private static void ProcessTaskHost(TaskHost taskHost) { if (taskHost == null) { return; } Console.WriteLine("Task - {0}", taskHost.Name); // Check if the task is a Data Flow task MainPipe pipeline = taskHost.InnerObject as MainPipe; if (pipeline != null) { ProcessPipeline(pipeline); } } private static void ProcessPipeline(MainPipe pipeline) { foreach (IDTSComponentMetaData90 componentMetadata in pipeline.ComponentMetaDataCollection) { Console.WriteLine("Pipeline Component - {0}", componentMetadata.Name); // If you wish to make changes to the component then you should really use the managed wrapper.
// CManagedComponentWrapper wrapper = componentMetadata.Instantiate(); // wrapper.SetComponentProperty("PropertyName", "Value"); } } Hopefully you can see how we get a reference to the Data Flow task, and then use the ComponentMetaDataCollection to find out what components we have inside the pipeline. If you wanted to know more about the component you could look at the ObjectType or ComponentClassID properties. After that it gets a bit harder and you should get a reference to the wrapper object as the comment suggests and start using the properties, just like you would in the create packages samples; see our Code Development category for some of these examples. Download Sample code project PackageObjects.zip (5KB)

    Read the article

  • Grails web flow: Possible to include the state name in the url?

    - by Kimble
    I'm playing around with Google Analytics goals / funnels. I would like to use this to track a checkout process, but the web flow URLs are formatted in a way that makes this very hard. Would it be possible to change: http://localhost:8080/checkout/checkout?execution=e1s2 .. into something more like this: http://localhost:8080/checkout/checkout/shipping?execution=e1s2 or anything else that would make them more trackable with Google Analytics?

    Read the article

  • ADF Taskflow Reentry-not-allowed and Reentry-allowed

    - by raghu.yadav
    Here is a sample usecase to demonstrate how the reentry-not-allowed and reentry-allowed properties work. What the doc says about these 2 properties: reentry-allowed: Reentry is allowed on any view activity within the ADF bounded task flow. reentry-not-allowed: Reentry of the ADF bounded task flow is not allowed. If you specify reentry-not-allowed on a task flow definition, an end user can still click the browser back button and return to a page within the bounded task flow. However, if the user does anything on the page such as clicking a button, an exception (for example, InvalidTaskFlowReentry) is thrown indicating the bounded task flow was reentered improperly. The actual reentry condition is identified upon the submit of the reentered page. Ingredients: main.jspx - Jobs_TF - jobs.jspx. Scenario: click the RunTrx button in main.jspx, which navigates to the jobs page by entering the Jobs taskflow. Click the jobs page back button to navigate back to main.jspx; now click the browser back button to navigate to jobs.jspx, and then click the jobs page back button to see the reentry-not-allowed error message.

    Read the article

  • Where can I find a description of the old British Standard structured flow charts?

    - by Steve314
    Some professional organisation defined these in, IIRC, the early 80s as similar to the more well-known flow charts, but "structured". Instead of having arbitrary "goto" arrows, they had the equivalent of loops etc. They were standardized, and I vaguely remember studying them briefly at O Level. Of course they were about as useful as the well-known chocolate teapot, but I'd still like to be able to find a reference guide for them if possible - for roughly the same reason I was looking for a reference for standard Basic a while back. Google tells me - well, nothing really. They may as well never have existed. Which is probably nearly (and perhaps completely) true - I certainly never heard of them anywhere else except when I was at school. There's a chance that they may even be my computer science teacher's little joke.

    Read the article

  • ASP.NET HttpWebResponse - how can I not depend on WebException for flow control?

    - by Campos
    I need to check whether the request will return a 500 Internal Server Error or not (so getting the error is expected). I'm doing this: HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest; request.Method = "GET"; HttpWebResponse response = request.GetResponse() as HttpWebResponse; if (response.StatusCode == HttpStatusCode.OK) return true; else return false; But when I get the 500 Internal Server Error, a WebException is thrown, and I don't want to depend on it to control the application flow - how can this be done?
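    For what it's worth, HttpWebRequest throws WebException for every non-success status, so strictly avoiding the exception isn't possible with this API. A common compromise (a hedged sketch, with the helper name my own invention) is to confine the catch to a single method: WebException.Response still carries the real HttpWebResponse, so the status code can be read from it and the rest of the application never branches on exceptions.

    using System;
    using System.Net;

    static class StatusProbe // hypothetical helper
    {
        public static HttpStatusCode GetStatus(string url)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "GET";
            try
            {
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    return response.StatusCode; // 2xx lands here
                }
            }
            catch (WebException ex)
            {
                HttpWebResponse error = ex.Response as HttpWebResponse;
                if (error == null)
                    throw; // no HTTP response at all (DNS/connection failure), so rethrow
                using (error)
                {
                    return error.StatusCode; // e.g. 500 lands here; the exception stays contained
                }
            }
        }
    }

    Callers then just test StatusProbe.GetStatus(url) == HttpStatusCode.OK and never see the WebException.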

    Read the article
