Search Results

Search found 977 results on 40 pages for 'biztalk mapper'.

Page 8/40 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Microsoft Advisory Services Engagement Scenario - BizTalk Server Performance Issues

    982896 ... Microsoft Advisory Services Engagement Scenario - BizTalk Server Performance Issues

    Read the article

  • Microsoft Advisory Services Engagement Scenario - BizTalk Server Solutions Design

    982880 ... Microsoft Advisory Services Engagement Scenario - BizTalk Server Solutions Design

    Read the article

  • Taking the data mapper approach in Zend Framework

    - by Seeker
    Let's assume the following tables setup for a Zend Framework app. user (id) groups (id) groups_users (id, user_id, group_id, join_date) I took the Data Mapper approach to models, which basically gives me: Model_User, Model_UsersMapper, Model_DbTable_Users Model_Group, Model_GroupsMapper, Model_DbTable_Groups Model_GroupUser, Model_GroupsUsersMapper, Model_DbTable_GroupsUsers (for holding the relationships, which can be seen as entities; notice the "join_date" property) I'm defining the _referenceMap in Model_DbTable_GroupsUsers: protected $_referenceMap = array ( 'User' => array ( 'columns' => array('user_id'), 'refTableClass' => 'Model_DbTable_Users', 'refColumns' => array('id') ), 'App' => array ( 'columns' => array('group_id'), 'refTableClass' => 'Model_DbTable_Groups', 'refColumns' => array('id') ) ); I have these design problems in mind: 1) The Model_Group only mirrors the fields in the groups table. How can I return a collection of groups a user is a member of, along with the date the user joined each group? If I just added the property to the domain object, then I'd have to let the group mapper know about it, wouldn't I? 2) Let's say I need to fetch the groups a user belongs to. Where should I put this logic? Model_UsersMapper or Model_GroupsUsersMapper? I also want to make use of the reference map (dependent tables) mechanism and probably use findManyToManyRowset or findDependentRowset, something like: $result = $this->getDbTable()->find($userId); $row = $result->current(); $groups = $row->findManyToManyRowset( 'Model_DbTable_Groups', 'Model_DbTable_GroupsUsers' ); This would produce two queries when I could have written it as a single query. I will place this in the Model_GroupsUsersMapper class. An enhancement would be to add a getGroups method to the Model_User domain object which lazily loads the groups when needed by calling the appropriate method in the data mapper, which leads back to the second question: should I allow the domain object to know about the data mapper?

    Read the article

  • How to safely test a BizTalk app by manipulating the Windows OS system time without breaking Active Directory?

    - by melaos
    I have a BizTalk and Windows-service-based middleware application which talks to other systems. Recently we had a request to test scenarios which relate to dates. As we have a lot of places in the application which use the .NET DateTime.Now value, we don't really want to go into the code level and change all of those values, so we're looking at the simplest way to test, which is to just change the OS time. But what we notice is that sometimes when we change the system date/time, we get account lockouts due to Active Directory. So my question is: what's a good and safe way to test future dates, etc. by changing the Windows OS system date/time without causing any issues with Active Directory? And where can I find out more about AD, how it issues tokens, and how that correlates with system date/time changes. Thanks! ~m
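    For context, the code-level alternative the poster wants to avoid usually looks like the sketch below: production code asks an injected clock for the time and tests substitute a fixed date, so neither the OS clock nor Active Directory is touched. The interface and class names here are hypothetical, not part of the original question.
    using System;

    // Hypothetical clock abstraction; production code depends on ISystemClock instead of DateTime.Now.
    public interface ISystemClock
    {
        DateTime Now { get; }
    }

    // Used at runtime: simply forwards to the real system clock.
    public sealed class SystemClock : ISystemClock
    {
        public DateTime Now { get { return DateTime.Now; } }
    }

    // Used in tests: returns whatever "future" date the scenario needs.
    public sealed class FixedClock : ISystemClock
    {
        private readonly DateTime _now;
        public FixedClock(DateTime now) { _now = now; }
        public DateTime Now { get { return _now; } }
    }
    Swapping DateTime.Now for a clock like this is invasive when it is used in many places, which is exactly the trade-off the question describes.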

    Read the article

  • What happened to the Windows "Midi Mapper"

    - by interstar
    I wrote a Windows program many years ago which created music by sending notes to the "midi mapper" (and thence to the midi-synth on my sound card). Today, I have a soft-synth which allegedly accepts midi information, so I'd assume it should be possible to use today's equivalent of a midi mapper to route the midi output from my program to the soft-synth. There's clearly no longer a midi-mapper application in Windows, but my program still works (on XP) in that it drives the built-in soundcard synth, so there must be some sort of midi handling layer in Windows. How can I get at this? And maybe redirect the midi to the soft-synth?
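    If you want to poke at that layer directly: the old MIDI Mapper sat on top of the winmm multimedia API, which is still there. The sketch below (assumptions: .NET on Windows, and that your soft-synth registers itself as a MIDI output device) just enumerates the available MIDI output devices; once you know the soft-synth's device index you can open it with midiOutOpen and send notes to it instead of relying on a mapper.
    using System;
    using System.Runtime.InteropServices;

    class MidiOutDevices
    {
        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        struct MIDIOUTCAPS
        {
            public ushort wMid;
            public ushort wPid;
            public uint vDriverVersion;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
            public string szPname;          // device name, e.g. the soft-synth
            public ushort wTechnology;
            public ushort wVoices;
            public ushort wNotes;
            public ushort wChannelMask;
            public uint dwSupport;
        }

        [DllImport("winmm.dll")]
        static extern uint midiOutGetNumDevs();

        [DllImport("winmm.dll", CharSet = CharSet.Unicode)]
        static extern uint midiOutGetDevCapsW(UIntPtr deviceId, out MIDIOUTCAPS caps, uint cbCaps);

        static void Main()
        {
            uint count = midiOutGetNumDevs();
            for (uint i = 0; i < count; i++)
            {
                MIDIOUTCAPS caps;
                if (midiOutGetDevCapsW((UIntPtr)i, out caps, (uint)Marshal.SizeOf(typeof(MIDIOUTCAPS))) == 0)
                {
                    Console.WriteLine("{0}: {1}", i, caps.szPname);
                }
            }
        }
    }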

    Read the article

  • BizTalk: Instance Subscription: Details

    - by Leonid Ganeline
    It has interesting behavior, and it is not always what we expect. An orchestration can be enlisted with many subscriptions. In other words, it can have several Receive shapes. Usually the first Receive uses the Activation subscription, but the other Receives create Instance subscriptions. [See “Publish and Subscribe Architecture” in MSDN] Here is a sample process. This orchestration has two Receives. It is a typical Sequential Convoy. [See "BizTalk Server 2004 Convoy Deep Dive" in MSDN by Stephen W. Thomas]. Let's get the experiment started. There are three typical scenarios. First scenario: everything is OK. The Activation subscription for the Sample message is created when the SampleProcess orchestration is enlisted. The Instance subscription is created only when the SampleProcess orchestration instance is started, and it is removed when the orchestration instance is ended. So far so good: the Message_2 was delivered exactly in this time interval and was consumed. Second scenario: no consumers. Three Sample_2 messages were delivered. One was delivered before the SampleProcess was started and before the instance subscription was created. The second message was delivered in the correct time interval. The third one was delivered after the SampleProcess orchestration was ended and the instance subscription was removed. Note: the first Sample_2 was not consumed. It was first in the queue, but it did not wait: it was suspended when it was delivered to the Message Box, because it didn’t have any subscribers at that moment. The first and the last Sample_2 messages were Suspended (Nonresumable) in the Message Box. For each of these messages we got two (!) service instances associated with the suspended message. One service instance has the ServiceClass of Messaging, and we can see its Error Description. The second service instance has the ServiceClass of RoutingFailureReport, and we can see its Error Description. Third scenario: something goes wrong. Two Sample_2 messages were delivered. Both were delivered in the same interval, when the SampleProcess orchestration was working and the instance subscription was created and working too. The first Sample_2 was consumed. The second Sample_2 has the subscription, but the subscriber, the SampleProcess orchestration, will not consume it. After the SampleProcess orchestration is ended (and only after! I will discuss this in the next article), it is suspended (Nonresumable). This time only one service instance associated with this scenario is suspended. This service instance has the ServiceClass of Orchestration, and we can see its Error Description. In the Message tab we will see the Sample_2 message in the Suspended (Resumable) status. Note: this behavior looks ambiguous. We see here that the orchestration consumes the extra message(s) and gets suspended together with those extra messages. These messages are not consumed in terms of being “processed by the orchestration”, but they are consumed in terms of being “delivered to the subscriber”. The Receive shape in the orchestration does not receive these extra messages, but the messages are routed to the orchestration. Unified Sequential Convoy. Now one more scenario: the unified sequential convoy. That means the activation subscription is for the same message type as the instance subscription. The Sample_2 message is now the Sample message. For simplicity the SampleProcess orchestration consumes only two Sample messages.
Usually the orchestration consumes a lot of messages inside a loop, but here it is only two of them. The first message starts the orchestration; the second message goes inside this orchestration. Then the next pair of messages follows, and so on. But if the input messages arrive at shorter intervals we have a problem: we lose messages in an unpredictable manner. Note: maybe the better behavior would be for the orchestration to remove the instance subscription after the message is consumed, not at the end of the orchestration. Right now this is a “feature” of the BizTalk subscription mechanism.

    Read the article

  • BizTalk – Mapping repeating EDI segments using a Table Looping functoid

    - by Bill Osuch
    BizTalk’s HIPAA X12 schemas have several repeating date/time segments in them, where the XML winds up looking something like this: <DTM_StatementDate> <DTM01_DateTimeQualifier>232</DTM01_DateTimeQualifier> <DTM02_ClaimDate>20120301</DTM02_ClaimDate> </DTM_StatementDate> <DTM_StatementDate> <DTM01_DateTimeQualifier>233</DTM01_DateTimeQualifier> <DTM02_ClaimDate>20120302</DTM02_ClaimDate> </DTM_StatementDate> The corresponding EDI segments would look like this: DTM*232*20120301~ DTM*233*20120302~ The DateTimeQualifier element indicates whether it’s the start date or end date – 232 for start, 233 for end. So in this example (an X12 835) we’re saying the statement starts on 3/1/2012 and ends on 3/2/2012. When you’re mapping from some other data format, many times your start and end dates will be within the same node, like this: <StatementDates> <Begin>20120301</Begin> <End>20120302</End> </StatementDates> So how do you map from that and create two repeating segments in your destination map? You could connect both the <Begin> and <End> nodes to a looping functoid, and connect its output to <DTM_StatementDate>, then connect both <Begin> and <End> to <DTM_StatementDate> … this would give you two repeating segments, each with the correct date, but how to add the correct qualifier? The answer is the Table Looping Functoid! To test this, let’s create a simplified schema that just contains the date fields we’re mapping. First, create your input schema: And your output schema: Now create a map that uses these two schemas, and drag a Table Looping functoid onto it. The first input parameter configures the scope (or how many times the records will loop), so drag a link from the StatementDates node over to the functoid. Yes, StatementDates only appears once, so this would make it seem like it would only loop once, but you’ll see in just a minute. The second parameter in the functoid is the number of columns in the output table. We want to fill two fields, so just set this to 2. Now drag the Begin and End nodes over to the functoid. Finally, we want to add the constant values for DateTimeQualifier, so add a value of 232 and another of 233. When all your inputs are configured, it should look like this: Now we’ll configure the output table. Click on the Table Looping Grid, and configure it to look like this: Microsoft’s description of this functoid says “The Table Looping functoid repeats with the looping record it is connected to. Within each iteration, it loops once per row in the table looping grid, producing multiple output loops.” So here we will loop (# of <StatementDates> nodes) * (Rows in the table), or 2 times. Drag two Table Extractor functoids onto the map; these are what are going to pull the data we want out of the table. The first input to each of these will be the output of the TableLooping functoid, and the second input will be the row number to pull from. So the functoid connected to <DTM01_DateTimeQualifier> will look like this: Connect these two functoids to the two nodes we want to populate, and connect another output from the Table Looping functoid to the <DTM_StatementDate> record. You should have a map that looks something like this: Create some sample xml, use it as the TestMap Input Instance, and you should get a result like the XML at the top of this post. Technorati Tags: BizTalk, EDI, Mapping

    Read the article

  • Call Web Service via BizTalk Orchestration via received file

    - by BiZTech Know
    This example shows how some business logic can be implemented by receiving a file into a BizTalk Orchestration and calling a Web Service. The results of the Web Service call are determined by the contents of the incoming file, and the response message is constructed accordingly. The response message is also saved down to the local file system.

    Read the article

  • SAB BizTalk Archiving Pipeline Component v0.2

    - by Stuart Brierley
    Just released to Codeplex is an updated version of my archiving pipeline component for BizTalk. The changes in this release are: Addition of FTP adapter macros to the base macros and File adapter macros. Fix for the issue of garbage collection of data streams within pipelines, as discussed in this previous blog entry. Now looks for OutboundTransportType in addition to InboundTransportType to pick up the send port transport type; the %InboundTransportType% macro has therefore changed to %TransportType%. An initial outline of the project can be read here.

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be that you use .ToList() somewhere in a Linq query which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's an RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.
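    Picking one concrete item from the list above, the .ToList() pitfall is easy to reproduce. The sketch below uses an in-memory stand-in for the mapper's IQueryable source (the Customer class and data are made up for illustration); the point is only where the filter runs, not any specific O/R mapper API.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class ToListPitfall
    {
        class Customer { public string Country { get; set; } }

        static void Main()
        {
            // Stand-in for metaData.Customers; a real O/R mapper would expose an IQueryable<Customer>.
            IQueryable<Customer> customers = new List<Customer>
            {
                new Customer { Country = "Norway" },
                new Customer { Country = "Sweden" }
            }.AsQueryable();

            // Pitfall: .ToList() materializes every customer first, so the Where clause
            // runs in memory over IEnumerable<T> instead of being translated by the provider.
            var inMemory = from c in customers.ToList()
                           where c.Country == "Norway"
                           select c;

            // Keeping the query on IQueryable<T> lets the provider push the filter to the source.
            var onProvider = from c in customers
                             where c.Country == "Norway"
                             select c;

            Console.WriteLine(inMemory.Count());    // 1, but only after fetching everything
            Console.WriteLine(onProvider.Count());  // 1, with the filter applied at the source
        }
    }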

    Read the article

  • Fixing up Configurations in BizTalk Solution Files

    - by Elton Stoneman
    Just a quick one this, but useful for mature BizTalk solutions, where over time the configuration settings can get confused, meaning Debug configurations building in Release mode, or Deployment configurations building in Development mode. That can cause issues in the build which aren't obvious, so it's good to fix up the configurations. It's time-consuming in VS or in a text editor, so this bit of PowerShell may come in useful - just substitute your own solution path in the $path variable:
    $path = 'C:\x\y\z\x.y.z.Integration.sln'
    $backupPath = [System.String]::Format('{0}.bak', $path)
    [System.IO.File]::Copy($path, $backupPath, $True)
    $sln = [System.IO.File]::ReadAllText($path)
    $sln = $sln.Replace('.Debug|.NET.Build.0 = Deployment|.NET', '.Debug|.NET.Build.0 = Development|.NET')
    $sln = $sln.Replace('.Debug|.NET.Deploy.0 = Deployment|.NET', '.Debug|.NET.Deploy.0 = Development|.NET')
    $sln = $sln.Replace('.Debug|Any CPU.ActiveCfg = Deployment|.NET', '.Debug|Any CPU.ActiveCfg = Development|.NET')
    $sln = $sln.Replace('.Deployment|.NET.ActiveCfg = Debug|Any CPU', '.Deployment|.NET.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Any CPU.ActiveCfg = Debug|Any CPU', '.Deployment|Any CPU.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Any CPU.Build.0 = Debug|Any CPU', '.Deployment|Any CPU.Build.0 = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Mixed Platforms.ActiveCfg = Debug|Any CPU', '.Deployment|Mixed Platforms.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Mixed Platforms.Build.0 = Debug|Any CPU', '.Deployment|Mixed Platforms.Build.0 = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|.NET.ActiveCfg = Debug|Any CPU', '.Deployment|.NET.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Debug|.NET.ActiveCfg = Deployment|.NET', '.Debug|.NET.ActiveCfg = Development|.NET')
    [System.IO.File]::WriteAllText($path, $sln)
    The script creates a backup of the solution file first, and then fixes up all the configs to use the correct builds. It's a simple search and replace list, so if there are any patterns that need to be added let me know and I'll update the script. A RegEx replace would be neater, but when it comes to hacking solution files, I prefer the conservative approach of knowing exactly what you're changing.

    Read the article

  • Changing the BizTalk message output file name

    - by Bill Osuch
    By default, BizTalk creates the filename of the message dropped to a send port as %MessageID%, which is the unique identifier (GUID) of the message. What if you want to create your own filename? To start, create a simple schema, and a basic orchestration that will receive the message and send it right back out, like this: If you deploy this and wire up the ports, you can drop an xml file into your receive port and have it come out at your send port named something like {7A63CAF8-317B-49D5-871F-9FD57910C3A0}.xml. Now we'll create a new message with a custom filename. First, create a new orchestration variable called NewFileName, of the type System.String. Next, create a second message using the same schema as the message you're receiving in the Receive shape. Now, drag a Construct Message shape to the orchestration. In the shape's properties, set Messages Constructed to be the new message you just created. Double click the Message Assignment shape (inside the Construct shape...) and paste in the following code:
    Message_2 = Message_1;
    NewFileName = Message_1(FILE.ReceivedFileName);
    NewFileName = NewFileName.Replace(".xml","_");
    NewFileName = NewFileName + "output_" + System.DateTime.Now.Year.ToString() + "-" + System.DateTime.Now.Month.ToString();
    Message_2(FILE.ReceivedFileName) = NewFileName;
    Here we make a copy of the received message, get its original file name (ReceivedFileName), replace its extension with an underscore, and date-stamp it. Finally, add a Send shape and a Port to the surface, and configure them to send the message you just created. You should wind up with an orchestration like this: Deploy it, and create a new send port. It should be just about identical to the first send port, except this time the file name will be "%SourceFileName%.xml" (without the quotes of course). Fire up the application, drop in a test file, and you should now get both the xml file named with a GUID, and a second file named something along the lines of "MySchemaTestFile_output_2011-6.xml".
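    As a variation on the expression above, FILE.ReceivedFileName can carry the full path of the received file, so it is sometimes safer to strip the directory and extension with System.IO.Path before building the new name. This is just a sketch reusing the post's message and variable names; adjust the date format to taste.
    // Message Assignment shape (sketch)
    Message_2 = Message_1;

    // Strip any directory and the extension from the received file name.
    NewFileName = System.IO.Path.GetFileNameWithoutExtension(Message_1(FILE.ReceivedFileName));

    // Date-stamp it the same way the post does (year-month).
    NewFileName = NewFileName + "_output_" + System.DateTime.Now.ToString("yyyy-M");

    Message_2(FILE.ReceivedFileName) = NewFileName;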

    Read the article

  • BizTalk HL7 Receive Pipeline Exception

    - by Paul Petrov
    If you experience the sequence of errors below with BizTalk HL7 MLLP receive ports you may need to request a hotfix from Microsoft. The Knowledge Base article number is 2454887, but it's still not available on the KB site. The hotfix was recently released and you may need to open a support ticket to get it. It requires three other hotfixes installed: · 970492 (DASM 3.7.502.2) · 973909 (additional ACK codes) · 981442 (Microsoft.solutions.btahl7.mllp.dll 3.7.509.2) If the exceptions below repeatedly appear in the event log you most likely would be helped by the hotfix: Fatal error encountered in 2XDasm. Exception information is Cannot access a disposed object. Object name: 'CEventingReadStream'. There was a failure executing the receive pipeline: "BTAHL72XPipelines.BTAHL72XReceivePipeline, BTAHL72XPipelines, Version=1.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Source: "BTAHL7 2.X Disassembler" Receive Port: "ReceivePortName" URI: "IPAddress:portNumber" Reason: Cannot access a disposed object. Object name: 'CEventingReadStream'. The Messaging Engine received an error from transport adapter "MLLP" when notifying the adapter with the BatchComplete event. Reason "Object reference not set to an instance of an object." We've been through a lot of troubleshooting with Microsoft Product Support and they did a great job finding the issue and releasing a fix.

    Read the article

  • Ubuntu Device-mapper seems to be invincible!

    - by Andrew Bolster
    I'm working on a hopefully unrelated question and I've got to a strange situation. First: I know very little about the very low level hardware/kernel storage driver magic, so I'm hoping a) someone can help and b) someone can explain it to me better. I've been trying a dozen different configurations of my 2x500GB SATA drives over the past few hours, involving switching between AHCI/IDE/RAID in my BIOS; after each attempt I've reset the BIOS option, booted into a live CD, and deleted the partitions and rewritten the partition tables left on the drives. Now, however, I've been sitting with a /dev/mapper/nvidia_XXXXXXX1 that seems to be impossible to kill! It's the only 'partition' that I see in the Ubuntu install (but I can see the others in parted), it is only the size of one of the drives, and I know I did not set any RAID levels other than RAID0. Anyone have any ideas how I can kill this and get back to just two independent IDE drives? Or can anyone convince me of a reason to go the AHCI route? Many thanks in advance.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Receive Location

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter and also about using the ODBC adapter to generate schemas.  But what about creating a receive location? An ODBC receive location will periodically poll the configured database using the stored procedure or SQL string defined in your request schema. If you need to, begin by adding a new receive port to your BizTalk configuration. Create a new receive location, select the ODBC adapter, and click Address. You will now be shown the ODBC Community Adapter Transport properties window.  Select the connection string and you will be shown the Choose Data Source window.  If you have already created the Test Database source when generating a schema from ODBC this will be shown (if not, go and take a look at my previous post to see how this is done).   You will then need to choose the SQL command that will be run by the receive port.  In this case I have deployed the Test Mapping schemas that I created previously and selected the Request schema. You should now have populated the appropriate properties for the ODBC Community Adapter. Finally, set the standard receive location properties and your ODBC receive location is ready.

    Read the article

  • BizTalk 2009 - Scoped Record Counting in Maps

    - by StuartBrierley
    Within BizTalk there is a functoid called Record Count that will return the number of instances of a repeated record or repeated element that occur in a message instance. The input to this functoid is the record or element to be counted. As an example, take the following Source schema, where the Source message has a repeated record called Box and each Box has a repeated element called Item: An instance of this Source schema may look as follows: 2 Box records - one with 2 items and one with only 1 item. Our destination schema has a number of elements and a repeated box record.  The top level elements contain totals for the number of boxes and the overall number of items.  Each box record contains a single element representing the number of items in that box. Using the Record Count functoid it is easy to map the top level elements, producing the expected totals of 2 boxes and 3 items: We now need to map the total number of items per box, but how will we do this?  We have already seen that the Record Count functoid returns the total number of instances for the entire message, and unfortunately it does not allow you to specify a scoping parameter.  In order to achieve Scoped Record Counting we will need to make use of a combination of functoids. As you can see above, by linking to a Logical Existence functoid from the record/element to be counted we can then feed the output into a Value Mapping functoid.  Set the other Value Mapping parameter to "1" and link the output to a Cumulative Sum functoid. Set the other Cumulative Sum functoid parameter to "1" to limit the scope of the Cumulative Sum. This gives us the expected results of Items per Box of 2 and 1 respectively. I ran into this issue with a larger schema on a more complex map, but the eventual solution is still the same.  Hopefully this simplified example will act as a good reminder to me and save someone out there a few minutes of brain scratching.
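    Outside the mapper, the same "count Items per Box rather than per message" idea is a one-liner, which can be a handy way to sanity-check the functoid output. The sketch below is plain LINQ to XML against a hand-built instance shaped like the Source schema described above.
    using System;
    using System.Linq;
    using System.Xml.Linq;

    class ScopedCountCheck
    {
        static void Main()
        {
            // Hand-built instance matching the described Source schema: 2 boxes, with 2 and 1 items.
            var source = XElement.Parse(
                "<Source><Box><Item /><Item /></Box><Box><Item /></Box></Source>");

            // Scoped count: number of Item elements per Box record.
            var itemsPerBox = source.Elements("Box")
                                    .Select(box => box.Elements("Item").Count());

            foreach (int count in itemsPerBox)
            {
                Console.WriteLine(count);   // prints 2, then 1
            }
        }
    }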

    Read the article

  • BizTalk 2009 - Error when Testing Map with Flat File Source Schema

    - by StuartBrierley
    I have recently been creating some flat file schemas using the BizTalk Server 2009 Flat File Schema Wizard.  I have then been mapping these flat file schemas to a "normal" xml schema format. I had not previously had any cause to map flat files and ran into some trouble when testing the first of these flat file maps; with an instance of the flat file as the source it threw an XSL transform error: Test Map.btm: error btm1050: XSL transform error: Unable to write output instance to the following <file:///C:\Documents and Settings\sbrierley\Local Settings\Temp\_MapData\Test Mapping\Test Map_output.xml>. Data at the root level is invalid. Line 1, position 1. Due to the complexity of the map in question I decided to create a small test map using the same source and destination schemas to see if I could pinpoint the problem.  Although the source message instance validated correctly against the flat file schema, when I then tested this simplified map I got the same error. After a time of fruitless head scratching and some serious Google time I figured out what the problem was. Looking at the map properties I noticed that I had the test map input set to "XML" - for a flat file instance this should be set to "Native".

    Read the article

  • BizTalk Schema Validation

    - by Christopher House
    Perhaps this one should be filed under: Obvious. Yesterday I created a new schema that is going to be used for a WCF receive.  The schema has a bunch of restrictions in it, with the intention that we'd validate incoming messages against the schema.  I'd never done message validation with BizTalk but I knew the XmlDisassembler component had an option for validating, so I figured it would be a piece of cake.  Sadly, that was not to be the case.  I deployed my artifacts and configured my receive location's XmlDisassembler with what I thought to be the correct document spec name.  I entered My.Project.Name.SchemaTypeName for the document spec and started running unit tests.  All of them failed with the following error logged in the event log: "WcfReceivePort_BizTalkWcfService/PurchaseOrderService" URI: "/BizTalkWcfService/PurchaseOrderService.svc" Reason: No Disassemble stage components can recognize the data. I went to the receive port and turned on tracking, submitted another message, then went to the admin console and saved the message.  It looked correct, but just to be sure, I manually validated it against the schema in my project.  As expected, it validated correctly. After a bit of thinking on this, I realized that I probably needed to fully qualify my document spec name, meaning include the assembly name as well as the type name.  So, I went back to the receive location and changed the document spec to: My.Project.Name.SchemaTypeName, My.Project.Name, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx I re-ran my unit tests and everything was working as expected.  So, note to self:  remember to include the assembly name when setting the document spec.  If you need an easy way to determine your schema name and assembly name, find your schema in the admin console and go to its properties.  On the property screen, look at the Name and Assembly properties.  Your document spec will be "SchemaName, AssemblyName"
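    Another quick way to get the exact string, if you'd rather not dig through the admin console: ask the deployed schema type for its assembly-qualified name. The snippet below is a sketch; My.Project.Name.SchemaTypeName is the placeholder name used in the post, so substitute your own schema type and reference its assembly.
    using System;

    class DocumentSpecHelper
    {
        static void Main()
        {
            // Prints something like:
            // "My.Project.Name.SchemaTypeName, My.Project.Name, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx"
            Console.WriteLine(typeof(My.Project.Name.SchemaTypeName).AssemblyQualifiedName);
        }
    }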

    Read the article

  • Limiting calls to WCF services from BizTalk

    - by IntegrationOverload
    ** WORK IN PROGRESS ** This is just a placeholder for the full article that is in progress. The problem My BTS solution was receiving thousands of messages at once. After processing by BTS I needed to send them on via one of several WCF services depending on the message content. The problem is that due to the asynchronous nature of BizTalk the WCF services were getting hammered and could not cope with the load. Note: It is possible to limit the SOAP calls in the BtsNtSvc.exe.Config file but that does not have the desired results for Net-TCP WCF services. The solution So I created a new MessageType for the messages in question and posted them to the BTS message box. This schema included the URL they were being sent to as a promoted property. I then subscribed to the message type from a new orchestration (that does just the WCF send) using the URL as a correlation ID. This created a singleton orchestration that was instantiated when the first message hit the message box. It then waits for further messages with the same correlation ID and type and processes them one at a time using a loop shape with a timer (a pretty standard pattern for processing related messages). Image to go here This limits the number of calls to each individual WCF service to 1, which is a good start, but the service can handle more than that and I didn't want to create a bottleneck. So I then constructed the correlation ID using the URL concatenated with a random number between 1 and 10. This makes 10 possible correlation IDs per URL and so 10 instances of the singleton orchestration per WCF service. Just what I needed - and the upper random number is a configuration value in SSO, so I can change the maximum connections without touching the code.
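    For what it's worth, the correlation ID construction described above can be as small as a single Expression shape, something like the sketch below. The variable names are hypothetical: rnd (System.Random), maxInstances (System.Int32, read from SSO) and correlationId / destinationUrl (System.String) would be declared in Orchestration View.
    // Expression shape (sketch): spread messages for one URL across up to maxInstances singletons.
    // In practice rnd is better created once per orchestration rather than per message.
    rnd = new System.Random();
    correlationId = destinationUrl + "|" + rnd.Next(1, maxInstances + 1).ToString();   // Next(1, n+1) => 1..n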

    Read the article

  • How to control messages to the same port from different emitters?

    - by Alex In Paris
    Scene: A company has many factories X, each of which emits a message to the same receive port on a BizTalk server Y; if all messages are processed without much delay, each will trigger an outgoing message to another system Z. Problem: Sometimes a factory loses its connection for a half-day or more and, when the connection is reestablished, thousands of messages get emitted. Now, the messages still get processed well by Y (BizTalk can easily handle the load) but system Z can't handle the flood and may lock up and severely delay the processing of all other messages from the other Xs. What is the solution? Creating multiple receive locations that permit us to pause one X or another would lose us information if the factory isn't smart enough to know whether the message was received or not. What is the basic pattern to apply in BizTalk for this problem? Would some throttling parameters help to limit the flow from any one X? Or are there techniques on the Y end which I should use instead? I would prefer this last option, since I can be confident that the message box will remember any failures, which could then be resumed.

    Read the article

  • .odx Files in BizTalk 2009 (VS2008 IDE) Fail to open. "Unspecified Error"

    - by AllenG
    Okay, I'm not sure if this is an SO question or a ServerFault question, but I figured I'd post here first. I have a BizTalk project which works like a champ for its original design. It's been deployed and it's working fine. Today, I went in to add some new functionality by modifying one of my orchestrations. When I attempted to open it, I got a message which simply stated: "The operation could not be completed. Unspecified error." I've closed the IDE and re-opened; I've restarted the machine; I've even allowed Microsoft Updates to run and restart the machine. Everything else opens just fine (.xsd files, .btms, etc.) so it appears only to be the orchestrations which are failing. Has anyone ever encountered a similar issue and resolved it (short of reinstalling BizTalk/VS, or blowing the orchs away and rebuilding)? Any help would be appreciated.

    Read the article

  • How to call a Biztalk net.TCP service from Raw TCP request?

    - by Burhan
    I have written a net.tcp based service in BizTalk 2006 R2 and it listens at a location, http://localhost:5060/WCFTcpService. I need to call this service by using a raw TCP request, i.e. I don't want to create a proxy class and consume it in a .NET client application. How can I do this? The real scenario is that an Oracle stored procedure will be used to communicate with this service, and the only way I am allowed to call this service is to send a TCP request to the BizTalk server that is hosting the service. Any help or tips would be really appreciated. Thanks.

    Read the article
