Search Results

Search found 14416 results on 577 pages for 'business logic'.


  • Suggestions needed on an architecture for a multi-client, customisable web application

    - by ValidfroM
    Our product is a web-based course management system (ASP.NET, SQL Server). We have 10+ clients and may get more in the future. Currently, if one of our customers needs extra functionality or customised business logic, we change the db schema and the code to meet those needs (we have only one branch of the code base and one database schema). To keep one client's changes from affecting the others, we use a client flag defined in a web config file, so the extra fields and business logic apply only to that particular customer's system: if (clientId == "ABC") { // do ABC stuff } else { // normal route } One of our senior colleagues said that this way a small company like us can save the resources needed to support multiple code bases. But I feel this strategy makes our code and database even harder to maintain. Has anyone here been in a similar situation? How do you handle it?
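
    A commonly suggested alternative to scattering such flag checks is to isolate each client's behaviour behind an interface that is resolved once from the config flag. A minimal sketch in C# (the interface and class names here are hypothetical, not the poster's code):

      public interface IClientPolicy
      {
          void ApplyCustomLogic(Course course);
      }

      public class DefaultPolicy : IClientPolicy
      {
          public void ApplyCustomLogic(Course course) { /* normal route */ }
      }

      public class AbcPolicy : IClientPolicy
      {
          public void ApplyCustomLogic(Course course) { /* ABC-specific stuff */ }
      }

      public static class ClientPolicyFactory
      {
          // clientId is read from web.config, exactly as the flag is today
          public static IClientPolicy For(string clientId)
          {
              if (clientId == "ABC") return new AbcPolicy();
              return new DefaultPolicy();
          }
      }

      public class Course { /* shared fields only; client extras live in the policies */ }

    The flag comparison then lives in exactly one place, and each client's logic stays in its own class instead of branching through the shared code base.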

    Read the article

  • What is the situation about OpenGL under Ubuntu Unity and Gnome3?

    - by user827992
    A GNU/Linux distribution usually ships Xorg as its main graphical server. Xorg operates with a client-server logic: a special window is designated as the desktop environment, and this special window handles all the eye-candy stuff like decorations, icons and effects. The problem is that the latest UIs rely heavily on hardware acceleration: Unity is an overlay on Compiz, and GNOME Shell also requires an active GPU driver to work well. So my questions are: given that I can find multiple implementations of OpenGL on the same OS, who is handling my OpenGL buffer? How is the OpenGL buffer managed compared to the other windows? How can I be sure that my OpenGL implementation is glued to the hardware and not routed through the client-server logic of Xorg? For example, I have tried the Clutter library and experienced problems under both Unity and GTK/GNOME, but no problems under other OSes.

    Read the article

  • Should I extend a class or create an instance of it?

    - by meWantToLearn
    I have two classes, class A and class B. In class A, I have three methods that perform save, delete and select operations based on the object I pass them. In class B, I perform the logic operations, such as modifying the properties of the object before it is passed to the methods of class A. My problem is: in class B, should I extend class A and call its methods via parent::methodName, or create an instance of class A and call them on that? Class A includes no properties, just methods. class A { public function save($obj) { //code here } public function delete($obj) { //code here } public function select($obj) { //code here } } class B extends A { public function checkIfHasSaved($obj) { if ($obj->saved == 'Yes') { parent::save($obj); // should I call the method like this... $instanceOfA = new A(); // ...or create an instance of class A and call it without extending class A? $instanceOfA->save($obj); } // other logic operations here } }

    Read the article

  • Make a flowchart to demonstrate closure behavior

    - by thomas
    I saw a test question the other day in which the author used a flowchart to represent the logic of loops, and I got to thinking it would be interesting to do this with some more complex logic. For example, the closure in this IIFE sort of boggles me: while (i <= qty_of_gets) { // needs an IIFE (function(i) { promise = promise.then(function() { return $.get("queries/html/" + product_id + i + ".php"); }); }(i++)); } I wonder if seeing a flowchart representation of what happens in it would be more elucidating. Could such a thing be done? Would it be helpful? Or just messy? I haven't the foggiest clue where to start, but thought maybe someone would like to take a stab. Probably all the ajax could go and it could just be a simple return within the IIFE.

    Read the article

  • Is it possible to write too many asserts?

    - by Lex Fridman
    I am a big fan of writing assert checks in C++ code as a way to catch cases during development that cannot possibly happen, but do happen because of logic bugs in my program. This is good practice in general. However, I've noticed that some functions I write (which are part of a complex class) have 5+ asserts, which feels like it could potentially be a bad programming practice in terms of readability and maintainability. I still think it's great, as each one requires me to think about the pre- and post-conditions of the function, and they really do help catch bugs. However, I just wanted to put this out there and ask whether there are better paradigms for catching logic errors in cases where a large number of checks is necessary.
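
    For illustration, here is what explicit pre- and post-condition asserts can look like; the sketch is in C# with Debug.Assert rather than the question's C++, but the idea maps directly onto C++'s assert macro:

      using System;
      using System.Diagnostics;

      public static class Normalizer
      {
          // Scales the values so that they sum to 1.0.
          public static double[] Normalize(double[] values)
          {
              // Pre-conditions: catch impossible inputs during development.
              Debug.Assert(values != null, "values must not be null");
              Debug.Assert(values.Length > 0, "values must not be empty");

              double sum = 0;
              foreach (var v in values) sum += v;
              Debug.Assert(sum != 0, "sum must be non-zero");

              var result = new double[values.Length];
              for (int i = 0; i < values.Length; i++)
                  result[i] = values[i] / sum;

              // Post-condition: the result sums to ~1.0 (within rounding error).
              double check = 0;
              foreach (var r in result) check += r;
              Debug.Assert(Math.Abs(check - 1.0) < 1e-9, "result must sum to 1");
              return result;
          }
      }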

    Read the article

  • DDD / Layers and legacy systems

    - by CSM
    I have to refactor a complex C# app (many dialogs, mixed logic and so on). One part manages the communication with special hardware equipment (sending commands and receiving data via asynchronous C# callbacks). The code is "spaghetti", with UI/logic/communication/etc. mixed together, and my task is to split the layers in a DDD sense. So, to which layer does a callback driver routine belong? The callbacks create "bubbles" in the system, up to the UI layer, and because of this I cannot enforce the essential principle that any element of a layer depends only on other elements in the same layer or on elements of the layers beneath it. Thank you in advance.
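
    A common way to break such bubbles is dependency inversion: the application layer defines the contract it needs, and the hardware driver in the infrastructure layer implements it, so the dependency arrow still points downwards. A minimal sketch (all names hypothetical, not the poster's code):

      using System;

      // Application layer: owns the contract, knows nothing about the hardware API.
      public interface IMeasurementSource
      {
          event EventHandler<MeasurementReceivedEventArgs> MeasurementReceived;
          void SendCommand(string command);
      }

      public class MeasurementReceivedEventArgs : EventArgs
      {
          public double Value { get; set; }
      }

      // Infrastructure layer: wraps the vendor callback API and raises the event.
      public class HardwareDriverAdapter : IMeasurementSource
      {
          public event EventHandler<MeasurementReceivedEventArgs> MeasurementReceived;

          public void SendCommand(string command)
          {
              // vendorApi.Send(command, OnVendorCallback);  // vendor-specific call
          }

          // Invoked by the vendor library on its own thread.
          private void OnVendorCallback(double value)
          {
              var handler = MeasurementReceived;
              if (handler != null)
                  handler(this, new MeasurementReceivedEventArgs { Value = value });
          }
      }

    The UI then subscribes to the application layer (marshalling to the UI thread as needed), so data still bubbles up, but through events rather than through upward references between layers.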

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation LINQ (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL Server, XML, etc. Like SQL, LINQ provides a declarative notation for problem solving – i.e. you don't describe in detail how a task should be solved, you describe what should be solved at all. This frees the developer from error-prone iterator constructs. Ideally, of course, one would access features this way. Then this construct is conceivable: var largeFeatures = from feature in features where (feature.GetValue("SHAPE_Area").ToDouble() > 3000) select feature; or its equivalent as a lambda expression: var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000)); This requires an appropriate provider, which manages the corresponding iterator logic. That is easier than you might think at first sight – you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background whose execution is delayed (deferred execution) – only when you actually request entities (foreach, Count(), ToList(), ...) does processing take place, even though the query was created at a completely different place. Especially with multiple iterations through the entities, during your first debugging sessions you rub your eyes when the execution pointer magically jumps back into the iterator logic. Realization A very concise way of constructing an IEnumerable<IFeature> is to run through an IFeatureCursor and return each feature via yield. For easier usage I have put the logic into an extension method GetFeatures() for IFeatureClass: public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy) { IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy); IFeature feature; while (null != (feature = featureCursor.NextFeature())) { yield return feature; } //this is skipped in unit tests with cursor-mock if (Marshal.IsComObject(featureCursor)) { Marshal.ReleaseComObject(featureCursor); } } So you can now easily generate the IEnumerable<IFeature>: IEnumerable<IFeature> features = _featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle); You have to be careful with the recycling cursor. After a deferred execution in the same context it is not a good idea to iterate over the features again: in that case only the content of the last (recycled) feature is provided, and all the features in the second pass are the same. Therefore this expression would be critical: largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID)); because ToList() iterates once through the list, and so the cursor has already been moved through the features; the extension method ForEach() then always delivers the same feature. In such situations you must not use a recycling cursor. Repeated enumerations are otherwise not a problem, because each time the state machine is re-instantiated and thus the cursor runs again – that's the magic already mentioned above. Perspective Now you can go one step further and write your own implementation of the interface IEnumerable<IFeature>. Only the method and property for accessing the enumerator have to be programmed, and in the enumerator itself the Reset() method organizes the re-execution of the search. This can be achieved with an appropriate delegate in the constructor: new FeatureEnumerator<IFeatureClass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor)); which is called in Reset(): public void Reset() { _featureCursor = _resetCursor(_t); } In this manner, enumerators for completely different scenarios can be implemented and used on the client side exactly as described above. Thus cursors, selection sets, etc. merge into a single concept, and the reusability of the code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily – a major step towards better software quality. Conclusion Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Managing the state machine in the background creates a lot of overhead, and processing costs additional time – roughly 20 to 100 percent. In addition, working without a recycling cursor quickly becomes a performance gap. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building up a list of results. In my experience the code size is reduced by an average of 75 to 90 percent! So I am happy to wait a few milliseconds longer. As so often, it is a balance between maintainability and performance – and for me, maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is in any case dominated not by code execution but by waiting for user input. Demo source code You can download the source code for this prototype, with several unit tests, here: https://github.com/esride-apf/Linq2ArcObjects.

    Read the article

  • Do reactive extensions and ETL go together?

    - by Aaron Anodide
    I don't fully understand Reactive Extensions yet, but my initial reading made me think about the ETL code I have. Right now it's basically a workflow that performs various operations in a certain sequence, based on conditions it finds as it progresses. I can also imagine an event-driven way, in which only a small amount of imperative logic causes a chain reaction to occur. Of course, I don't need a new programming model to build an event-driven collaboration like that. Just the same, I'm wondering whether ETL is a good fit for exploring Rx further. Is my connection even pointing in a valid direction? If not, could you briefly correct the error in my logic?
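
    For a feel of the shape this takes, here is a minimal Rx.NET sketch (assuming the System.Reactive package; the stage names are hypothetical) in which each stage reacts to the previous one instead of being called in sequence:

      using System;
      using System.Reactive.Linq;

      class EtlSketch
      {
          static void Main()
          {
              // Extract: pretend source rows arriving over time.
              var rows = Observable.Range(1, 10).Select(i => "row-" + i);

              // Transform: further operators in the chain; nothing runs
              // until something subscribes.
              var pipeline = rows
                  .Where(r => !r.EndsWith("3"))          // filter step
                  .Select(r => r.ToUpperInvariant());    // transform step

              // Load: subscribing triggers the chain reaction.
              pipeline.Subscribe(
                  r => Console.WriteLine("load: " + r),
                  ex => Console.WriteLine("ETL failed: " + ex.Message),
                  () => Console.WriteLine("ETL complete"));
          }
      }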

    Read the article

  • Architecture for dashboard showing aggregated stats

    - by soulnafein
    I'm trying to find the best architecture for an application that shows a dashboard with aggregated stats that come from another application (e.g. number of sales in the last 12 months, current sales this month, a fairly complex score, performance of users over the last 30 days, etc.). A fair bit of business logic lives in Application 1, but the aggregated data gets saved in Application 2 (the dashboard). What's the best way to create the aggregate data? 1) Pull data directly from Application 1's database and duplicate the business logic for the score calculation etc. 2) Push data from Application 1 to Application 2 somehow. 3) Aggregate data in Application 1 on the fly and provide an API for Application 2. 4) Other (probably). Please suggest solutions. Thanks.
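
    Option 3 keeps the business logic in one place, which is usually the deciding factor. As a rough sketch of what that contract could look like (all names hypothetical):

      // In Application 1, next to the business logic that computes the numbers.
      public class DashboardStats
      {
          public decimal SalesLast12Months { get; set; }
          public decimal SalesThisMonth { get; set; }
          public double ComplexScore { get; set; }
      }

      public interface IDashboardStatsService
      {
          // Application 2 calls this (e.g. over HTTP) and only renders the result,
          // so the score calculation is never duplicated.
          DashboardStats GetCurrentStats();
      }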

    Read the article

  • enemy behavior with boundary to change direction

    - by BadSniper
    I'm doing a space-shooter kind of game, and the logic is to reflect the enemy when it hits the boundary. With my logic, the enemy sometimes flickers instead of changing velocity – it gets trapped at the boundary, repeatedly triggering the if checks. This is my code for changing the velocity: if (this->enemyPos.x > 14) { this->enemyVel.x = -this->enemyVel.x; } if (this->enemyPos.x < -14) { this->enemyVel.x = -this->enemyVel.x; } How can I get around this? The enemy goes out of the boundary, wanders, and only comes back into the field after some time. I know what the problem is; I just don't know how to get around it.
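
    The usual fix is to make the reflection idempotent: flip the velocity only when it still points outward, and snap the position back onto the boundary so the check cannot fire twice in a row. A sketch (in C#, though the question's code is C++; the same shape works there):

      public class EnemyBounce
      {
          const float Bound = 14f;

          public float PosX;
          public float VelX;

          public void Update(float dt)
          {
              PosX += VelX * dt;

              if (PosX > Bound && VelX > 0)         // right wall, still moving right
              {
                  VelX = -VelX;
                  PosX = Bound;                     // snap back inside
              }
              else if (PosX < -Bound && VelX < 0)   // left wall, still moving left
              {
                  VelX = -VelX;
                  PosX = -Bound;
              }
          }
      }

    Without the velocity check, any frame where the enemy is still outside the boundary reverses the velocity again, which is exactly the flicker described.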

    Read the article

  • Several classes need to access the same data, where should the data be declared?

    - by Juicy
    I have a basic 2D tower defense game in C++. Each map is a separate class which inherits from GameState. The map delegates the logic and drawing code to each object in the game and sets data such as the map path. In pseudo-code the logic section might look something like this: update(): for each creep in creeps: creep.update() for each tower in towers: tower.update() for each missile in missiles: missile.update() The objects (creeps, towers and missiles) are stored in vectors of pointers. The towers must have access to the vector of creeps and the vector of missiles to create new missiles and identify targets. The question is: where do I declare the vectors? Should they be members of the Map class, and passed as arguments to the tower.update() function? Or declared globally? Or are there other solutions I'm missing entirely?
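
    Making them members of the Map and passing references into update() is the usual answer, since ownership stays in one place and nothing is global. A sketch of that shape (in C#; the C++ version passes references or pointers the same way):

      using System.Collections.Generic;

      public class Creep { /* position, health, ... */ }
      public class Missile { /* position, target, ... */ }

      public class Tower
      {
          // The tower borrows the collections for this frame only and
          // never stores them, so ownership stays with the Map.
          public void Update(List<Creep> creeps, List<Missile> missiles)
          {
              // pick a target from creeps, possibly add to missiles
          }
      }

      public class Map // the C++ original inherits from GameState
      {
          private readonly List<Creep> creeps = new List<Creep>();
          private readonly List<Tower> towers = new List<Tower>();
          private readonly List<Missile> missiles = new List<Missile>();

          public void Update()
          {
              foreach (var tower in towers)
                  tower.Update(creeps, missiles);
              // creeps and missiles are updated the same way
          }
      }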

    Read the article

  • while(true) and loop-breaking - anti-pattern?

    - by KeithS
    Consider the following code: public void doSomething(int input) { while(true) { TransformInSomeWay(input); if(ProcessingComplete(input)) break; DoSomethingElseTo(input); } } Assume that this process involves a finite but input-dependent number of steps; the loop is designed to terminate on its own as a result of the algorithm, and is not designed to run indefinitely (until cancelled by an outside event). Because the test to see if the loop should end is in the middle of a logical set of steps, the while loop itself currently doesn't check anything meaningful; the check is instead performed at the "proper" place within the conceptual algorithm. I was told that this is bad code, because it is more bug-prone due to the ending condition not being checked by the loop structure. It's more difficult to figure out how you'd exit the loop, and could invite bugs as the breaking condition might be bypassed or omitted accidentally given future changes. Now, the code could be structured as follows: public void doSomething(int input) { TransformInSomeWay(input); while(!ProcessingComplete(input)) { DoSomethingElseTo(input); TransformInSomeWay(input); } } However, this duplicates a call to a method in code, violating DRY; if TransformInSomeWay were later replaced with some other method, both calls would have to be found and changed (and the fact that there are two may be less obvious in a more complex piece of code). You could also write it like: public void doSomething(int input) { var complete = false; while(!complete) { TransformInSomeWay(input); complete = ProcessingComplete(input); if(!complete) { DoSomethingElseTo(input); } } } ... but you now have a variable whose only purpose is to shift the condition-checking to the loop structure, and also has to be checked multiple times to provide the same behavior as the original logic. For my part, I say that given the algorithm this code implements in the real world, the original code is the most readable. If you were going through it yourself, this is the way you'd think about it, and so it would be intuitive to people familiar with the algorithm. So, which is "better"? is it better to give the responsibility of condition checking to the while loop by structuring the logic around the loop? Or is it better to structure the logic in a "natural" way as indicated by requirements or a conceptual description of the algorithm, even though that may mean bypassing the loop's built-in capabilities?

    Read the article

  • best practice for initializing class members in php

    - by rgvcorley
    I have lots of code like this in my constructors:- function __construct($params) { $this->property = isset($params['property']) ? $params['property'] : default_val; } Is it better to do this rather than specify the default value in the property definition? i.e. public $property = default_val? Sometimes there is logic for the default value, and some default values are taken from other properties, which was why I was doing this in the constructor. Should I be using setters so all the logic for default values is separated?

    Read the article

  • How to make safe and secure forms in asp.net MVC 3

    - by anirudha
    ASP.NET applications need all kinds of security; insecure forms can be exploited, and there are ways to solve this type of problem in MVC. The first solution is to use <%= Html.AntiForgeryToken() %>, which protects against cross-site request forgery (CSRF). It works via the machine key in MVC, and you validate the token whenever you get a response back from the client by applying this attribute to the action that handles the form submission: [ValidateAntiForgeryToken]. Secondly, you can use the Authorize attribute, or write your own definition of an authorization attribute in ASP.NET MVC; for more info read David's post. I use my own custom attribute with a different type of authorization: the whole controller uses an attribute whose logic checks the cookie in the request to make sure the request really came from the user.
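
    The token and the attribute work as a pair. A minimal sketch with the standard ASP.NET MVC APIs (the controller and action names are hypothetical):

      using System.Web.Mvc;

      public class CommentController : Controller
      {
          [HttpGet]
          public ActionResult Create()
          {
              // The view for this action renders the hidden token field:
              //   <%= Html.AntiForgeryToken() %>
              return View();
          }

          [HttpPost]
          [ValidateAntiForgeryToken]  // rejects posts whose token doesn't match the cookie
          public ActionResult Create(string body)
          {
              // the form submission is safe to process here
              return RedirectToAction("Create");
          }
      }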

    Read the article

  • Use Android NDK for portability with iOS?

    - by J-F L-R
    I am currently planning to implement a little painting app using OpenGL ES 1.1. I believe this question applies to any OpenGL ES project. I am starting development on Android, and I would like to know whether you would recommend writing the drawing logic (using OpenGL) in C++ with the NDK, so it will be easier to port to iOS, or using the Java API and being locked to Android. The reason I ask is that I have seen mixed opinions on the web about using the NDK (some people say it is an added level of complexity). From what I have seen so far, I believe I should go with the Java API since I am starting on Android, and then, if I decide to go to iOS, rewrite the OpenGL logic in Objective-C or C++. This should be pretty straightforward since the calls appear to be the same in both languages. What do you think? Am I right?

    Read the article

  • High-traffic chat - how to check for a new message and show it to all users

    - by user2633999
    I already asked a question about this, but it was not received very well – apparently too long, when it was actually more information so you could give me a better answer. OK, I will be much clearer now. What is the best logic for developing a scalable chat, in terms of stability, storing/reading the chat messages, updating the chat with new messages for all users, etc.? I have most of this developed; the logic I think I'm missing is: check if there is a new message and show it to all users. I have this implemented, but it crashes the site under its traffic of 300k-400k people, so that's my main question. The chat is PHP-based and uses Pusher (www.pusher.com) for instant messaging, but that lacks what I need because it's more like a websocket. I'm using hard-coded files to keep the messages, because I want to avoid a database as much as possible – extensionless files, as I'm sure you know. I'm getting the crash with $fp = fopen(..., "w"); // pretend ... is the path and filename fwrite($fp, $msg); // write the message fclose($fp); where $msg is the message itself. I have one file per message, and I show the last 150 messages = 150 file opens and reads, which I guess is too much. I now have a better scheme that I'm pursuing: one file holding the last 50-100 messages at all times, which should be much better. How does it crash? That's the trickiest part, because everything seems ordinary; believe me, it is difficult to determine exactly what crashes the site, but within about 5 minutes the site is unreachable, and when I put the old content back without the chat it is online again. I have a jQuery POST every 1 second to check whether there is a new message. I keep a timestamp in a special file recording when the last message was sent, and if ((time() - time in file) <= 2), I reload the last 150 messages, including the newest one. Too much input/output – too many writes and reads, however you put it – is, I think, what crashes the site.

    Read the article

  • How to do lazy loading in a SOA fashion?

    - by sun1991
    For example, I've got a root object exposed by a SOA service – say, Invoice (an Invoice has line items as children). Sometimes I need to retrieve its detail line items. I'm thinking of making them lazy-loaded, because it's a traffic overhead to transfer the line items every time an Invoice is requested. But in a SOA fashion this seems unlikely to work: all the service can expose are Invoice POCOs, which contain no logic, so I cannot attach my lazy-loading logic to Invoice to instruct it to load line items when needed.
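
    The usual SOA answer is to make the granularity explicit in the contract rather than lazy: the header travels alone by default, and a separate operation fetches the details when the caller decides the traffic is worth it. A sketch (hypothetical names):

      // Data contracts: plain DTOs, no behaviour.
      public class InvoiceDto
      {
          public int Id { get; set; }
          public string CustomerName { get; set; }
          // no line items here – the header travels alone by default
      }

      public class LineItemDto
      {
          public string Product { get; set; }
          public decimal Amount { get; set; }
      }

      // Service contract: the caller, not the object, triggers the second call.
      public interface IInvoiceService
      {
          InvoiceDto GetInvoice(int invoiceId);
          LineItemDto[] GetInvoiceLineItems(int invoiceId);
      }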

    Read the article

  • How do I share common classes between Windows Forms and web applications using C#?

    - by earthdog
    In our environment we have multiple ERP servers plus data coming from multiple sources. I need to create a development roadmap for the coming years, as it is obvious that side applications will be needed for many things. We have decided that development will be done with Microsoft technologies. This means I will be building either web apps (MVC, Web Forms, etc.) or standard Windows Forms applications. The point is that I will be creating classes that encapsulate the business logic I want to apply across the different projects. What I have thought of so far: 1) Create class-library DLLs containing the required logic. 2) Reuse these libs in my apps. But what about the web apps? Should I use the DLLs directly, or should I wrap them in web services and consume the web services from the web apps? In general, I would like to find out how I should build my strategy.
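
    Referencing the same class library from both front ends is the simplest starting point; a service wrapper only earns its keep when the web tier runs on different machines or other platforms must call in. A sketch of the shared-library shape (hypothetical names):

      // Shared class library (e.g. Company.Business.dll), referenced by both front ends.
      public interface ICustomerService
      {
          decimal GetOutstandingBalance(int customerId);
      }

      public class CustomerService : ICustomerService
      {
          public decimal GetOutstandingBalance(int customerId)
          {
              // the business rules live here, once
              return 0m;
          }
      }

      // Windows Forms app: reference the DLL and call it in-process.
      // Web app: reference the same DLL directly when it runs on the same servers,
      // or expose CustomerService behind a web service when it does not.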

    Read the article

  • Synthetic database records

    - by michipili
    Assume we are getting some statistics from a customer, which we analyse before sending our comments back to the customer. Now the customer tells us that the statistics they computed between January and March are based on a wrong methodology, and they send us corrected series. We want to perform our analysis with both the wrong and the correct set of data, which are huge and differ only from January to March. Therefore we need something like synthetic database records implementing the following logic: synthetic[1] = wrong_data; synthetic[2] = correct_data between January and March, wrong_data otherwise. With this, we can easily perform our analyses on the synthetic records. Should such synthetic records be implemented in the application logic or on the database side? What are the common pitfalls of such an implementation?
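
    On the application side, the overlay can be a keyed lookup with a date-range guard. A sketch (hypothetical types; the year and window are placeholders for the real correction period):

      using System;
      using System.Collections.Generic;

      public class SeriesOverlay
      {
          private readonly Dictionary<DateTime, double> original;    // wrong_data
          private readonly Dictionary<DateTime, double> corrections; // corrected series
          private readonly DateTime from = new DateTime(2012, 1, 1);
          private readonly DateTime to = new DateTime(2012, 3, 31);

          public SeriesOverlay(Dictionary<DateTime, double> original,
                               Dictionary<DateTime, double> corrections)
          {
              this.original = original;
              this.corrections = corrections;
          }

          // synthetic[2]: the corrected value inside Jan-Mar, the original otherwise.
          public double Corrected(DateTime date)
          {
              double value;
              if (date >= from && date <= to && corrections.TryGetValue(date, out value))
                  return value;
              return original[date];
          }
      }

    On the database side the same logic is typically a view with a CASE over the date range; the classic pitfall in both places is letting consumers accidentally mix values from the two series in one analysis.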

    Read the article

  • "Never do in code what you can get the SQL server to do well for you" - Is this a recipe for a bad design?

    - by PhonicUK
    It's an idea I've heard repeated in a handful of places, some more or less acknowledging that once solving a problem purely in SQL exceeds a certain level of complexity, you should indeed handle it in code. The logic behind the idea is that, for the large majority of cases, the database engine will do a better job of finding the most efficient way to complete your task than you could in code – especially when it comes to things like making the results conditional on operations performed on the data. Arguably, with modern engines effectively JIT-ing and caching the compiled version of your query, it would make sense on the surface. The question is whether or not leveraging your database engine in this way is inherently bad design practice (and why). The lines blur further when all the logic lives inside the database and you're just hitting it via an ORM.

    Read the article

  • How do you deal with the monotony of certain tasks? [on hold]

    - by aaronmallen
    I love programming methods and functions; the if {}, while {}, etc. logic behind them is so much fun. I also love making commits, merging branches, and solving merge conflicts. Unfortunately, these activities usually require that I create classes, which I find tedious and monotonous. The simple act of defining properties gets in the way of writing the logic for what to do with those properties. I can't be alone here; there has to be a part of coding that everyone dreads, or at least severely dislikes, compared to the other parts. How do you deal with the code-based tasks that you find tedious?

    Read the article

  • What is the Oracle Utilities Application Framework?

    - by Anthony Shorten
    The Oracle Utilities Application Framework is a reusable, scalable and flexible Java-based framework which allows other products to be built, configured and implemented in a standard way. Note: Even though the Framework is built in Java, it can be integrated with COBOL-based extensions for backward compatibility. When Oracle Utilities Customer Care & Billing was migrated from V1 to V2, it was decided that the technical aspects of that product be separated out to allow for reuse and independence from technical issues. The idea was that all the technical aspects would be concentrated in this separate product (i.e. a framework), allowing all products using the framework to concentrate on delivering superior functionality. The product was named the Oracle Utilities Application Framework (oufw is the product code). The technical components contained in the Oracle Utilities Application Framework can be summarized as follows: Metadata - The Oracle Utilities Application Framework is responsible for defining and using the metadata that defines the runtime behavior of the product. All metadata definition and management is contained within the Oracle Utilities Application Framework. UI Management - The Oracle Utilities Application Framework is responsible for defining and rendering the pages and for ensuring the pages are in the appropriate format for the locale. Integration - The Oracle Utilities Application Framework is responsible for providing the integration points to the architecture. Refer to the Oracle Utilities Application Framework Integration Overview for more details. Tools - The Oracle Utilities Application Framework provides a common set of facilities and tools that can be used across all products. Technology - The Oracle Utilities Application Framework is responsible for all technology standards compliance, platform support and integration. There are a number of products from the Tax and Utilities Global Business Unit as well as from the Financial Services Global Business Unit that are built upon the Oracle Utilities Application Framework. These products require the Oracle Utilities Application Framework to be installed first, with the product itself then installed onto the framework to complete the installation process. There are a number of key benefits that the Oracle Utilities Application Framework provides to these products: Common facilities - The Oracle Utilities Application Framework provides a standard set of technical facilities, so that products can concentrate on the unique aspects of their markets rather than making technical decisions. Common methods of configuration - The Oracle Utilities Application Framework standardizes the technical configuration process for a product. Customers can effectively reuse the configuration process across products. Multi-lingual and multi-platform - The Oracle Utilities Application Framework allows the products to be offered in more markets and across multiple platforms for maximized flexibility. Common methods of implementation - The Oracle Utilities Application Framework standardizes the technical aspects of a product implementation. Customers can effectively reuse the technical implementation process across products. Quicker adoption of new technologies - As new technologies and standards are identified as being important for the product line, they can be integrated centrally, benefiting multiple products.
    Cross-product reuse - As enhancements to the Oracle Utilities Application Framework are identified by a particular product, all products can potentially benefit from the enhancement. Note: Use of the Oracle Utilities Application Framework does not preclude the introduction of product-specific technologies or facilities to satisfy market needs. The framework minimizes the need for, and assists in the quick integration of, a new product-specific piece of technology (if necessary). The Framework is not available as a product in itself and is bundled with Tax and Utilities Global Business Unit products. At the present time the following products are on the Framework: Oracle Utilities Customer Care And Billing (V2 and above), Oracle Enterprise Taxation Management (V2 and above), Oracle Utilities Business Intelligence (V2 and above), Oracle Utilities Mobile Workforce Management (V2 and above).

    Read the article

  • Throttling in OSB

    - by Knut Vatsendvik
    Technorati Tags: soa, integration, osb, throttling, overload protection. A common problem with integration is the risk of overloading a particular web service. When the capacity of a web service is reached and it continues to accept connections, it will most likely start to deteriorate. Fortunately, there are two techniques in Oracle Service Bus that you can apply to protect against this: you can either limit the concurrent number of requests for a Business Service (outbound requests), or you can limit the number of threads processing the requests for a Proxy Service (inbound requests). Limiting the Concurrent Number of Requests Limiting the concurrent requests for a Business Service cannot be set at design time, so you have to use the built-in Oracle Service Bus Administration Console (/sbconsole). Follow these steps to enable it: In Change Center, click Create to start a new session. Select Project Explorer, and navigate to the Business Service you want to limit. Select the Operational Settings tab of the View a Business Service page. In this tab, under Throttling, select the Enable check box. With throttling enabled, specify a value for Maximum Concurrency; specify a positive integer value for Throttling Queue, to backlog messages that have exceeded the concurrency limit; and specify Message Expiration, the maximum time in milliseconds a message can spend in the throttling queue. Click Update, then click Activate in Change Center to activate the new settings. If you re-publish the service, the settings will not be overwritten; only if the resource is renamed or moved will they be lost. Please note that a throttling queue is an in-memory queue: messages placed in this queue are not recoverable when a server fails or when you restart a server. Limiting the Number of Threads A better approach, in my opinion, is to limit the number of threads that can work on requests. Follow these steps: Open the WebLogic Server Console (/console). In Change Center, click Create to start a new session. In the left pane, expand Environment and select Work Managers. In the Global Work Managers page, click New. Click the Work Manager radio button, then click Next. Enter a Name for the new Work Manager, and click Next. In the Available Targets list, select the server instances or clusters on which you will deploy applications that reference the Work Manager. Click Finish; the new Work Manager now appears in the Global Work Managers page. Select the new Work Manager. Next to the Maximum Threads Constraint drop-down box, click New. Click the Maximum Threads Constraint radio button, then click Next. Enter a Name and a thread Count – the maximum number of threads to allocate for requests – and click Next. In the Available Targets list, select the server instances or clusters as before. Click Finish, click Save, then click Activate in Change Center to activate your changes; a restart may be necessary. Puh! Almost there. Start a new session, go to the Service Bus Console (/sbconsole) and find your consuming Proxy Service. Click the Edit button of the Transport Configuration tab, click Next, set the Dispatch Policy to the new Work Manager, click Last, click Save, and click Activate in Change Center to activate your changes.

    Read the article
