Search Results

Search found 20321 results on 813 pages for 'mobile applications'.


  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices. For example, the "Sales Representative" job has the following two data security rules implemented on an "Opportunity" to restrict the list of Opportunities that are visible to a Sales Representative:

    - Can view all the Opportunities where they are a member of the Opportunity team
    - Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team

    While the above conditions may represent the most common access requirements for an Opportunity, some customers may have additional access constraints. This blog post explains:

    a.) How to discover the data security implemented in Fusion Applications
    b.) How to customize data security
    c.) An illustrative example

    a.) How to discover seeded data security definitions

    The Security Reference Manuals explain the Function and Data Security implemented on each job role. Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snapshot of the security documented for the "Sales Representative" job. The two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of data security policies on an Opportunity:

    - Business Object: Opportunity
      Policy Description: A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team
      Policy Store Implementation: Role: Opportunity Territory Resource Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    - Business Object: Opportunity
      Policy Description: A Sales Representative can view an opportunity where they are an opportunity sales team member with view, edit, or full access
      Policy Store Implementation: Role: Opportunity Sales Representative Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    Description of columns:

    - Policy Description: Explains the data filters that are implemented as a SQL Where Clause in a Data Security Grant
    - Policy Store Implementation: Provides the implementation details of the Data Security Grant for this policy

    In this example the Opportunities listed for a "Sales Representative" job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role.

    b.) How to customize data security

    Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity.
    Solution: Remove the role "Opportunity Territory Resource Duty" from the hierarchy of the "Sales Representative" job role.
    Best Practice: Do not modify the seeded role hierarchy. Create a custom "Sales Representative" job role and build the role hierarchy with the seeded duty roles.

    Requirement 2: Opportunities must be more restricted based on a custom attribute that identifies whether an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity.
    Solution: Modify data security policy (2) in the above example as follows: A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential. Implementation of this policy is more invasive. The seeded SQL where clause of the data security grant on "Opportunity Territory Resource Duty" has to be modified, and a condition that checks the confidential flag must be added (for example, appending something like AND confidential_flag = 'N'; the column name here is purely illustrative).
    Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition and end date the seeded grant.

    c.) Illustrative Example (Implementing Requirement 2)

    A data security policy contains the following components: Role, Object, Instance Set, and Action. Of these four components, the Role and the Instance Set are the only ones that are customizable; the Object and the Actions for that object are seed data and cannot be modified. To customize the seeded policy "A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team":

    1. Find the seeded policy.
    2. Identify the Role, Object, Instance Set and Action components of the policy.
    3. Create a new custom instance set based on the seeded instance set.
    4. End date the seeded policies.
    5. Create a new data security policy with the custom instance set.

    c-1: Find the seeded policy

    Step 1: Find the role, open it, and find its policies.
    Step 2: Click on the Data Security tab, sort by "Resource Name", and find all the policies with the condition "where they are a territory resource in the opportunity territory team". In this example, we can see there are 5 policies for "Opportunity Territory Resource Duty" on the Opportunity object.
    Step 3: Now that we know the policy details, we need to create a new instance set with the custom condition. All instance sets are linked to the object. Find the object using the global search option, open it, and click on the "Condition" tab. Sort by display name and find the instance set. Edit the instance set and copy the "SQL Predicate" to a notepad. Create a new instance set with the modified SQL Predicate from above.
    Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set:
    1. Repeat the navigation in Step 2.
    2. Edit each of the 5 policies and end date them.
    3. Create new custom policies with the same information as the seeded policies in the "General Information", "Roles" and "Action" tabs.
    4. In the "Rules" tab, pick the new instance set that was created in Step 3.

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Organizations see the need for computing infrastructures that they can "rent" or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:

    - Private data (do not want to send or store sensitive data off-site)
    - High dollar investment in current infrastructure
    - Applications currently running well, but may need additional periodic capacity
    - Current applications not designed in a stateless fashion

    In these situations, a "hybrid" approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping the majority of the computing function local to the organization while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A "hybrid" architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.

    Implementation: There are multiple methods for implementing a hybrid architecture, in a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed.

    1. Client-Centric Hybrid Patterns. In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The "client" in this case might be a web-based codeset actually stored on another system (which acts as a client, the user's device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it's the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client, such as in the web scenario.

    2. System-Centric Hybrid Patterns. Another approach is to create a distributed architecture by turning on-site systems into "services" that can be called from Windows Azure using the Service Bus or the Access Control Services (ACS) capabilities, with code calls from a series of in-process client applications. In this pattern you move the "client" interface into the server application logic. If you do not wish to change the application itself, you can "layer" the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Definition Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and it has the advantage of de-coupling your computing architecture. If each system offers a "service" of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract.

    There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.

    - Connection resiliency: On-premise applications normally have low latency and good connection properties, something you're not always guaranteed in a distributed and hybrid application. Whether the client is centralized or distributed, the code should be able to handle extended retry logic (see the sketch below).
    - Authorization and Access: In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure Application Fabric feature of ACS to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.
    - Consistency and Concurrency: When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a Service Architecture, you need to plan for sequential message handling and lifecycle.

    Resources:
    - How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
    - General architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
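    To make the connection-resiliency point concrete, here is a minimal sketch of extended retry logic with linear back-off. It is illustrative only: the attempt count, the delay, and the idea of wrapping a query in a delegate are assumptions, not something prescribed in the post.

        using System;
        using System.Threading;

        static class RetryHelper
        {
            // Runs an operation that crosses the hybrid boundary, retrying on
            // failure with a linearly growing delay between attempts.
            public static T ExecuteWithRetry<T>(Func<T> operation,
                                                int maxAttempts = 5,
                                                int baseDelayMs = 500)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return operation();
                    }
                    catch (Exception)
                    {
                        if (attempt >= maxAttempts)
                            throw; // give up after the last attempt
                        // Hybrid links have higher latency and more transient
                        // faults than on-premise ones; wait longer each time.
                        Thread.Sleep(baseDelayMs * attempt);
                    }
                }
            }
        }

    A call site would then wrap the remote query, e.g. var orders = RetryHelper.ExecuteWithRetry(() => FetchOrdersFromSqlAzure());, where FetchOrdersFromSqlAzure is a hypothetical data-access method of your application.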

    Read the article

  • Official and unofficial apps in the iOS, WP7, and Android marketplaces

    - by Bil Simser
    The last few months have seen people complaining about the lack of "official" apps in the Windows Phone marketplace. In fact, a couple of months ago I wrote about this very thing here, and whether we really needed these official apps or could get by with third-party solutions. Recently a list of "Top 100 Mobile Apps" crossed my desk and it was curious: 40 iPhone apps, 40 Android apps, 10 WP7 apps, and 10 BlackBerry apps. Really? 10 for WP7? So I wondered if the media was just playing this up and maybe continuing to do what I think most vendors are doing, which is treating Windows Phone as the red-headed step-child you keep in the basement while all along there's nothing wrong with them. I put together the list and went digging to see how many of the top 40 iOS and Android apps were also on the Windows Phone platform (sorry BlackBerry, you should just shut your doors right now). Here's the results. Note, these are all *free* apps. There might be other pay apps that have official representation across all mobile devices; I just chose to hunt these ones down because I'm cheap.

    In the top 40, I easily plucked out 20 that had official apps on all three platforms: Amazon Mobile, ESPN Score Centre, Evernote, Facebook, Foursquare, Google Search, IMDB, Kindle, Shazam, Skype (yes, I know, in beta on WP7), SlackerRadio, The Weather Channel, TripIt, Twitter, Yelp, Flixster, Netflix, TuneIn Radio, Dictionary.com, Angry Birds, and Groupon. Hey, that's pretty good IMHO. 20 or so apps, all free, and all fully functional and supported (and in some cases, even better looking on the Windows Phone platform than on the other platforms). A dozen or so more apps had official apps on some platforms but not all, so yes, there are gaps here. Here's a rundown of the hangers-on:

    - Adobe Photoshop Express: This looks great on the iOS platform and there's even an official version on droid. Hope Adobe brings this to WP7. There are other photo editing programs though if you go looking (maybe we can get Paint.NET to be ported to the phone?).
    - BBC News: A few apps offer news feeds but nothing official on the Windows Phone. The feeds are good, but without video this app needs some WP7 love.
    - Dropbox: Again Windows Phone loses out here with no official app. There are a few third-party ones that will help you along and offer most of the functionality that you need, but not the integration that an official app might bring.
    - Epicurious: Droid seems to be the trailer here, as there are apps for it but nothing official (from what I can tell). Both iOS and WP7 have them.
    - Flipboard: It's sad with Flipboard as it's such a great newsreader. The only official app is for iOS, but frankly the iPhone version looks horrible, so without a tablet the experience here isn't that hot. Maybe with WP8. Currently there's nothing even remotely similar to this on the other platforms.
    - Google+: Is anyone still using this? No official app for WP7, but some clones. Apparently there's no API, so people are just screen scraping. Ugh.
    - Mint.com: This app has all kinds of buzz and a lot of votes on the application requests site. Official apps for iOS and droid. No WP7 love (yet).
    - TED: Quite a few TED apps on WP7 but nothing official. I think the third-party ones suffice, and some are pretty nice looking, taking advantage of the Metro interface and making for a good show.
    - WebMD: There's a third-party app on WP7 here but nothing official. It seems to contain all the same information and functionality the official apps do, so I'm not sure an official one is needed, but it's here for inclusion.

    The other apps in the top 40 were very specific to their platform (for example, all three of them have a "Find my Phone" app). There are others that are missing out on the WP7 platform like ooVoo, Words With Friends, and some of the Google apps (Google Voice for example). Since you can integrate your GMail account right into the Windows Phone (via linked inboxes), I'm not sure if there's a need for an official GMail app here. Looking at the numbers, Windows Phone still gets the worst of the deal here, with half a dozen highly popular "official" apps that exist on the other mobile platforms and, in some cases, nothing even remotely similar to the official app to compare. This doesn't include things like Instagram, Pinterest, and others (don't get me started on those). Still, with 20+ highly popular free apps all represented on all three mobile platforms, I don't think it's a bad place to be in. The Windows Phone platform could get a little more love from the vendors missing here, or at least open up your APIs so the third-party crowd can step in and take up the slack.

    P.S. These are just my observations and I might have got a few items wrong. Feel free to chime in with missing or incorrect information. I am after all human. Well, most of me is.

    Read the article

  • Android Developers, Are you adding APP 2 SD in a future app release and if so for which applications

    - by Anthony
    For Android application developers regarding 2.2 and the new App 2 SD feature: Android 2.2 now allows you to have your applications installed onto the SD card instead of using the phone's internal memory. Will any of you be adding this feature in your next release, and if so, what is your app? I know applications built with the App 2 SD function cannot be used while the SD card is mounted. Maybe 2 versions of each app on the market would work out great for those that need an app while the SD card is mounted. What do you think about this idea? Are you aware of any other negative issues that arise from an application built for this feature?

    Read the article

  • Starting multiple applications in Ubuntu Unity

    - by Black
    I would like to start multiple GUI applications with a single script or command in Ubuntu 12. By now, I have a shell script that:

    - starts an application in the foreground and waits for its termination
    - afterwards starts several applications (like browser, mailer, IRC client) in the background

    The script is working; however, all the applications get the same icon and are treated like different windows of one application, i.e. the script. Is there a way to start applications from a script so that Unity displays the icons of the applications, e.g. the Thunderbird icon, instead of a single default icon for the script? The script looks like this:

        #!/bin/bash
        # wait for termination...
        /usr/bin/libreoffice path/to/document
        # in background
        /usr/bin/thunderbird &
        /usr/bin/pidgin &

    Read the article

  • SEO: mobile version using AJAX: how can it be properly read by search engines?

    - by Olivier Pons
    Before anything else, I'd like to emphasize that I've already read this and this. Here's what I can do:

    Choice (1):
    - Create the classical Web version with all products in the page - http://www.myweb.com
    - Create the mobile Web version with all products in the page and use jQuery Mobile to format it all nicely. But this may be slow to load and format, and may provide a bad user experience - http://m.myweb.com

    Choice (2):
    - Create the classical Web version with all products in the page
    - Create the mobile Web version with almost nothing but a Web page showing "wait", then download all products into the page using AJAX and use jQuery Mobile to format it all nicely. Showing a "wait, loading" message gives far more time to do whatever I want and may provide a better user experience - http://m.myweb.com

    Question: if I choose solution (2), Google won't read anything on the mobile version (because all products will be downloaded into the page using AJAX), so it won't be properly read by search engines. What should I do, and how?

    Read the article

  • Windows Phone: Tango increases the number of applications that can run in multitasking, and the update may be named Refresh

    Windows Phone: Tango increases the number of applications that can run in multitasking; the update may be named Refresh and unveiled in China today. Update from 21/03/2012: Windows Phone Tango, the next minor update to Microsoft's mobile operating system, may be officially named Windows Phone Refresh. According to rumors, the launch of Tango could take place today in China; Microsoft has reportedly sent invitations to the Chinese press for an event on March 21. As a reminder, this update will mainly target low-end devices and will allow Microso...

    Read the article

  • What is the best practice for website design and markup now that mobile browsers are common?

    - by Jonathan Drain
    Back in 2008, smartphones were a small market and it was commonplace for sites to be designed for a fixed width - say, 900px or 960px - with the page centered if the browser window was larger. Many designers said fluid width was better, but since user screens typically varied between 1024x768 and 1920x1080, fluid width allowed longer line lengths than are optimal for ease of reading, and so many sites (including Stack Exchange) use fixed width. Now that mobile devices are common, what is the best approach to support both desktop and mobile browsers?

    - Establish a separate mobile site (e.g. mobile.example.com)?
    - Serve a different CSS to mobile devices? If so, how: server-side browser sniffing (see the sketch below), or a @media rule?
    - Use JavaScript or something to adapt the website dynamically to the client?
    - Should all websites be expected to be responsive?
    - Some kind of fluid layout?
    - Something else?
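    For the server-side sniffing option specifically, here is a minimal sketch in ASP.NET WebForms using the built-in HttpBrowserCapabilities.IsMobileDevice flag; the page class and stylesheet names are illustrative assumptions, not from the question:

        using System;
        using System.Web.UI;
        using System.Web.UI.HtmlControls;

        public partial class ProductPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                var css = new HtmlLink();
                // IsMobileDevice consults ASP.NET's browser definition files.
                // It is sniffing, so it can misclassify devices it doesn't know.
                css.Href = Request.Browser.IsMobileDevice
                    ? "~/css/mobile.css"
                    : "~/css/desktop.css";
                css.Attributes["rel"] = "stylesheet";
                css.Attributes["type"] = "text/css";
                Header.Controls.Add(css);
            }
        }

    A @media rule avoids this server-side classification entirely by letting the client pick styles from its own viewport width, which is one reason responsive layouts have become the common default.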

    Read the article

  • Partner Webcast – Oracle Coherence Applications on WebLogic 12c Grid - 21st Nov 2013

    - by Roxana Babiciu
    Oracle Coherence is the industry-leading in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data. As data volumes and customer expectations increase, driven by the "internet of things", social, mobile, cloud and always-connected devices, so does the need to handle more data in real-time, offload over-burdened shared data services and provide availability guarantees. The latest release of Oracle Coherence 12c comes with great improvements in ease of use, integration and RASP (Reliability, Availability, Scalability, and Performance) areas. In addition, it features an innovative approach to building and deploying a Coherence Application as an integral part of a typical JEE Enterprise Application. Read more here

    Read the article

  • How can I sign a Windows Mobile application for internal use?

    - by AR
    I'm developing a Windows Mobile application for internal company use, using the Windows Mobile 6 Professional SDK. Same old story: I've developed and tested on the emulator and all is well, but as soon as I deploy to a device I get an UnauthorizedAccessException when writing files or creating directories. I'm aware that an application installed to a device needs to be signed, but I'm running into roadblocks at every turn:

    - Using the project properties 'Devices' window I select 'Sign the project output with this certificate', and choose one of the sample certificates from the SDK. This results in a build error: "The signer's certificate is not valid for signing" when running SignTool.
    - If I try to run SignTool.exe from the command line, I get an error telling me to run SignTool.exe from a location in the system's PATH.
    - I can't use the 'Signing' tab in the Project Properties to create a test certificate - this is greyed out (presumably for WinMobile projects?).

    If at all possible, I would like to avoid having to go through VeriSign or the like to get a Mobile2Market certificate. If I have to go this route for a final version that's fine, but I need to at least be able to test the app on real devices. Any advice would be most welcome!

    Read the article

  • jQuery Mobile is adding text above my <input type="submit"> automatically, how do I prevent this?

    - by Jack Dalton
    I've just begun work on a mobile version of one of my sites. I've set up the sign-up form for my users. It worked fine and the CSS styled it correctly:

        @using (Html.BeginForm("XXX", "Registration", FormMethod.Post, new { @class = "twitter-sign-in-container" }))
        {
            <input type="submit" name="twitter-button" value="Sign in with Twitter" id="twitter-button" />
        }

    Once I added jQuery Mobile to the project, I found that random unstyled text started to show up. On inspection I found that all input submits were being wrapped in new tags, adding untagged text equal to the input's "value":

        <form action="/registration/xxx" class="twitter-sign-in-container" method="post">
            <div class="ui-btn ui-input-btn ui-corner-all ui-shadow">
                "Sign in with Twitter"
                <input type="submit" name="twitter-button" value="Sign in with Twitter" id="twitter-button">
            </div>
        </form>

    Does anyone have any clue as to why the "Sign in with Twitter" text is being added, and how I stop it? P.S. Less important, but I'd also like to know why jQuery Mobile wraps the form contents in the div shown above.

    Read the article

  • Creating STA COM compatible ASP.NET Applications

    - by Rick Strahl
    When building ASP.NET applications that interface with old school COM objects like those created with VB6 or Visual FoxPro (MTDLL), it's extremely important that the threads serving requests use Single Threaded Apartment (STA) threading. STA is a COM built-in technology that allows essentially single threaded components to operate reliably in a multi-threaded environment. STAs guarantee that COM objects instantiated on a specific thread stay on that specific thread, and any access to a COM object from another thread automatically marshals that call to the STA thread. The end effect is that you can have multiple threads, but a COM object instance lives on a fixed, never changing thread.

    ASP.NET by default uses MTA (multi-threaded apartment) threads, which are truly free spinning threads that pay no heed to COM object marshaling. This is vastly more efficient than STA threading, which has a bit of overhead in determining whether it's OK to run code on a given thread or whether some sort of thread/COM marshaling needs to occur. MTA COM components can be very efficient, but STA COM components in a multi-threaded environment always tend to have a fair amount of overhead.

    It's amazing how much COM Interop I still see today, so while it seems really old school to be talking about this topic, it's actually quite apropos for me as I have many customers using legacy COM systems that need to interface with other .NET applications. In this post I'm consolidating some of the hacks I've used to integrate with various ASP.NET technologies when using STA COM components.

    STA in ASP.NET

    Support for STA threading in the ASP.NET framework is fairly limited. Specifically, only the original ASP.NET WebForms technology supports STA threading directly via its STA Page Handler implementation, or what you might know as ASPCOMPAT mode. For WebForms, running STA components is as easy as specifying the ASPCOMPAT attribute in the @Page tag:

        <%@ Page Language="C#" AspCompat="true" %>

    which runs the page in STA mode. Removing it runs in MTA mode. Simple.

    Unfortunately all other ASP.NET technologies built on top of the core ASP.NET engine do not support STA natively. So if you want to use STA COM components in MVC or with classic ASMX Web Services, there's no automatic way like the ASPCOMPAT keyword available.

    So what happens when you run an STA COM component in an MTA application? In low volume environments - nothing much will happen. The COM objects will appear to work just fine as there are no simultaneous thread interactions, and the COM component will happily run on a single thread or multiple single threads one at a time. So for testing, running components in MTA environments may appear to work just fine. However, as load increases and threads get re-used by ASP.NET, COM objects will end up getting created on multiple different threads. This can result in crashes or hangs, or data corruption in the STA components, which store their state in thread local storage on the STA thread. If threads overlap, this global store can easily get corrupted, which in turn causes problems. STA ensures that any COM object instance loaded always stays on the same thread it was instantiated on.

    What about COM+?

    COM+ is supposed to address the problem of STA in MTA applications by providing an abstraction with its own thread pool manager for COM objects. It steps into the COM instantiation pipeline and hands out COM instances from its own internally maintained STA thread pool.
    This guarantees that the COM instantiation threads are STA threads if you're using STA components. COM+ works, but in my experience the technology is very, very slow for STA components. It adds a ton of overhead and reduces COM performance noticeably in load tests in IIS. COM+ can make sense in some situations, but for Web apps with STA components it falls short. In addition, there's also the need to ensure that COM+ is set up and configured on the target machine, and the fact that components have to be registered in COM+. COM+ also keeps components up at all times, so if a component needs to be replaced the COM+ package needs to be unloaded (the same is true for IIS hosted components, but it's more common to manage that). COM+ is an option for well established components, but native STA support tends to provide better performance and more consistent usability, IMHO.

    STA for non-supporting ASP.NET Technologies

    As mentioned above, only WebForms supports STA natively. However, by utilizing the WebForms ASP.NET Page handler internally, it's actually possible to trick various other ASP.NET technologies and let them work with STA components. This is ugly, but I've used each of these in various applications and I've had minimal problems making them work with FoxPro STA COM components, which is about as difficult as it gets for COM Interop in .NET. In this post I summarize several STA workarounds that enable you to use STA threading with these ASP.NET technologies:

    - ASMX Web Services
    - ASP.NET MVC
    - WCF Web Services
    - ASP.NET Web API

    ASMX Web Services

    I start with classic ASP.NET ASMX Web Services because it's the easiest mechanism that allows for STA modification. It also clearly demonstrates how the WebForms STA Page Handler is the key technology enabling the various other solutions to create STA components. Essentially the way this works is to override the WebForms Page class and hijack its init functionality for processing requests. Here's what this looks like for Web Services:

        namespace FoxProAspNet
        {
            public class WebServiceStaHandler : System.Web.UI.Page, IHttpAsyncHandler
            {
                protected override void OnInit(EventArgs e)
                {
                    IHttpHandler handler = new WebServiceHandlerFactory().GetHandler(
                        this.Context,
                        this.Context.Request.HttpMethod,
                        this.Context.Request.FilePath,
                        this.Context.Request.PhysicalPath);
                    handler.ProcessRequest(this.Context);
                    this.Context.ApplicationInstance.CompleteRequest();
                }

                public IAsyncResult BeginProcessRequest(
                    HttpContext context, AsyncCallback cb, object extraData)
                {
                    return this.AspCompatBeginProcessRequest(context, cb, extraData);
                }

                public void EndProcessRequest(IAsyncResult result)
                {
                    this.AspCompatEndProcessRequest(result);
                }
            }

            public class AspCompatWebServiceStaHandlerWithSessionState :
                WebServiceStaHandler, IRequiresSessionState
            {
            }
        }

    This class overrides the ASP.NET WebForms Page class, which has the little known AspCompatBeginProcessRequest() and AspCompatEndProcessRequest() methods that are responsible for providing the WebForms ASPCOMPAT functionality. These methods handle routing requests to STA threads. Note there are two classes - one that includes session state and one that does not. If you plan on using ASP.NET Session state, use the latter class, otherwise stick to the former. This maps to the EnableSessionState page setting in WebForms. This class simply hooks into this functionality by overriding the BeginProcessRequest and EndProcessRequest methods and always forcing them into the AspCompat methods.
    The way this works is that BeginProcessRequest() fires first to set up the threads and start initializing the handler. As part of that process the OnInit() method is fired, which is now already running on an STA thread. The code then creates an instance of the actual WebService handler factory and calls its ProcessRequest method to start executing, which generates the Web Service result. Immediately after ProcessRequest, the request is stopped with Application.CompleteRequest(), which ensures that the rest of the Page handler logic doesn't fire. This means that even though the fairly heavy Page class is overridden here, it doesn't end up executing any of its internal processing, which makes this code fairly efficient. In a nutshell, we're hijacking the Page HttpHandler and forcing it to process the WebService process handler in the context of the AspCompat handler behavior.

    Hooking up the Handler

    Because the above is an HttpHandler implementation, you need to hook up the custom handler and replace the standard ASMX handler. To do this you need to modify the web.config file (here for IIS 7 and IIS Express):

        <configuration>
          <system.webServer>
            <handlers>
              <remove name="WebServiceHandlerFactory-Integrated-4.0" />
              <add name="Asmx STA Web Service Handler" path="*.asmx" verb="*"
                   type="FoxProAspNet.WebServiceStaHandler" precondition="integrated" />
            </handlers>
          </system.webServer>
        </configuration>

    (Note: the name for WebServiceHandlerFactory-Integrated-4.0 might be slightly different depending on your server version. Check the IIS Handler configuration in the IIS Management Console for the exact name, or simply remove the handler from the list there, which will propagate to your web.config.)

    For IIS 5 & 6 (Windows XP/2003) or the Visual Studio Web Server use:

        <configuration>
          <system.web>
            <httpHandlers>
              <remove path="*.asmx" verb="*" />
              <add path="*.asmx" verb="*" type="FoxProAspNet.WebServiceStaHandler" />
            </httpHandlers>
          </system.web>
        </configuration>

    To test, create a new ASMX Web Service and create a method like this:

        [WebService(Namespace = "http://foxaspnet.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        public class FoxWebService : System.Web.Services.WebService
        {
            [WebMethod]
            public string HelloWorld()
            {
                return "Hello World. Threading mode is: " +
                       System.Threading.Thread.CurrentThread.GetApartmentState();
            }
        }

    Run this before you put in the web.config configuration changes and you should get:

        Hello World. Threading mode is: MTA

    Then put the handler mapping into web.config and you should see:

        Hello World. Threading mode is: STA

    And you're on your way to using STA COM components. It's a hack, but it works well! I've used this with several high volume Web Service installations with various customers and it's been fast and reliable.

    ASP.NET MVC

    ASP.NET MVC has quickly become the most popular ASP.NET technology, replacing WebForms for creating HTML output. MVC is more complex to get started with, but once you understand the basic structure of how requests flow through the MVC pipeline, it's easy to use and amazingly flexible in manipulating HTML requests. In addition, MVC has great support for non-HTML output sources like JSON and XML, making it an excellent choice for AJAX requests without any additional tools. Unlike WebForms, ASP.NET MVC doesn't support STA threads natively, and so some trickery is needed to make it work with STA threads as well. MVC gets its handler implementation through custom route handlers using ASP.NET's built-in routing semantics.
    To work in an STA handler requires working in the Page Handler as part of the Route Handler implementation. As with the Web Service handler, the first step is to create a custom HttpHandler that can instantiate an MVC request pipeline properly:

        public class MvcStaThreadHttpAsyncHandler : Page, IHttpAsyncHandler, IRequiresSessionState
        {
            private RequestContext _requestContext;

            public MvcStaThreadHttpAsyncHandler(RequestContext requestContext)
            {
                if (requestContext == null)
                    throw new ArgumentNullException("requestContext");
                _requestContext = requestContext;
            }

            public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
            {
                return this.AspCompatBeginProcessRequest(context, cb, extraData);
            }

            protected override void OnInit(EventArgs e)
            {
                var controllerName = _requestContext.RouteData.GetRequiredString("controller");
                var controllerFactory = ControllerBuilder.Current.GetControllerFactory();
                var controller = controllerFactory.CreateController(_requestContext, controllerName);
                if (controller == null)
                    throw new InvalidOperationException("Could not find controller: " + controllerName);

                try
                {
                    controller.Execute(_requestContext);
                }
                finally
                {
                    controllerFactory.ReleaseController(controller);
                }

                this.Context.ApplicationInstance.CompleteRequest();
            }

            public void EndProcessRequest(IAsyncResult result)
            {
                this.AspCompatEndProcessRequest(result);
            }

            public override void ProcessRequest(HttpContext httpContext)
            {
                throw new NotSupportedException(
                    "STAThreadRouteHandler does not support ProcessRequest called (only BeginProcessRequest)");
            }
        }

    This handler code figures out which controller to load and then executes the controller. MVC internally provides the information needed to route to the appropriate method and pass the right parameters. Like the Web Service handler, the logic occurs in OnInit() and performs all the processing in that part of the request.

    Next, we need a RouteHandler that can actually pick up this handler. Unlike the Web Service handler, where we simply registered the handler, MVC requires a RouteHandler to pick up the handler. RouteHandlers look at the URL's path and based on that decide on what handler to invoke. The route handler is pretty simple - all it does is load our custom handler:

        public class MvcStaThreadRouteHandler : IRouteHandler
        {
            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                if (requestContext == null)
                    throw new ArgumentNullException("requestContext");
                return new MvcStaThreadHttpAsyncHandler(requestContext);
            }
        }

    At this point you can instantiate this route handler and force STA requests to MVC by specifying a route.
    The following sets up the ASP.NET default route:

        Route mvcRoute = new Route("{controller}/{action}/{id}",
            new RouteValueDictionary(
                new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
            new MvcStaThreadRouteHandler());
        RouteTable.Routes.Add(mvcRoute);

    To make this code a little easier to work with and to mimic the behavior of the routes.MapRoute() extension method that MVC provides, here is an extension method for MapMvcStaRoute():

        public static class RouteCollectionExtensions
        {
            public static void MapMvcStaRoute(this RouteCollection routeTable,
                                              string name, string url, object defaults = null)
            {
                Route mvcRoute = new Route(url,
                    new RouteValueDictionary(defaults),
                    new MvcStaThreadRouteHandler());
                RouteTable.Routes.Add(mvcRoute);
            }
        }

    With this, the syntax to add a route becomes a little easier and matches the MapRoute() method:

        RouteTable.Routes.MapMvcStaRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );

    The nice thing about this route handler, STA handler and extension method is that it's fully self contained. You can put all three into a single class file and stick it into your Web app, and then simply call MapMvcStaRoute() and it just works. Easy!

    To see whether this works, create an MVC controller like this:

        public class ThreadTestController : Controller
        {
            public string ThreadingMode()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }
        }

    Try this test first with only the MapRoute() hookup in the route configuration, in which case you should get MTA as the value. Then change the MapRoute() call to MapMvcStaRoute(), leaving all the parameters the same, and re-run the request. You now should see STA as the result. You're on your way to using STA COM components reliably in ASP.NET MVC.

    WCF Web Services running through IIS

    WCF Web Services provide a more robust and wider range of services for Web Services. You can use WCF over HTTP, TCP, and Pipes, and WCF services support WS* secure services. There are many features in WCF that go way beyond what ASMX can do. But it's also a bit more complex than ASMX. As a basic rule, if you need to serve straight SOAP services over HTTP, I'd recommend sticking with the simpler ASMX services, especially if COM is involved. If you need WS* support or want to serve data over non-HTTP protocols, then WCF makes more sense. WCF is not my forte, but I found a solution from Scott Seely on his blog that describes the process and seems to work well. I'm copying his code below so this STA information is all in one place, with a quick explanation.

    Scott's code basically works by creating a custom OperationBehavior which can be specified via an [STAOperationBehavior] attribute on every method. Using his attribute you end up with a class (or interface, if you separate the contract and class) that looks like this:

        [ServiceContract]
        public class WcfService
        {
            [OperationContract]
            public string HelloWorldMta()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }

            // Make sure you use this custom STAOperationBehavior
            // attribute to force STA operation of service methods
            [STAOperationBehavior]
            [OperationContract]
            public string HelloWorldSta()
            {
                return Thread.CurrentThread.GetApartmentState().ToString();
            }
        }

    Pretty straightforward. The latter method returns STA while the former returns MTA. To make STA work, every method needs to be marked up. The implementation consists of the attribute and an OperationInvoker implementation.
    Here are the two classes required to make this work, from Scott's post:

        public class STAOperationBehaviorAttribute : Attribute, IOperationBehavior
        {
            public void AddBindingParameters(OperationDescription operationDescription,
                System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
            {
            }

            public void ApplyClientBehavior(OperationDescription operationDescription,
                System.ServiceModel.Dispatcher.ClientOperation clientOperation)
            {
                // If this is applied on the client, well, it just doesn't make sense.
                // Don't throw in case this attribute was applied on the contract
                // instead of the implementation.
            }

            public void ApplyDispatchBehavior(OperationDescription operationDescription,
                System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation)
            {
                // Change the IOperationInvoker for this operation.
                dispatchOperation.Invoker = new STAOperationInvoker(dispatchOperation.Invoker);
            }

            public void Validate(OperationDescription operationDescription)
            {
                if (operationDescription.SyncMethod == null)
                {
                    throw new InvalidOperationException("The STAOperationBehaviorAttribute " +
                        "only works for synchronous method invocations.");
                }
            }
        }

        public class STAOperationInvoker : IOperationInvoker
        {
            IOperationInvoker _innerInvoker;

            public STAOperationInvoker(IOperationInvoker invoker)
            {
                _innerInvoker = invoker;
            }

            public object[] AllocateInputs()
            {
                return _innerInvoker.AllocateInputs();
            }

            public object Invoke(object instance, object[] inputs, out object[] outputs)
            {
                // Create a new, STA thread
                object[] staOutputs = null;
                object retval = null;
                Thread thread = new Thread(
                    delegate()
                    {
                        retval = _innerInvoker.Invoke(instance, inputs, out staOutputs);
                    });
                thread.SetApartmentState(ApartmentState.STA);
                thread.Start();
                thread.Join();
                outputs = staOutputs;
                return retval;
            }

            public IAsyncResult InvokeBegin(object instance, object[] inputs,
                AsyncCallback callback, object state)
            {
                // We don't handle async...
                throw new NotImplementedException();
            }

            public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
            {
                // We don't handle async...
                throw new NotImplementedException();
            }

            public bool IsSynchronous
            {
                get { return true; }
            }
        }

    The key in this setup is the Invoker and its Invoke method, which creates a new thread and then fires the request on this new thread. Because this approach creates a new thread for every request, it's not super efficient. There's a bunch of overhead involved in creating the thread and throwing it away after each request, but it'll work for low volume requests and ensure each request runs in STA mode. If better performance is required, it would be useful to create a custom thread manager that can pool a number of STA threads and hand off threads as needed, rather than creating new threads on every request.

    If your Web Service needs are simple and you only need to serve standard SOAP 1.x requests, I would recommend sticking with ASMX services. It's easier to set up and work with, and for STA component use it'll be significantly better performing, since ASP.NET manages the STA thread pool for you rather than firing new threads for each request. One nice thing about Scott's code though is that it works in any WCF environment, including self hosting. It has no dependency on ASP.NET or WebForms for that matter.

    STA - If you must

    STA components are a pain in the ass, and thankfully there isn't too much stuff out there anymore that requires them. But when you need it and you need to access STA functionality from .NET, at least there are a few options available to make it happen.
    Each of these solutions is a bit hacky, but they work - I've used all of them in production with good results with FoxPro components. I hope compiling all of these in one place here makes STA consumption a little bit easier. I feel your pain :-)

    Resources

    - Download STA Handler Code Examples
    - Scott Seely's original STA WCF OperationBehavior Article

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in FoxPro, ASP.NET, .NET, COM

    Read the article

  • Database Trends & Applications column: Database Benchmarking from A to Z

    - by KKline
    Have you heard of the monthly print and web magazine Database Trends & Applications (DBTA)? Did you know I'm the regular columnist covering SQL Server? For the past six months, I've been writing a series of articles about database benchmarking, culminating in the latest article discussing my three favorite database benchmarking tools: the free, open-source HammerDB, the native SQL Server Distributed Replay Utility, and the commercial Benchmark Factory from Dell / Quest Software. Wondering what...(read more)

    Read the article

  • State management using the Application class in ASP.Net applications

    - by nikolaosk
    I have explained some of the state mechanisms that we have at our disposal for preserving state in ASP.Net applications in various posts on this blog. You can have a look at this post, this post, this post and this one. I have not yet presented an example of using the Application class/object for preserving state within our application. Application state is available globally in an application. The way we access Application State is through the HttpApplication object's Application property. Let...(read more)
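    A minimal sketch of the pattern the excerpt describes, using the standard HttpApplication.Application property; the key name and the hit-counter logic are illustrative assumptions, not taken from the article:

        // Global.asax.cs - Application state is shared by all users and requests.
        public class Global : System.Web.HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Seed a global value once, when the application starts.
                Application["HitCount"] = 0;
            }
        }

        // In a page's code-behind: Lock()/UnLock() serialize writes coming
        // from concurrent requests so the shared value isn't corrupted.
        protected void Page_Load(object sender, EventArgs e)
        {
            Application.Lock();
            Application["HitCount"] = (int)Application["HitCount"] + 1;
            Application.UnLock();
            Response.Write("Hits so far: " + Application["HitCount"]);
        }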

    Read the article

  • Hosting StreamInsight applications using WCF

    - by gsusx
    One of the fundamental differentiators of Microsoft's StreamInsight compared to other Complex Event Processing (CEP) technologies is its flexible deployment model. In that sense, a StreamInsight solution can be hosted within an application or as a server component. This duality contrasts with most of the popular CEP frameworks on the current market, which are almost exclusively server based. It's undoubtedly true that the ability to embed a CEP engine in your applications opens new possibilities...(read more)
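    For readers unfamiliar with the embedded side of that duality, here is a minimal sketch of in-process hosting with the StreamInsight API; the instance and application names are illustrative assumptions and must match a StreamInsight instance installed on the machine:

        using Microsoft.ComplexEventProcessing;

        class EmbeddedCepHost
        {
            static void Main()
            {
                // Server.Create hosts the CEP engine inside this process
                // instead of connecting to a standalone StreamInsight server.
                using (Server server = Server.Create("MyInstance"))
                {
                    Application app = server.CreateApplication("MyApp");
                    // ... define event sources, queries and sinks against 'app' ...
                }
            }
        }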

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework.

    Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, will lead down the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
    If I solely look at the Linq query, the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "Penny wise, Pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm-implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
    This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way? Verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding about which parts contribute how much time to the total task, triggered by which other parts, and for example how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code, and thus reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions which are considered slow in your application, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most of the data is produced by code in the area under investigation. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling

.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and at the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and the database each contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers.

For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql / LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), which work the same way as LLBLGen Prof. For RDBMS profilers, check whether the RDBMS vendor ships one. For SQL Server, for example, the profiler ships with SQL Server; for Oracle it's built into the RDBMS; there are also third-party tools. Which tool you use isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on, and how long they took.

Here the O/R mapper profilers have an advantage: they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important, because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to transport across the network than a small resultset, and a database profiler usually doesn't take this into account.

Another, more low-level, tool is tracing, though not all O/R mappers support it (LLBLGen Pro and NHibernate do): most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other behind-the-scenes activity as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.
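If you don't have an O/R mapper profiler at hand, you can approximate the "time from the application's perspective" such profilers report with a stopwatch around query execution and resultset consumption. Below is a minimal ADO.NET sketch; the connection string and query are assumptions for illustration:

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class QueryTimingSketch
    {
        static void Main()
        {
            // Assumed connection string; adjust for your environment.
            const string connectionString =
                "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT * FROM Orders", connection))
            {
                connection.Open();

                var executeTimer = Stopwatch.StartNew();
                using (var reader = command.ExecuteReader())
                {
                    executeTimer.Stop(); // time until the first results are available

                    var consumeTimer = Stopwatch.StartNew();
                    int rows = 0;
                    while (reader.Read())
                    {
                        rows++; // consuming rows includes network transport time
                    }
                    consumeTimer.Stop();

                    Console.WriteLine("Execute: " + executeTimer.ElapsedMilliseconds +
                        " ms, transport/consume: " + consumeTimer.ElapsedMilliseconds +
                        " ms for " + rows + " rows");
                }
            }
        }
    }

A large gap between the two numbers points at transport and consumption cost, which is exactly the part a database-side profiler tends to miss.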
Interpret

After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take part in the task and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation.

All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are the ones to look at. First examine the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on how fast the data is fetched from the database. If just 1 query was executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data.

To interpret means that you follow the path from beginning to end through the data collected and determine where along the path the most time is contributed. It also means checking whether this was expected or is totally unexpected. My earlier example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you have to decide whether further action in this area is necessary, based on what the analysis results show: if the results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on.

In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that when someone else looks at the application in the future and starts asking questions, you can answer them properly; new analysis is then only necessary if the situation has changed.

Fix

After interpreting the analysis results, you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (trading memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results.

After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they merely moved the problem to a different part of the application. Don't fall into the trap of doing a partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
Performance tuning is about dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to move away from the strict-OO path and execute queries directly against the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers, or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or compromise: which analysis data and which interpretation led you to it. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers

Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you fix the hotspots you found in the interpret step. Sketches illustrating some of these items follow the list.

SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid (indirectly) fetch the Customer row for each Order row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading, e.g. a prefetch path, to fetch the customers together with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed one after another, you have this problem.

Prefetch paths using many path nodes, or using sorting or limiting: an eager-loading problem. Prefetch paths can help with performance, but as 1 query is executed per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client into the parent node. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will in many cases join subtype and supertype tables, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible, so a query is executed which returns only the rows in the requested page. When using paging in a web application, be sure to switch server-side paging on in the datasource control used; paging on the grid alone is not enough, as that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g.
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will then do the DISTINCT filtering on the client. This is a little slower, but it does perform the paging at the data-reader level, so it won't fetch all rows even if the query suggests it does.

Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

Doing client-side aggregates / scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data is processed on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse it in-memory to calculate a value.

Using .ToList() constructs inside Linq queries: you might use .ToList() somewhere inside a Linq query, which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and filter them in-memory, as the Linq query is defined on an IEnumerable<T> and not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run; a corrected version is sketched after this list.

Fetching all entities to delete into memory first: to delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly, deleting the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity has actually been removed from the DB and can thus be removed from the cache.

Fetching all entities to update with an expression into memory first: similar to the previous point, it's more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop and saving them afterwards. It might however be a compromise you don't want to make, as it works around the idea of having an object graph in memory which is manipulated; it makes the code fully aware there's an RDBMS somewhere.
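To illustrate the SELECT N+1 fix from the first item, here is a sketch using LLBLGen Pro's adapter API as I recall it; the generated-code names (OrderEntity, EntityType, PrefetchPathCustomer) are assumptions for illustration, so treat this as a sketch rather than copy-paste code:

    // Eager-load the Customer of each Order via a prefetch path: 2 queries
    // in total (orders + customers) instead of 1 + N with lazy loading.
    var orders = new EntityCollection<OrderEntity>();
    var path = new PrefetchPath2((int)EntityType.OrderEntity);
    path.Add(OrderEntity.PrefetchPathCustomer);

    using (var adapter = new DataAccessAdapter())
    {
        // One query fetches the orders, one fetches the related customers;
        // the O/R mapper merges the customers into their parent orders client-side.
        adapter.FetchEntityCollection(orders, null, path);
    }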
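For the .ToList() item, here is a corrected version of the query from the list, assuming the same metaData Linq meta-data object as in the example: keep the query on IQueryable<T> so the filter is translated into SQL, and materialize only at the end:

    // Correct: the query stays on IQueryable<T>, so the where clause is
    // translated into SQL and only Norwegian customers cross the network.
    var q = from c in metaData.Customers
            where c.Country == "Norway"
            select c;
    var norwegianCustomers = q.ToList(); // execution happens here

    // Problematic version from the list: ToList() first fetches ALL
    // customers; the where clause then filters them in memory.
    var qBad = from c in metaData.Customers.ToList()
               where c.Country == "Norway"
               select c;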
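And for the "fetching all entities to delete into memory first" item: whether you use the O/R mapper's bulk-delete feature or plain ADO.NET, the point is that a single DELETE statement does the work in one round trip. A minimal ADO.NET sketch, with table and column names assumed for illustration:

    using System.Data.SqlClient;

    class BulkDeleteSketch
    {
        // Deletes all cancelled orders in one round trip, without fetching
        // any entity into memory first. "Orders" / "IsCancelled" are assumed names.
        static int DeleteCancelledOrders(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "DELETE FROM [Orders] WHERE [IsCancelled] = 1", connection))
            {
                connection.Open();
                return command.ExecuteNonQuery(); // number of rows deleted
            }
        }
    }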
Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is as key to success as knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.

    Read the article

  • Is CodeFirst intended for large scale applications?

    - by RoboShop
    I've been reading up on Entity Framework, in particular EF 4.1, following this link ( http://weblogs.asp.net/scottgu/archive/2010/07/16/code-first-development-with-entity-framework-4.aspx) and its guide on Code First. I find it neat, but I was wondering: is Code First supposed to be just a solution for rapid development where you can jump right in without much planning, or is it actually intended to be used for large scale applications?
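    For readers unfamiliar with the approach the question refers to: in EF 4.1 Code First, the model is expressed as plain classes plus a DbContext, and the database can be created from them by convention. A minimal sketch (class and property names invented):

        using System.Data.Entity; // EF 4.1 (EntityFramework package)

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Code First: no EDMX designer model; EF derives the schema from the classes.
        public class ShopContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
        }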

    Read the article

  • 5 Mac Applications For Web And Graphic Design

    - by Jyoti
    In this article we look at free applications that are useful and effective for the development and creation of websites with your Mac computer. Without further ado, here are 5 excellent Mac applications for web and graphic design. Fotoflexer: Fotoflexer claims to be “The world’s most advanced online image editor”. It offers completely free access to numerous features such as [...]

    Read the article

  • Skechers Leverages Oracle Applications, Business Intelligence and On Demand Offerings to Drive Long-Term Growth

    - by user801960
    This month Oracle Retail in the USA announced that Skechers - a world leading lifestyle footwear retailer - would be adopting several Oracle Retail products as part of their global growth strategy and to maximise business efficiency. While based primarily in the USA, Skechers is a respected retailer across the world and has been an Oracle customer since 1997. The key information about the announcement is below. To find out more about Skechers visit their website: http://www.skechers.com/

    Skechers U.S.A. Inc., an award-winning global leader in the lifestyle footwear industry, has upgraded and expanded its Oracle® Applications investment, implemented Oracle Database and moved to Oracle On Demand, Oracle’s premier cloud service, to support rapid growth across its retail and wholesale channels. The new business information systems are part of a larger initiative for the billion-dollar-plus footwear company to fuel growth, reduce total cost of ownership and enable the business to respond faster to market opportunities.

    With more than 3,000 styles of shoes to design, develop and market, Skechers upgraded to Oracle’s PeopleSoft Enterprise Financial Management and PeopleSoft Supply Chain Management to increase operational efficiencies and improve controls by establishing an integrated, industry-specific platform. An Oracle customer since 1997, Skechers implemented PeopleSoft Enterprise Real Estate Management to meet the rapid growth of its retail stores worldwide. The company is the first customer to go live on the Real Estate Management module and worked closely with Oracle to provide development insight.

    Skechers also implemented Oracle Fusion Governance, Risk, and Compliance applications. This deployment enabled the company to leverage its existing corporate governance and compliance efforts throughout the global enterprise and more effectively manage the audit processes across multiple business units, processes and systems, while reducing audit costs.

    Next, Skechers leveraged Oracle Financial Analytics, a pre-built Oracle Business Intelligence Application, together with PeopleSoft Enterprise Project Costing and PeopleSoft Enterprise Contracts, to develop a custom Royalty Management dashboard, giving managers better financial visibility into the company’s licensing contracts.

    The company switched to Oracle Database and moved database hosting and management to Oracle On Demand to reduce maintenance, implementation and system administration costs. As a result, Skechers is also achieving better response times and is delivering a higher level of 24x7 support.

    OSI Consulting, a Platinum partner in Oracle PartnerNetwork (OPN), provided implementation and integration services to Skechers. To view the full announcement please click here

    Read the article

  • Slow Chat with Industry Experts: Developing Multithreaded Applications

    Sponsored by Intel. Join the experts who created The Intel Guide for Developing Multithreaded Applications for a slow chat about multithreaded application development. Bring your questions about application threading, memory management, synchronization, programming tools and more, and get answers from the parallel programming experts. Post your questions here

    Read the article

< Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >