Search Results


  • Where does Firefox store certificates and how to delete one?

    - by majid4466
    Hi all, I don't know the root cause of my problem, but whatever it is, I experience frequent DNS failures. When it happens I cannot browse to my Gmail inbox. I use two DNS settings: one is the public DNS server offered by OpenDNS, and the other is Google's free DNS server. When the failure happens I switch from the active setting to the other one and the problem goes away. But there is a side effect: when browsing to Gmail fails to load and I then switch the DNS, I receive an error saying the security certificate the site uses is only valid for OpenDNS. This is my wild guess at what is going on: 1. OpenDNS fails to resolve mail.google.com to its IP. 2. My ISP sends me a page showing search results for 'mail.google.com'. 3. Since I have received some sort of page instead of a timeout, the browser mistakenly binds the certificate it has cached for 'mail.google.com' to the new domain. This search page is not served over https, so no exception is thrown by the wrong binding. 4. After switching the DNS, the domain is correctly resolved to Gmail's server IP, and since this is on https the handshake is triggered. 5. Now, because of the wrong binding, which passed quietly as no handshake was involved, I receive the error saying the certificate used by 'mail.google.com' is only good for OpenDNS. I don't know much about DNS, and less about https and the process of establishing a secure connection. How correct is my explanation? How can I delete the wrong association and/or the certificate? Thanks for listening. P.S. The problem goes away by itself, but sometimes it takes several hours before Gmail works again.
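
    One way to check this guess is to ask each resolver directly what it returns for the Gmail host name. A minimal sketch, assuming the dig utility is available (the addresses below are the well-known OpenDNS and Google Public DNS resolvers):

      # What does each resolver currently return for mail.google.com?
      dig +short mail.google.com @208.67.222.222   # OpenDNS
      dig +short mail.google.com @8.8.8.8          # Google Public DNS

      # If one resolver hands back something other than a Google address
      # (e.g. an OpenDNS guide/search host), that fits the certificate
      # warning described above.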


  • Iptables and system-config-firewall

    - by nivde92
    I had a set of netfilter rules set up with iptables, but someone else told me to use system-config-firewall to add a rule for sharing files with Windows (Samba). This rewrote the iptables rules file and I lost my own custom rules. I have a backup copy, but am having trouble restoring them. Edit: The server is CentOS. I already tried to restore the rules with iptables-restore < /root/working.iptables.rules but for some reason the rules don't change. What are you trying to do? Trying to restore the iptables rules that I have in a backup file. What have you tried in order to make it happen? I've tried to modify the iptables file with vim, since the iptables-restore command was no help. What results did you expect? To get the old rules back. What actually happened? Nothing: whether I run the command or edit the file by hand, the rules don't change at all. Maybe something else is overwriting them.
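
    For reference, a minimal sketch of restoring and persisting a saved rule set on a CentOS box, using the backup path mentioned above. iptables-restore only changes the rules in the running kernel, so it is worth confirming the live rules and then writing them back to /etc/sysconfig/iptables, the file that system-config-firewall rewrites:

      # Load the backed-up rules into the running kernel
      iptables-restore < /root/working.iptables.rules

      # Confirm the live rule set actually changed
      iptables -L -n -v

      # Persist the live rules to /etc/sysconfig/iptables so the iptables
      # service loads them at boot (re-running system-config-firewall
      # will overwrite this file again)
      service iptables save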


  • Where are the default ulimit values set? (Linux, CentOS)

    - by nomercysir
    I have two CentOS (5) servers with nearly identical specs. When I log in and do ulimit -u, on one machine I get unlimited, and on the other, 77824. When I run a cron job like * * * * * ulimit -u > ulimit.txt I get the same results (unlimited, 77824). I am trying to determine where these are set so that I can alter them. They are not set in any of my profiles (.bashrc, /etc/profile, etc.; these wouldn't affect cron anyway) nor in /etc/security/limits.conf (which is empty). I have scoured Google and even gone so far as to do grep -Ir 77824 / but nothing has turned up so far. I don't understand how these machines could have come preset with different limits. I am actually wondering not for these machines, but for a different (CentOS 6) machine which has a limit of 1024, which is far too small. I need to run cron jobs with a higher limit, and the only way I know how to set that is in the cron job itself. That's OK, but I'd rather set it system-wide so it's not as hacky. Thanks for any help. This seems like it should be easy (NOT).

    EDIT -- SOLVED: OK, I figured this out. It seems to be an issue either with CentOS 6 or perhaps with my machine configuration. On the CentOS 5 configuration, I can set in /etc/security/limits.conf: * - nproc unlimited and that effectively updates the account and cron limits. However, this does not work on my CentOS 6 box. Instead, I must do: myname1 - nproc unlimited myname2 - nproc unlimited ... and then things work as expected. Maybe the UID specification works too, but the wildcard (*) definitely DOES NOT here. Oddly, wildcards DO work for the 'nofile' limit. I still would love to know where the default values are actually coming from, because by default this file is empty, and I couldn't see why I had different defaults for the two CentOS boxes, which had identical hardware and were from the same provider.
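
    A detail that may explain the defaults "coming from nowhere": pam_limits reads /etc/security/limits.conf and then every file under /etc/security/limits.d/, and CentOS 6 commonly ships a 90-nproc.conf there whose wildcard entry caps nproc at 1024. A specific username entry takes precedence over a wildcard match, which would also explain why the per-user lines work while * does not. A quick check (the raised values shown are only illustrative):

      # Where is nproc actually being capped?
      grep -rs nproc /etc/security/limits.conf /etc/security/limits.d/

      # Example of what a raised entry would look like in whichever file
      # the cap turns up in (values are illustrative):
      #   *    soft    nproc    65536
      #   *    hard    nproc    65536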


  • Make eix available version match emerge

    - by Ryaner
    We have our Gentoo hosts using a binhost with EMERGE_DEFAULT_OPTS="--getbinpkgonly --usepkgonly" in the make.conf file so that each host only pulls down binary packages. All works well from that side. I use eix to check on software versions for upgrades, but have hit a problem where eix will see an available version ahead of what is available on the binhost. Using glibc as an example: ietpl [VE] / # emerge -s glibc Searching... [ Results for search key : glibc ] [ Applications found : 1 ] * sys-libs/glibc Latest version available: 2.14.1-r3 Latest version installed: 2.14.1-r3 Homepage: Description: GNU libc6 (also called glibc2) C library License: LGPL-2 Then eix reports a higher version available: ietpl [VE] / # export LASTVERSION='{last}<version>{}' ietpl [VE] / # /usr/bin/eix --nocolor --format '<category> <name> [<installedversions:LASTVERSION>] [<bestversion:LASTVERSION>] \n' --exact --category-name sys-libs/glibc sys-libs glibc [2.14.1-r3] [2.15-r2] What I'm after is for eix to report the latest version available as 2.14.1-r3, like emerge does. I've a feeling this is possible since, without any formatting, eix returns Available versions: (2.2) ~2.9_p20081201-r3!s 2.10.1-r1!s 2.11.3!s ~2.12.1-r3!s 2.12.2!s{tbz2} ~2.13-r2!s 2.13-r4!s ~2.14!s ~2.14.1-r2!s 2.14.1-r3!s{tbz2} ~2.15-r1!s 2.15-r2!s ~2.15-r3!s **2.16.0!s **9999!s correctly tagging the latest unmasked binary package with {tbz2}. I would have thought that the binary flag (--binary: Match packages with *.tbz2 files) would do it, but that returns no matches.


  • Reasons for Ajax navigation breaking on Coldfusion/Apache when running an app in iOS fullscreen mode? [closed]

    - by frequent
    Not sure if this belongs on SO or here. I'm running a web app using jQuery, jQuery Mobile and requireJS, with Apache, ColdFusion 8 and MySQL 5.0.88 server-side. The app works fine until I try to run it in fullscreen mode on iOS (add icon to home screen, launch the app from there with <meta name="apple-mobile-web-app-capable" content="yes" /> specified). This meta tag will break the jQuery Mobile AJAX navigation: the AJAX request fails and the requested page is loaded as a new page, thereby restarting the app on every page change. I have chased this through the whole front end, starting from requireJS through jQuery Mobile's AJAX navigation down to the AJAX request being made in jQuery: xhr.send( ( s.hasContent && s.data ) || null ); In a regular browser this works without problems. In fullscreen mode, it fails (readyState=0, empty response). I have found this article, which argues that fullscreen mode is like a browser instance with different HTTP strings. On ASP.NET this results in the browser not being identified by the server and only basic browser settings being assumed (e.g. no JavaScript). I'm a little lost where to start looking for possible causes server-side. I have not written any server code for handling AJAX page navigation, so this must be something that is handled out of the box by ColdFusion or Apache? Question: Where could I start looking for probable causes of fullscreen mode breaking AJAX navigation if I assume ColdFusion or Apache are the culprits? Is there a setting I'm missing in httpd.conf? What else could be the problem? Thanks for any input!
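
    The "different HTTP strings" claim from that article can at least be checked server-side: pages launched from the iOS home screen reportedly send a User-Agent without the usual Safari token, so anything on the server that keys off the User-Agent (browser sniffing, device detection) may behave differently. A rough sketch for comparing what Apache actually sees (the log path is an assumption; adjust it for your vhost and log format):

      # List the User-Agent strings in a combined-format access log, most
      # frequent first; compare a normal Safari visit with a home-screen launch.
      awk -F'"' '{print $6}' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head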


  • Copy-paste speed very slow for a large number of files on Windows [closed]

    - by Arno2501
    I've run the following test: I created a folder containing 15,000 files of 400 bytes each using this batch script: @ECHO off SET times=15000 FOR /L %%i IN (1,1,%times%) DO ( fsutil file createnew filename%%i.txt 400 ) Then I copy-paste it on my Windows computer using this command: robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\ After it has completed I can see that the transfer rate was 915810 Bytes/sec, which is less than 1 MB/s. It took several seconds to copy 7 MBytes. Please note that this is very slow. I've tried the same with a folder containing a single file of 50 MBytes, and the transfer rate is 1219512195 Bytes/sec (yes, over 1 GB/s) - instantaneous. Why does copying a large number of files take so much time and so many resources on a Windows filesystem? Please note that I've tried the same on a Linux system which runs on the same computer in a virtual machine (VMware Player) with an ext3 filesystem; there I use the cp command and the copy is instantaneous. Please also note the following: there is no antivirus; I've tested that behaviour on multiple Windows computers (always NTFS) and I always get comparable results (transfer rate under 1 MB/s, avg 7-8 seconds to copy 7 MBytes); I've tested on multiple Linux ext3 systems and the copy is always instantaneous for that amount (15,000 files of 400 bytes). The question is about understanding what makes a Windows filesystem so slow at copying a large number of files compared to, for instance, a Linux one.
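
    For completeness, a rough sketch of the Linux side of the comparison described above (same file count and size; timing the copy isolates per-file overhead rather than raw throughput):

      # Create 15,000 files of 400 bytes each, then time the recursive copy
      mkdir -p LargeNumberOfFiles
      for i in $(seq 1 15000); do
          head -c 400 /dev/zero > "LargeNumberOfFiles/filename${i}.txt"
      done
      time cp -r LargeNumberOfFiles LargeNumberOfFiles2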


  • Slow network interaction between 2 kvm host machines

    - by VirtualNoob
    We have 2 physical machines, Host1 is a CentOS 6.4 kvm host and hosts ~7 kvm VMs all running Ubuntu 12.04 - all of this runs perfectly. Recently we've added a 2nd host system, host2, again a CentOS 6.4 kvm host with a view to running another couple of VMs and providing some failover against host1 should it be required. Both physical machines reside in the same cabinet in our DC, and are on the same subnet - let's say host1: 1.1.1.64 and host2: 1.1.1.81. Both have their gateway set to the DC gateway of 1.1.1.254 with no hardware firewall in between. On each machine, I have 4 NICs that are bonded together to form a single interface, which is then bridged to allow the VMs to access the network. All of the VMs are online, and all of them can successfully ssh into the hosts without any delay. Both systems can access the internet fine, and I can ssh into both systems from home without any issues. However, there is a real delay when attempting to ssh from host1 to host2 (or vice versa) and this obviously means that any action required on host2, that is controlled by host1 either takes forever or results in failure due to timeout. In the interest of keeping this post short, I've put my ifcfg files into a pastie: http://pastie.org/8081648 I've tried both adding a firewall rule in each machine for the other, and also disabling the firewall entirely, so that can't be the issue. I've tried troubleshooting this myself but can't seem to get to the bottom of it. Any help or advice would be appreciated. Thanks in advance.
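
    One frequent cause of exactly this symptom (ssh from outside is fine, host-to-host ssh hangs for a long time) is the receiving sshd waiting on reverse-DNS or GSSAPI lookups that time out inside the DC network. That may or may not be the issue here, but it is cheap to rule out; a rough sketch, using the host2 address from above:

      # Run from host1 and watch where the output pauses
      ssh -vvv 1.1.1.81 true

      # On host2, check whether sshd is doing DNS/GSSAPI lookups; setting
      # "UseDNS no" and "GSSAPIAuthentication no" in sshd_config (then
      # restarting sshd) disables them.
      grep -Ei 'usedns|gssapiauthentication' /etc/ssh/sshd_config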


  • Refresh NFS mount

    - by HayekSplosives
    If I check an NFS share on a machine and ls, I get the folders. If I go to the NFS host, add a new directory to /etc/exports for the client and do exportfs -a, what do I run on the client to refresh the directories? Example (pseudoish): nfsNode01: echo "/share clientNode01" >> /etc/exports; exportfs -a; clientNode01: cd /share; ls; nfsNode01: echo "/share/folder clientNode01" >> /etc/exports; exportfs -a; clientNode01: ls; The results are still the same as above. If I reboot, the /share/folder shares are there. I know there has to be a way to refresh that info from NFS. I'm sure that if I let the connection wait long enough, the next time I mounted /dsl it would do it. Can I just umount/mount, or is there a better way?
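
    A rough sketch of what to try from the client first, using the host names from the example above: confirm the server is really exporting the new path, then mount it explicitly. If /share/folder is a separate filesystem on the server, an NFSv3 client will not see its contents through the existing /share mount unless the export uses something like crossmnt:

      # On clientNode01: what is the server exporting right now?
      showmount -e nfsNode01

      # If the new export is listed, mount it explicitly
      # (/mnt/folder is just an example mount point)
      mkdir -p /mnt/folder
      mount -t nfs nfsNode01:/share/folder /mnt/folder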


  • Provide a user with service start/stop permissions

    - by slakr007
    I have a very basic domain that I use for development. I want to create a GPO that provides users in the Backup Operators group with start/stop permissions for two specific services on a specific server. I have read several articles about this, and they all indicate that this is very easy. Create a GPO, give the user start/stop permissions to the services under Computer Configuration > Policies > Windows Settings > Security Settings > System Services, and voila. Done. Not so much, but I have to be doing something wrong. My install is pretty much the default. The domain controller is in the Domain Controllers OU, the Backup Operators group is under Builtin, and I created a user called Backup under Users. I created a GPO and linked it to the Domain Controllers OU. In the GPO I give the Backup user permission to start/stop two specific services on the server. I forced an update with gpupdate. I used Group Policy Results to verify that my GPO is the winning GPO giving the user the permission to start/stop the two services. However, the user is still unable to start/stop the services. I attempted different loopback settings on the GPO to no avail. I'm sort of at a loss here.


  • Custom ASP.NET Routing to an HttpHandler

    - by Rick Strahl
    As of version 4.0 ASP.NET natively supports routing via the now built-in System.Web.Routing namespace. Routing features are automatically integrated into the HtttpRuntime via a few custom interfaces. New Web Forms Routing Support In ASP.NET 4.0 there are a host of improvements including routing support baked into Web Forms via a RouteData property available on the Page class and RouteCollection.MapPageRoute() route handler that makes it easy to route to Web forms. To map ASP.NET Page routes is as simple as setting up the routes with MapPageRoute:protected void Application_Start(object sender, EventArgs e) { RegisterRoutes(RouteTable.Routes); } void RegisterRoutes(RouteCollection routes) { routes.MapPageRoute("StockQuote", "StockQuote/{symbol}", "StockQuote.aspx"); routes.MapPageRoute("StockQuotes", "StockQuotes/{symbolList}", "StockQuotes.aspx"); } and then accessing the route data in the page you can then use the new Page class RouteData property to retrieve the dynamic route data information:public partial class StockQuote1 : System.Web.UI.Page { protected StockQuote Quote = null; protected void Page_Load(object sender, EventArgs e) { string symbol = RouteData.Values["symbol"] as string; StockServer server = new StockServer(); Quote = server.GetStockQuote(symbol); // display stock data in Page View } } Simple, quick and doesn’t require much explanation. If you’re using WebForms most of your routing needs should be served just fine by this simple mechanism. Kudos to the ASP.NET team for putting this in the box and making it easy! How Routing Works To handle Routing in ASP.NET involves these steps: Registering Routes Creating a custom RouteHandler to retrieve an HttpHandler Attaching RouteData to your HttpHandler Picking up Route Information in your Request code Registering routes makes ASP.NET aware of the Routes you want to handle via the static RouteTable.Routes collection. You basically add routes to this collection to let ASP.NET know which URL patterns it should watch for. You typically hook up routes off a RegisterRoutes method that fires in Application_Start as I did in the example above to ensure routes are added only once when the application first starts up. When you create a route, you pass in a RouteHandler instance which ASP.NET caches and reuses as routes are matched. Once registered ASP.NET monitors the routes and if a match is found just prior to the HttpHandler instantiation, ASP.NET uses the RouteHandler registered for the route and calls GetHandler() on it to retrieve an HttpHandler instance. The RouteHandler.GetHandler() method is responsible for creating an instance of an HttpHandler that is to handle the request and – if necessary – to assign any additional custom data to the handler. At minimum you probably want to pass the RouteData to the handler so the handler can identify the request based on the route data available. To do this you typically add  a RouteData property to your handler and then assign the property from the RouteHandlers request context. This is essentially how Page.RouteData comes into being and this approach should work well for any custom handler implementation that requires RouteData. It’s a shame that ASP.NET doesn’t have a top level intrinsic object that’s accessible off the HttpContext object to provide route data more generically, but since RouteData is directly tied to HttpHandlers and not all handlers support it it might cause some confusion of when it’s actually available. 
Bottom line is that if you want to hold on to RouteData you have to assign it to a custom property of the handler or else pass it to the handler via Context.Items[] object that can be retrieved on an as needed basis. It’s important to understand that routing is hooked up via RouteHandlers that are responsible for loading HttpHandler instances. RouteHandlers are invoked for every request that matches a route and through this RouteHandler instance the Handler gains access to the current RouteData. Because of this logic it’s important to understand that Routing is really tied to HttpHandlers and not available prior to handler instantiation, which is pretty late in the HttpRuntime’s request pipeline. IOW, Routing works with Handlers but not with earlier in the pipeline within Modules. Specifically ASP.NET calls RouteHandler.GetHandler() from the PostResolveRequestCache HttpRuntime pipeline event. Here’s the call stack at the beginning of the GetHandler() call: which fires just before handler resolution. Non-Page Routing – You need to build custom RouteHandlers If you need to route to a custom Http Handler or other non-Page (and non-MVC) endpoint in the HttpRuntime, there is no generic mapping support available. You need to create a custom RouteHandler that can manage creating an instance of an HttpHandler that is fired in response to a routed request. Depending on what you are doing this process can be simple or fairly involved as your code is responsible based on the route data provided which handler to instantiate, and more importantly how to pass the route data on to the Handler. Luckily creating a RouteHandler is easy by implementing the IRouteHandler interface which has only a single GetHttpHandler(RequestContext context) method. In this method you can pick up the requestContext.RouteData, instantiate the HttpHandler of choice, and assign the RouteData to it. Then pass back the handler and you’re done.Here’s a simple example of GetHttpHandler() method that dynamically creates a handler based on a passed in Handler type./// <summary> /// Retrieves an Http Handler based on the type specified in the constructor /// </summary> /// <param name="requestContext"></param> /// <returns></returns> IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext) { IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler; // If we're dealing with a Callback Handler // pass the RouteData for this route to the Handler if (handler is CallbackHandler) ((CallbackHandler)handler).RouteData = requestContext.RouteData; return handler; } Note that this code checks for a specific type of handler and if it matches assigns the RouteData to this handler. This is optional but quite a common scenario if you want to work with RouteData. 
If the handler you need to instantiate isn’t under your control but you still need to pass RouteData to Handler code, an alternative is to pass the RouteData via the HttpContext.Items collection:IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext) { IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler; requestContext.HttpContext.Items["RouteData"] = requestContext.RouteData; return handler; } The code in the handler implementation can then pick up the RouteData from the context collection as needed:RouteData routeData = HttpContext.Current.Items["RouteData"] as RouteData This isn’t as clean as having an explicit RouteData property, but it does have the advantage that the route data is visible anywhere in the Handler’s code chain. It’s definitely preferable to create a custom property on your handler, but the Context work-around works in a pinch when you don’t’ own the handler code and have dynamic code executing as part of the handler execution. An Example of a Custom RouteHandler: Attribute Based Route Implementation In this post I’m going to discuss a custom routine implementation I built for my CallbackHandler class in the West Wind Web & Ajax Toolkit. CallbackHandler can be very easily used for creating AJAX, REST and POX requests following RPC style method mapping. You can pass parameters via URL query string, POST data or raw data structures, and you can retrieve results as JSON, XML or raw string/binary data. It’s a quick and easy way to build service interfaces with no fuss. As a quick review here’s how CallbackHandler works: You create an Http Handler that derives from CallbackHandler You implement methods that have a [CallbackMethod] Attribute and that’s it. Here’s an example of an CallbackHandler implementation in an ashx.cs based handler:// RestService.ashx.cs public class RestService : CallbackHandler { [CallbackMethod] public StockQuote GetStockQuote(string symbol) { StockServer server = new StockServer(); return server.GetStockQuote(symbol); } [CallbackMethod] public StockQuote[] GetStockQuotes(string symbolList) { StockServer server = new StockServer(); string[] symbols = symbolList.Split(new char[2] { ',',';' },StringSplitOptions.RemoveEmptyEntries); return server.GetStockQuotes(symbols); } } CallbackHandler makes it super easy to create a method on the server, pass data to it via POST, QueryString or raw JSON/XML data, and then retrieve the results easily back in various formats. This works wonderful and I’ve used these tools in many projects for myself and with clients. But one thing missing has been the ability to create clean URLs. Typical URLs looked like this: http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuote&symbol=msfthttp://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuotes&symbolList=msft,intc,gld,slw,mwe&format=xml which works and is clear enough, but also clearly very ugly. It would be much nicer if URLs could look like this: http://www.west-wind.com//WestwindWebtoolkit/Samples/StockQuote/msfthttp://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuotes/msft,intc,gld,slw?format=xml (the Virtual Root in this sample is WestWindWebToolkit/Samples and StockQuote/{symbol} is the route)(If you use FireFox try using the JSONView plug-in make it easier to view JSON content) So, taking a clue from the WCF REST tools that use RouteUrls I set out to create a way to specify RouteUrls for each of the endpoints. 
The change made basically allows changing the above to: [CallbackMethod(RouteUrl="RestService/StockQuote/{symbol}")] public StockQuote GetStockQuote(string symbol) { StockServer server = new StockServer(); return server.GetStockQuote(symbol); } [CallbackMethod(RouteUrl = "RestService/StockQuotes/{symbolList}")] public StockQuote[] GetStockQuotes(string symbolList) { StockServer server = new StockServer(); string[] symbols = symbolList.Split(new char[2] { ',',';' },StringSplitOptions.RemoveEmptyEntries); return server.GetStockQuotes(symbols); } where a RouteUrl is specified as part of the Callback attribute. And with the changes made with RouteUrls I can now get URLs like the second set shown earlier. So how does that work? Let’s find out… How to Create Custom Routes As mentioned earlier Routing is made up of several steps: Creating a custom RouteHandler to create HttpHandler instances Mapping the actual Routes to the RouteHandler Retrieving the RouteData and actually doing something useful with it in the HttpHandler In the CallbackHandler routing example above this works out to something like this: Create a custom RouteHandler that includes a property to track the method to call Set up the routes using Reflection against the class Looking for any RouteUrls in the CallbackMethod attribute Add a RouteData property to the CallbackHandler so we can access the RouteData in the code of the handler Creating a Custom Route Handler To make the above work I created a custom RouteHandler class that includes the actual IRouteHandler implementation as well as a generic and static method to automatically register all routes marked with the [CallbackMethod(RouteUrl="…")] attribute. Here’s the code:/// <summary> /// Route handler that can create instances of CallbackHandler derived /// callback classes. The route handler tracks the method name and /// creates an instance of the service in a predictable manner /// </summary> /// <typeparam name="TCallbackHandler">CallbackHandler type</typeparam> public class CallbackHandlerRouteHandler : IRouteHandler { /// <summary> /// Method name that is to be called on this route. /// Set by the automatically generated RegisterRoutes /// invokation. /// </summary> public string MethodName { get; set; } /// <summary> /// The type of the handler we're going to instantiate. /// Needed so we can semi-generically instantiate the /// handler and call the method on it. /// </summary> public Type CallbackHandlerType { get; set; } /// <summary> /// Constructor to pass in the two required components we /// need to create an instance of our handler. 
/// </summary> /// <param name="methodName"></param> /// <param name="callbackHandlerType"></param> public CallbackHandlerRouteHandler(string methodName, Type callbackHandlerType) { MethodName = methodName; CallbackHandlerType = callbackHandlerType; } /// <summary> /// Retrieves an Http Handler based on the type specified in the constructor /// </summary> /// <param name="requestContext"></param> /// <returns></returns> IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext) { IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler; // If we're dealing with a Callback Handler // pass the RouteData for this route to the Handler if (handler is CallbackHandler) ((CallbackHandler)handler).RouteData = requestContext.RouteData; return handler; } /// <summary> /// Generic method to register all routes from a CallbackHandler /// that have RouteUrls defined on the [CallbackMethod] attribute /// </summary> /// <typeparam name="TCallbackHandler">CallbackHandler Type</typeparam> /// <param name="routes"></param> public static void RegisterRoutes<TCallbackHandler>(RouteCollection routes) { // find all methods var methods = typeof(TCallbackHandler).GetMethods(BindingFlags.Instance | BindingFlags.Public); foreach (var method in methods) { var attrs = method.GetCustomAttributes(typeof(CallbackMethodAttribute), false); if (attrs.Length < 1) continue; CallbackMethodAttribute attr = attrs[0] as CallbackMethodAttribute; if (string.IsNullOrEmpty(attr.RouteUrl)) continue; // Add the route routes.Add(method.Name, new Route(attr.RouteUrl, new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler)))); } } } The RouteHandler implements IRouteHandler, and its responsibility via the GetHandler method is to create an HttpHandler based on the route data. When ASP.NET calls GetHandler it passes a requestContext parameter which includes a requestContext.RouteData property. This parameter holds the current request’s route data as well as an instance of the current RouteHandler. If you look at GetHttpHandler() you can see that the code creates an instance of the handler we are interested in and then sets the RouteData property on the handler. This is how you can pass the current request’s RouteData to the handler. The RouteData object also has a  RouteData.RouteHandler property that is also available to the Handler later, which is useful in order to get additional information about the current route. In our case here the RouteHandler includes a MethodName property that identifies the method to execute in the handler since that value no longer comes from the URL so we need to figure out the method name some other way. The method name is mapped explicitly when the RouteHandler is created and here the static method that auto-registers all CallbackMethods with RouteUrls sets the method name when it creates the routes while reflecting over the methods (more on this in a minute). The important point here is that you can attach additional properties to the RouteHandler and you can then later access the RouteHandler and its properties later in the Handler to pick up these custom values. This is a crucial feature in that the RouteHandler serves in passing additional context to the handler so it knows what actions to perform. The automatic route registration is handled by the static RegisterRoutes<TCallbackHandler> method. This method is generic and totally reusable for any CallbackHandler type handler. 
To register a CallbackHandler and any RouteUrls it has defined you simple use code like this in Application_Start (or other application startup code):protected void Application_Start(object sender, EventArgs e) { // Register Routes for RestService CallbackHandlerRouteHandler.RegisterRoutes<RestService>(RouteTable.Routes); } If you have multiple CallbackHandler style services you can make multiple calls to RegisterRoutes for each of the service types. RegisterRoutes internally uses reflection to run through all the methods of the Handler, looking for CallbackMethod attributes and whether a RouteUrl is specified. If it is a new instance of a CallbackHandlerRouteHandler is created and the name of the method and the type are set. routes.Add(method.Name,           new Route(attr.RouteUrl, new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler) )) ); While the routing with CallbackHandlerRouteHandler is set up automatically for all methods that use the RouteUrl attribute, you can also use code to hook up those routes manually and skip using the attribute. The code for this is straightforward and just requires that you manually map each individual route to each method you want a routed: protected void Application_Start(objectsender, EventArgs e){    RegisterRoutes(RouteTable.Routes);}void RegisterRoutes(RouteCollection routes) { routes.Add("StockQuote Route",new Route("StockQuote/{symbol}",                     new CallbackHandlerRouteHandler("GetStockQuote",typeof(RestService) ) ) );     routes.Add("StockQuotes Route",new Route("StockQuotes/{symbolList}",                     new CallbackHandlerRouteHandler("GetStockQuotes",typeof(RestService) ) ) );}I think it’s clearly easier to have CallbackHandlerRouteHandler.RegisterRoutes() do this automatically for you based on RouteUrl attributes, but some people have a real aversion to attaching logic via attributes. Just realize that the option to manually create your routes is available as well. Using the RouteData in the Handler A RouteHandler’s responsibility is to create an HttpHandler and as mentioned earlier, natively IHttpHandler doesn’t have any support for RouteData. In order to utilize RouteData in your handler code you have to pass the RouteData to the handler. In my CallbackHandlerRouteHandler when it creates the HttpHandler instance it creates the instance and then assigns the custom RouteData property on the handler:IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler; if (handler is CallbackHandler) ((CallbackHandler)handler).RouteData = requestContext.RouteData; return handler; Again this only works if you actually add a RouteData property to your handler explicitly as I did in my CallbackHandler implementation:/// <summary> /// Optionally store RouteData on this handler /// so we can access it internally /// </summary> public RouteData RouteData {get; set; } and the RouteHandler needs to set it when it creates the handler instance. Once you have the route data in your handler you can access Route Keys and Values and also the RouteHandler. 
Since my RouteHandler has a custom property for the MethodName to retrieve it from within the handler I can do something like this now to retrieve the MethodName (this example is actually not in the handler but target is an instance pass to the processor): // check for Route Data method name if (target is CallbackHandler) { var routeData = ((CallbackHandler)target).RouteData; if (routeData != null) methodToCall = ((CallbackHandlerRouteHandler)routeData.RouteHandler).MethodName; } When I need to access the dynamic values in the route ( symbol in StockQuote/{symbol}) I can retrieve it easily with the Values collection (RouteData.Values["symbol"]). In my CallbackHandler processing logic I’m basically looking for matching parameter names to Route parameters: // look for parameters in the routeif(routeData != null){    string parmString = routeData.Values[parameter.Name] as string;    adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);} And with that we’ve come full circle. We’ve created a custom RouteHandler() that passes the RouteData to the handler it creates. We’ve registered our routes to use the RouteHandler, and we’ve utilized the route data in our handler. For completeness sake here’s the routine that executes a method call based on the parameters passed in and one of the options is to retrieve the inbound parameters off RouteData (as well as from POST data or QueryString parameters):internal object ExecuteMethod(string method, object target, string[] parameters, CallbackMethodParameterType paramType, ref CallbackMethodAttribute callbackMethodAttribute) { HttpRequest Request = HttpContext.Current.Request; object Result = null; // Stores parsed parameters (from string JSON or QUeryString Values) object[] adjustedParms = null; Type PageType = target.GetType(); MethodInfo MI = PageType.GetMethod(method, BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic); if (MI == null) throw new InvalidOperationException("Invalid Server Method."); object[] methods = MI.GetCustomAttributes(typeof(CallbackMethodAttribute), false); if (methods.Length < 1) throw new InvalidOperationException("Server method is not accessible due to missing CallbackMethod attribute"); if (callbackMethodAttribute != null) callbackMethodAttribute = methods[0] as CallbackMethodAttribute; ParameterInfo[] parms = MI.GetParameters(); JSONSerializer serializer = new JSONSerializer(); RouteData routeData = null; if (target is CallbackHandler) routeData = ((CallbackHandler)target).RouteData; int parmCounter = 0; adjustedParms = new object[parms.Length]; foreach (ParameterInfo parameter in parms) { // Retrieve parameters out of QueryString or POST buffer if (parameters == null) { // look for parameters in the route if (routeData != null) { string parmString = routeData.Values[parameter.Name] as string; adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType); } // GET parameter are parsed as plain string values - no JSON encoding else if (HttpContext.Current.Request.HttpMethod == "GET") { // Look up the parameter by name string parmString = Request.QueryString[parameter.Name]; adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType); } // POST parameters are treated as methodParameters that are JSON encoded else if (paramType == CallbackMethodParameterType.Json) //string newVariable = methodParameters.GetValue(parmCounter) as string; adjustedParms[parmCounter] = 
serializer.Deserialize(Request.Params["parm" + (parmCounter + 1).ToString()], parameter.ParameterType); else adjustedParms[parmCounter] = SerializationUtils.DeSerializeObject( Request.Params["parm" + (parmCounter + 1).ToString()], parameter.ParameterType); } else if (paramType == CallbackMethodParameterType.Json) adjustedParms[parmCounter] = serializer.Deserialize(parameters[parmCounter], parameter.ParameterType); else adjustedParms[parmCounter] = SerializationUtils.DeSerializeObject(parameters[parmCounter], parameter.ParameterType); parmCounter++; } Result = MI.Invoke(target, adjustedParms); return Result; } The code basically uses Reflection to loop through all the parameters available on the method and tries to assign the parameters from RouteData, QueryString or POST variables. The parameters are converted into their appropriate types and then used to eventually make a Reflection based method call. What’s sweet is that the RouteData retrieval is just another option for dealing with the inbound data in this scenario and it adds exactly two lines of code plus the code to retrieve the MethodName I showed previously – a seriously low impact addition that adds a lot of extra value to this endpoint callback processing implementation. Debugging your Routes If you create a lot of routes it’s easy to run into Route conflicts where multiple routes have the same path and overlap with each other. This can be difficult to debug especially if you are using automatically generated routes like the routes created by CallbackHandlerRouteHandler.RegisterRoutes. Luckily there’s a tool that can help you out with this nicely. Phill Haack created a RouteDebugging tool you can download and add to your project. The easiest way to do this is to grab and add this to your project is to use NuGet (Add Library Package from your Project’s Reference Nodes):   which adds a RouteDebug assembly to your project. Once installed you can easily debug your routes with this simple line of code which needs to be installed at application startup:protected void Application_Start(object sender, EventArgs e) { CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes); // Debug your routes RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes); } Any routed URL then displays something like this: The screen shows you your current route data and all the routes that are mapped along with a flag that displays which route was actually matched. This is useful – if you have any overlap of routes you will be able to see which routes are triggered – the first one in the sequence wins. This tool has saved my ass on a few occasions – and with NuGet now it’s easy to add it to your project in a few seconds and then remove it when you’re done. Routing Around Custom routing seems slightly complicated on first blush due to its disconnected components of RouteHandler, route registration and mapping of custom handlers. But once you understand the relationship between a RouteHandler, the RouteData and how to pass it to a handler, utilizing of Routing becomes a lot easier as you can easily pass context from the registration to the RouteHandler and through to the HttpHandler. The most important thing to understand when building custom routing solutions is to figure out how to map URLs in such a way that the handler can figure out all the pieces it needs to process the request. 
This can be via URL routing parameters and as I did in my example by passing additional context information as part of the RouteHandler instance that provides the proper execution context. In my case this ‘context’ was the method name, but it could be an actual static value like an enum identifying an operation or category in an application. Basically user supplied data comes in through the url and static application internal data can be passed via RouteHandler property values. Routing can make your application URLs easier to read by non-techie types regardless of whether you’re building Service type or REST applications, or full on Web interfaces. Routing in ASP.NET 4.0 makes it possible to create just about any extensionless URLs you can dream up and custom RouteHanmdler References Sample ProjectIncludes the sample CallbackHandler service discussed here along with compiled versionsof the Westwind.Web and Westwind.Utilities assemblies.  (requires .NET 4.0/VS 2010) West Wind Web Toolkit includes full implementation of CallbackHandler and the Routing Handler West Wind Web Toolkit Source CodeContains the full source code to the Westwind.Web and Westwind.Utilities assemblies usedin these samples. Includes the source described in the post.(Latest build in the Subversion Repository) CallbackHandler Source(Relevant code to this article tree in Westwind.Web assembly) JSONView FireFoxPluginA simple FireFox Plugin to easily view JSON data natively in FireFox.For IE you can use a registry hack to display JSON as raw text.© Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET  AJAX  HTTP  


  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure… Conclusions The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software; code-first is the exception. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that has stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…) The test code serves ‘only’ to make the production code work. But it’s solely the number of delivered features that counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly because software development standards are only beginning to evolve. Historically seen, the software development profession is very young and still at its very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…) Although the above might look like a lot of unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard exactly that quite a few times…). Producing high-quality products requires high-quality tools. This is a platitude that every craftsman knows… The round-trip described here will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of developing software… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
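To make that last guideline a bit more tangible, here is a minimal sketch (reusing the MbUnit/Gallio-style attributes and the container from the examples above; the explicit guard assertion is my own interpretation, not something taken from Derick's posts) of what ‘Fail By Assertion, Not By Anything Else’ can look like in practice, using the trivial LastResult requirement that was skipped earlier:

[Test]
public void AddStoresLastResult()
{
    // Guard the arrange part explicitly: if the container cannot resolve the
    // service, the test fails with a meaningful assertion message instead of
    // an incidental NullReferenceException further down.
    ICalculator calculator = container.GetService<ICalculator>();
    Assert.IsNotNull(calculator, "The container could not resolve ICalculator.");

    calculator.Add(1, 1);

    // From here on, red can only mean one thing: an unfulfilled assertion.
    Assert.AreEqual(2.0, calculator.LastResult);
}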

    Read the article

  • Java Cloud Service Integration to REST Service

    - by Jani Rautiainen
Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments customers do not have the option to deploy custom applications developed with JDeveloper; this is to ensure the integrity and supportability of the hosted application service. Instead, custom applications can be deployed to JCS and integrated with the Fusion Applications Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples on how to develop and deploy applications on JCS, and show how to integrate them with the Fusion Applications instance. In this article a custom application integrating with a REST service will be implemented. We will use REST services provided by Taleo as an example; however, the same approach will work with any REST service. In this example the data from the REST service is used to populate a dynamic table. Pre-requisites Access to Cloud instance In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card, even though the credit card will not be charged. To register simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for an example of the registration. Once the request is processed you will be assigned 2 service instances: Java and Database. Applications deployed to JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud. JDeveloper JDeveloper contains Cloud-specific features related to e.g. connection and deployment. To use these features, download JDeveloper from the JDeveloper download site by clicking the "Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud" link; this version of JDeveloper has the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features, the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper refer to the installation guide. For details on the SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service. Access to a local database The database associated with the JCS instance cannot be connected to with JDBC. Since creating ADFbc business components requires a JDBC connection, we will need access to a local database. 3rd party libraries This example will use some 3rd party libraries for implementing the REST service call and processing the input / output content. Other libraries may also be used; however, these are tested to work. Jersey 1.x The Jersey library will be used as a client to make the call to the REST service. The JCS documentation for supported specifications states: Java API for RESTful Web Services (JAX-RS) 1.1 So Jersey 1.x will be used. Download the single-JAR Jersey bundle; in this example the Jersey 1.18 JAR bundle is used. Json-simple The json-simple library will be used to process the JSON objects. Download the JAR file; in this example json-simple-1.1.1.jar is used. Accessing data in Taleo Before implementing the application it is beneficial to familiarize oneself with the data in Taleo. 
The easiest way to do this is by using a RESTClient in your browser. Once added to the browser, you can access the UI: The client can be used to call the REST services to test the URLs and data before adding them into the application. First derive the base URL for the service; this can be done with: Method: GET URL: https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/<company name> The response will contain the base URL to be used for the service calls for the company. Next obtain an authentication token with: Method: POST URL: https://ch.tbe.taleo.net/CH07/ats/api/v1/login?orgCode=<company>&userName=<user name>&password=<password> The response includes an authentication token that can be used for a few hours to authenticate with the service: {   "response": {     "authToken": "webapi26419680747505890557"   },   "status": {     "detail": {},     "success": true   } } To authenticate the service calls, navigate to "Headers -> Custom Header" and add a new request header with: Name: Cookie Value: authToken=webapi26419680747505890557 Once the authentication token is defined, the tool can be used to invoke REST services; for example: Method: GET URL: https://ch.tbe.taleo.net/CH07/ats/api/v1/object/candidate/search.xml?status=16 This data will be used in the application to be created. For details on the Taleo REST services refer to the Taleo Business Edition REST API Guide. Create Application First a Fusion Web Application is created and configured. Start JDeveloper and click "New Application": Application Name: JcsRestDemo Application Package Prefix: oracle.apps.jcs.test Application Template: Fusion Web Application (ADF) Configure Local Cloud Connection Follow the steps documented in the "Java Cloud Service ADF Web Application" article to configure a local database connection needed to create the ADFbc objects. Configure Libraries Add the 3rd party libraries to the classpath. Create the following directory and copy the jar files into it: <JDEV_USER_HOME>/JcsRestDemo/lib  Select the "Model" project, navigate "Application -> Project Properties -> Libraries and Classpath -> Add JAR / Directory" and add the two 3rd party libraries: Accessing Data from Taleo To access data from Taleo using the REST service, the 3rd party libraries will be used. Two Java classes are implemented: one representing the Candidate object and another for accessing the Taleo repository. Candidate The Candidate object is a POJO used to represent the candidate data obtained from the Taleo repository. The data obtained will be used to populate the ADFbc object used to display the data on the UI. The Candidate object simply contains the variables we obtain using the REST services, and the getters / setters for them: Navigate "New -> General -> Java -> Java Class", enter "Candidate" as the name and create it in the package "oracle.apps.jcs.test.model".  
Copy / paste the following as the content: import oracle.jbo.domain.Number; public class Candidate { private Number candId; private String firstName; private String lastName; public Candidate() { super(); } public Candidate(Number candId, String firstName, String lastName) { super(); this.candId = candId; this.firstName = firstName; this.lastName = lastName; } public void setCandId(Number candId) { this.candId = candId; } public Number getCandId() { return candId; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getFirstName() { return firstName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getLastName() { return lastName; } } Taleo Repository Taleo repository class will interact with the Taleo REST services. The logic will query data from Taleo and populate Candidate objects with the data. The Candidate object will then be used to populate the ADFbc object used to display data on the UI. Navigate "New -> General -> Java -> Java Class", enter "TaleoRepository" as the name and create it in the package "oracle.apps.jcs.test.model".  Copy / paste the following as the content (for details of the implementation refer to the documentation in the code): import com.sun.jersey.api.client.Client; import com.sun.jersey.api.client.ClientResponse; import com.sun.jersey.api.client.WebResource; import com.sun.jersey.core.util.MultivaluedMapImpl; import java.io.StringReader; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import java.util.Map; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import oracle.jbo.domain.Number; import org.json.simple.JSONArray; import org.json.simple.JSONObject; import org.json.simple.parser.JSONParser; /** * This class interacts with the Taleo REST services */ public class TaleoRepository { /** * Connection information needed to access the Taleo services */ String _company = null; String _userName = null; String _password = null; /** * Jersey client used to access the REST services */ Client _client = null; /** * Parser for processing the JSON objects used as * input / output for the services */ JSONParser _parser = null; /** * The base url for constructing the REST URLs. This is obtained * from Taleo with a service call */ String _baseUrl = null; /** * Authentication token obtained from Taleo using a service call. * The token can be used to authenticate on subsequent * service calls. The token will expire in 4 hours */ String _authToken = null; /** * Static url that can be used to obtain the url used to construct * service calls for a given company */ private static String _taleoUrl = "https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/"; /** * Default constructor for the repository * Authentication details are passed as parameters and used to generate * authentication token. Note that each service call will * generate its own token. This is done to avoid dealing with the expiry * of the token. Also only 20 tokens are allowed per user simultaneously. * So instead for each call there is login / logout. * * @param company the company for which the service calls are made * @param userName the user name to authenticate with * @param password the password to authenticate with. 
*/ public TaleoRepository(String company, String userName, String password) { super(); _company = company; _userName = userName; _password = password; _client = Client.create(); _parser = new JSONParser(); _baseUrl = getBaseUrl(); } /** * This obtains the base url for a company to be used * to construct the urls for service calls * @return base url for the service calls */ private String getBaseUrl() { String result = null; if (null != _baseUrl) { result = _baseUrl; } else { try { String company = _company; WebResource resource = _client.resource(_taleoUrl + company); ClientResponse response = resource.type(MediaType.APPLICATION_FORM_URLENCODED_TYPE).get(ClientResponse.class); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); JSONObject jsonResponse = (JSONObject)jsonObject.get("response"); result = (String)jsonResponse.get("URL"); } catch (Exception ex) { ex.printStackTrace(); } } return result; } /** * Generates authentication token, that can be used to authenticate on * subsequent service calls. Note that each service call will * generate its own token. This is done to avoid dealing with the expiry * of the token. Also only 20 tokens are allowed per user simultaneously. * So instead for each call there is login / logout. * @return authentication token that can be used to authenticate on * subsequent service calls */ private String login() { String result = null; try { MultivaluedMap<String, String> formData = new MultivaluedMapImpl(); formData.add("orgCode", _company); formData.add("userName", _userName); formData.add("password", _password); WebResource resource = _client.resource(_baseUrl + "login"); ClientResponse response = resource.type(MediaType.APPLICATION_FORM_URLENCODED_TYPE).post(ClientResponse.class, formData); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); JSONObject jsonResponse = (JSONObject)jsonObject.get("response"); result = (String)jsonResponse.get("authToken"); } catch (Exception ex) { throw new RuntimeException("Unable to login ", ex); } if (null == result) throw new RuntimeException("Unable to login "); return result; } /** * Releases a authentication token. Each call to login must be followed * by call to logout after the processing is done. This is required as * the tokens are limited to 20 per user and if not released the tokens * will only expire after 4 hours. * @param authToken */ private void logout(String authToken) { WebResource resource = _client.resource(_baseUrl + "logout"); resource.header("cookie", "authToken=" + authToken).post(ClientResponse.class); } /** * This method is used to obtain a list of candidates using a REST * service call. At this example the query is hard coded to query * based on status. 
The url constructed to access the service is: * <_baseUrl>/object/candidate/search.xml?status=16 * @return List of candidates obtained with the service call */ public List<Candidate> getCandidates() { List<Candidate> result = new ArrayList<Candidate>(); try { // First login, note that in finally block we must have logout _authToken = "authToken=" + login(); /** * Construct the URL, the resulting url will be: * <_baseUrl>/object/candidate/search.xml?status=16 */ MultivaluedMap<String, String> formData = new MultivaluedMapImpl(); formData.add("status", "16"); JSONArray searchResults = (JSONArray)getTaleoResource("object/candidate/search", "searchResults", formData); /** * Process the results, the resulting JSON object is something like * this (simplified for readability): * * { * "response": * { * "searchResults": * [ * { * "candidate": * { * "candId": 211, * "firstName": "Mary", * "lastName": "Stochi", * logic here will find the candidate object(s), obtain the desired * data from them, construct a Candidate object based on the data * and add it to the results. */ for (Object object : searchResults) { JSONObject temp = (JSONObject)object; JSONObject candidate = (JSONObject)findObject(temp, "candidate"); Long candIdTemp = (Long)candidate.get("candId"); Number candId = (null == candIdTemp ? null : new Number(candIdTemp)); String firstName = (String)candidate.get("firstName"); String lastName = (String)candidate.get("lastName"); result.add(new Candidate(candId, firstName, lastName)); } } catch (Exception ex) { ex.printStackTrace(); } finally { if (null != _authToken) logout(_authToken); } return result; } /** * Convenience method to construct url for the service call, invoke the * service and obtain a resource from the response * @param path the path for the service to be invoked. This is combined * with the base url to construct a url for the service * @param resource the key for the object in the response that will be * obtained * @param parameters any parameters used for the service call. The call * is slightly different depending whether parameters exist or not. * @return the resource from the response for the service call */ private Object getTaleoResource(String path, String resource, MultivaluedMap<String, String> parameters) { Object result = null; try { WebResource webResource = _client.resource(_baseUrl + path); ClientResponse response = null; if (null == parameters) response = webResource.header("cookie", _authToken).get(ClientResponse.class); else response = webResource.queryParams(parameters).header("cookie", _authToken).get(ClientResponse.class); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); result = findObject(jsonObject, resource); } catch (Exception ex) { ex.printStackTrace(); } return result; } /** * Convenience method to recursively find a object with an key * traversing down from a given root object. This will traverse a * JSONObject / JSONArray recursively to find a matching key, if found * the object with the key is returned. 
* @param root root object which contains the key searched for * @param key the key for the object to search for * @return the object matching the key */ private Object findObject(Object root, String key) { Object result = null; if (root instanceof JSONObject) { JSONObject rootJSON = (JSONObject)root; if (rootJSON.containsKey(key)) { result = rootJSON.get(key); } else { Iterator children = rootJSON.entrySet().iterator(); while (children.hasNext()) { Map.Entry entry = (Map.Entry)children.next(); Object child = entry.getValue(); if (child instanceof JSONObject || child instanceof JSONArray) { result = findObject(child, key); if (null != result) break; } } } } else if (root instanceof JSONArray) { JSONArray rootJSON = (JSONArray)root; for (Object child : rootJSON) { if (child instanceof JSONObject || child instanceof JSONArray) { result = findObject(child, key); if (null != result) break; } } } return result; } }   Creating Business Objects While JCS application can be created without a local database, the local database is required when using ADFbc objects even if database objects are not referred. For this example we will create a "Transient" view object that will be programmatically populated based the data obtained from Taleo REST services. Creating ADFbc objects Choose the "Model" project and navigate "New -> Business Tier : ADF Business Components : View Object". On the "Initialize Business Components Project" choose the local database connection created in previous step. On Step 1 enter "JcsRestDemoVO" on the "Name" and choose "Rows populated programmatically, not based on query": On step 2 create the following attributes: CandId Type: Number Updatable: Always Key Attribute: checked Name Type: String Updatable: Always On steps 3 and 4 accept defaults and click "Next".  On step 5 check the "Application Module" checkbox and enter "JcsRestDemoAM" as the name: Click "Finish" to generate the objects. Populating the VO To display the data on the UI the "transient VO" is populated programmatically based on the data obtained from the Taleo REST services. Open the "JcsRestDemoVOImpl.java". Copy / paste the following as the content (for details of the implementation refer to the documentation in the code): import java.sql.ResultSet; import java.util.List; import java.util.ListIterator; import oracle.jbo.server.ViewObjectImpl; import oracle.jbo.server.ViewRowImpl; import oracle.jbo.server.ViewRowSetImpl; // --------------------------------------------------------------------- // --- File generated by Oracle ADF Business Components Design Time. // --- Tue Feb 18 09:40:25 PST 2014 // --- Custom code may be added to this class. // --- Warning: Do not modify method signatures of generated methods. // --------------------------------------------------------------------- public class JcsRestDemoVOImpl extends ViewObjectImpl { /** * This is the default constructor (do not remove). */ public JcsRestDemoVOImpl() { } @Override public void executeQuery() { /** * For some reason we need to reset everything, otherwise * 2nd entry to the UI screen may fail with * "java.util.NoSuchElementException" in createRowFromResultSet * call to "candidates.next()". I am not sure why this is happening * as the Iterator is new and "hasNext" is true at the point * of the execution. My theory is that since the iterator object is * exactly the same the VO cache somehow reuses the iterator including * the pointer that has already exhausted the iterable elements on the * previous run. 
Working around the issue * here by cleaning out everything on the VO every time before query * is executed on the VO. */ getViewDef().setQuery(null); getViewDef().setSelectClause(null); setQuery(null); this.reset(); this.clearCache(); super.executeQuery(); } /** * executeQueryForCollection - overridden for custom java data source support. */ protected void executeQueryForCollection(Object qc, Object[] params, int noUserParams) { /** * Integrate with the Taleo REST services using TaleoRepository class. * A list of candidates matching a hard coded query is obtained. */ TaleoRepository repository = new TaleoRepository(<company>, <username>, <password>); List<Candidate> candidates = repository.getCandidates(); /** * Store iterator for the candidates as user data on the collection. * This will be used in createRowFromResultSet to create rows based on * the custom iterator. */ ListIterator<Candidate> candidatescIterator = candidates.listIterator(); setUserDataForCollection(qc, candidatescIterator); super.executeQueryForCollection(qc, params, noUserParams); } /** * hasNextForCollection - overridden for custom java data source support. */ protected boolean hasNextForCollection(Object qc) { boolean result = false; /** * Determines whether there are candidates for which to create a row */ ListIterator<Candidate> candidates = (ListIterator<Candidate>)getUserDataForCollection(qc); result = candidates.hasNext(); /** * If all candidates to be created indicate that processing is done */ if (!result) { setFetchCompleteForCollection(qc, true); } return result; } /** * createRowFromResultSet - overridden for custom java data source support. */ protected ViewRowImpl createRowFromResultSet(Object qc, ResultSet resultSet) { /** * Obtain the next candidate from the collection and create a row * for it. */ ListIterator<Candidate> candidates = (ListIterator<Candidate>)getUserDataForCollection(qc); ViewRowImpl row = createNewRowForCollection(qc); try { Candidate candidate = candidates.next(); row.setAttribute("CandId", candidate.getCandId()); row.setAttribute("Name", candidate.getFirstName() + " " + candidate.getLastName()); } catch (Exception e) { e.printStackTrace(); } return row; } /** * getQueryHitCount - overridden for custom java data source support. */ public long getQueryHitCount(ViewRowSetImpl viewRowSet) { /** * For this example this is not implemented rather we always return 0. */ return 0; } } Creating UI Choose the "ViewController" project and navigate "New -> Web Tier : JSF : JSF Page". On the "Create JSF Page" enter "JcsRestDemo" as name and ensure that the "Create as XML document (*.jspx)" is checked.  Open "JcsRestDemo.jspx" and navigate to "Data Controls -> JcsRestDemoAMDataControl -> JcsRestDemoVO1" and drag & drop the VO to the "<af:form> " as a "ADF Read-only Table": Accept the defaults in "Edit Table Columns". To execute the query navigate to to "Data Controls -> JcsRestDemoAMDataControl -> JcsRestDemoVO1 -> Operations -> Execute" and drag & drop the operation to the "<af:form> " as a "Button": Deploying to JCS Follow the same steps as documented in previous article"Java Cloud Service ADF Web Application". Once deployed the application can be accessed with URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsRestDemo-ViewController-context-root/faces/JcsRestDemo.jspx The UI displays a list of candidates obtained from the Taleo REST Services: Summary In this article we learned how to integrate with REST services using Jersey library in JCS. 
In future articles various other integration techniques will be covered.

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour.  This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature-length computer animations, and with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the viewpoint must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
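The console application itself is not shown here; purely as a sketch of what it might look like (the class name, pin spacing, grid dimensions and pin radii below are illustrative assumptions, not the original code), the nested loops that emit the two intersecting matrices of pins could be written along these lines:

using System.IO;
using System.Text;

internal static class PinBoardGenerator
{
    private const double Spacing = 0.25; // distance between pin centres (assumed value)
    private const int Rows = 76;         // two offset grids of roughly this size give
    private const int Cols = 76;         // in the order of the 11,481 pins mentioned above

    private static void Main()
    {
        var scene = new StringBuilder();

        // First matrix of pins...
        AppendPinGrid(scene, 0.0, 0.0);
        // ...and a second matrix offset by half the spacing, so the two grids intersect.
        AppendPinGrid(scene, Spacing / 2, Spacing / 2);

        File.WriteAllText("pinboard.pi", scene.ToString());
    }

    private static void AppendPinGrid(StringBuilder scene, double offsetX, double offsetY)
    {
        for (int row = 0; row < Rows; row++)
        {
            for (int col = 0; col < Cols; col++)
            {
                double x = offsetX + col * Spacing - (Cols * Spacing) / 2;
                double y = offsetY + row * Spacing - (Rows * Spacing) / 2;
                double z = 0.0; // later driven per pin by the depth image, one scene file per frame

                // Each pin is a chrome sphere (the head) plus a cylinder (the shaft),
                // just like the single pin in the scene file shown earlier.
                scene.AppendFormat("object {{ sphere <{0:0.###}, {1:0.###}, {2:0.###}>, 0.1 chrome }}\n", x, y, z);
                scene.AppendFormat("object {{ cylinder <{0:0.###}, {1:0.###}, {2:0.###}>, <{0:0.###}, {1:0.###}, {3:0.###}>, 0.05 chrome }}\n", x, y, z, z - 0.6);
            }
        }
    }
}

Sampling the brightness of the corresponding pixel in a bitmap inside the same loop is then all it takes to drive the Z coordinate of each pin, which is exactly how the depth images described next are used.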
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used. Windows Kinect The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server. Number of Servers and Cost: 1 server: $500; 16 servers: $8,000; 256 servers: $128,000. As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below. Number of Worker Roles, Render Time and Cost: 1 role: 256 hours, $30.72; 16 roles: 16 hours, $30.72; 256 roles: 1 hour, $30.72. Using worker roles in Windows Azure provides the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components: ·         On-Premise o   Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images. o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue (a rough sketch of its job-submission side is shown below). o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process. o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete. ·         Windows Azure o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.   The architecture of each worker role is shown below.   
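The Animation Creator itself is not listed in the article. Purely as a sketch of its job-submission side, assuming it reuses the same JobQueue and CloudRayBlob helper classes that appear in the worker role code below (their implementations are not shown either) and a hypothetical local folder of scene files, the upload loop might look like this:

// Hypothetical submission loop for the on-premise Animation Creator.
// JobQueue and CloudRayBlob are the helper classes used in the worker
// role's Run() method below; the folder path is an illustrative assumption.
// (Requires using System.IO for Directory and Path.)
JobQueue jobQueue = new JobQueue("renderjobs");
CloudRayBlob sceneBlob = new CloudRayBlob("scenes");

foreach (string sceneFilePath in Directory.GetFiles(@"C:\CloudRay\Scenes", "*.pi"))
{
    // Upload the scene description file for one frame to the scenes blob container...
    sceneBlob.UploadFile(sceneFilePath);

    // ...and add a message to the jobs queue so that any idle worker role
    // instance can pick the frame up and render it.
    jobQueue.Add(Path.GetFileName(sceneFilePath));
}

Because each queue message represents exactly one frame, the queue load-balances the work by itself: faster or earlier-started instances simply pull more messages than the others, which is what the per-instance frame counts later in the article show.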
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render, which works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

·         Windows Azure can provide massive compute power, on demand, in a matter of minutes.
·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
·         The absence of charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm a fairly simple scenario to implement on Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
·         Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started (see the sketch after this list).
·         Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
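
As an illustration of the clock-hour tip above, the sketch below shows one way a job submission routine could hold the workload back until just after the hour boundary. It is only a sketch: the ClockHourScheduler class, the enqueueRenderJobs delegate and the two-minute start-up allowance are assumptions made for the example, not part of the CloudRay code.

using System;
using System.Threading;

static class ClockHourScheduler
{
    // Hold the workload back until just after the next clock hour so that the
    // worker role instances spend as much as possible of the billed hour busy.
    public static void RunAtNextClockHour(Action enqueueRenderJobs)
    {
        DateTime now = DateTime.UtcNow;

        // Top of the next clock hour, plus an assumed two-minute allowance for
        // the instances to finish starting up.
        DateTime startAt = new DateTime(now.Year, now.Month, now.Day, now.Hour, 0, 0, DateTimeKind.Utc)
            .AddHours(1)
            .AddMinutes(2);

        TimeSpan delay = startAt - now;
        if (delay > TimeSpan.Zero)
        {
            Thread.Sleep(delay);
        }

        enqueueRenderJobs();
    }
}

It could be called along the lines of ClockHourScheduler.RunAtNextClockHour(() => SubmitRenderJobs()), where SubmitRenderJobs is whatever routine pushes the render job messages onto the Azure Storage queue.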

    Read the article

  • Class Mapping Error: 'T' must be a non-abstract type with a public parameterless constructor

    - by Amit Ranjan
    Hi, While mapping class i am getting error 'T' must be a non-abstract type with a public parameterless constructor in order to use it as parameter 'T' in the generic type or method. Below is my SqlReaderBase Class public abstract class SqlReaderBase<T> : ConnectionProvider { #region Abstract Methods protected abstract string commandText { get; } protected abstract CommandType commandType { get; } protected abstract Collection<IDataParameter> GetParameters(IDbCommand command); **protected abstract MapperBase<T> GetMapper();** #endregion #region Non Abstract Methods /// <summary> /// Method to Execute Select Queries for Retrieveing List of Result /// </summary> /// <returns></returns> public Collection<T> ExecuteReader() { //Collection of Type on which Template is applied Collection<T> collection = new Collection<T>(); // initializing connection using (IDbConnection connection = GetConnection()) { try { // creates command for sql operations IDbCommand command = connection.CreateCommand(); // assign connection to command command.Connection = connection; // assign query command.CommandText = commandText; //state what type of query is used, text, table or Sp command.CommandType = commandType; // retrieves parameter from IDataParameter Collection and assigns it to command object foreach (IDataParameter param in GetParameters(command)) command.Parameters.Add(param); // Establishes connection with database server connection.Open(); // Since it is designed for executing Select statements that will return a list of results // so we will call command's execute reader method that return a Forward Only reader with // list of results inside. using (IDataReader reader = command.ExecuteReader()) { try { // Call to Mapper Class of the template to map the data to its // respective fields MapperBase<T> mapper = GetMapper(); collection = mapper.MapAll(reader); } catch (Exception ex) // catch exception { throw ex; // log errr } finally { reader.Close(); reader.Dispose(); } } } catch (Exception ex) { throw ex; } finally { connection.Close(); connection.Dispose(); } } return collection; } #endregion } What I am trying to do is , I am executine some command and filling my class dynamically. 
The class is given below: namespace FooZo.Core { public class Restaurant { #region Private Member Variables private int _restaurantId = 0; private string _email = string.Empty; private string _website = string.Empty; private string _name = string.Empty; private string _address = string.Empty; private string _phone = string.Empty; private bool _hasMenu = false; private string _menuImagePath = string.Empty; private int _cuisine = 0; private bool _hasBar = false; private bool _hasHomeDelivery = false; private bool _hasDineIn = false; private int _type = 0; private string _restaurantImagePath = string.Empty; private string _serviceAvailableTill = string.Empty; private string _serviceAvailableFrom = string.Empty; public string Name { get { return _name; } set { _name = value; } } public string Address { get { return _address; } set { _address = value; } } public int RestaurantId { get { return _restaurantId; } set { _restaurantId = value; } } public string Website { get { return _website; } set { _website = value; } } public string Email { get { return _email; } set { _email = value; } } public string Phone { get { return _phone; } set { _phone = value; } } public bool HasMenu { get { return _hasMenu; } set { _hasMenu = value; } } public string MenuImagePath { get { return _menuImagePath; } set { _menuImagePath = value; } } public string RestaurantImagePath { get { return _restaurantImagePath; } set { _restaurantImagePath = value; } } public int Type { get { return _type; } set { _type = value; } } public int Cuisine { get { return _cuisine; } set { _cuisine = value; } } public bool HasBar { get { return _hasBar; } set { _hasBar = value; } } public bool HasHomeDelivery { get { return _hasHomeDelivery; } set { _hasHomeDelivery = value; } } public bool HasDineIn { get { return _hasDineIn; } set { _hasDineIn = value; } } public string ServiceAvailableFrom { get { return _serviceAvailableFrom; } set { _serviceAvailableFrom = value; } } public string ServiceAvailableTill { get { return _serviceAvailableTill; } set { _serviceAvailableTill = value; } } #endregion public Restaurant() { } } } For filling my class properties dynamically i have another class called MapperBase Class with following methods: public abstract class MapperBase<T> where T : new() { protected T Map(IDataRecord record) { T instance = new T(); string fieldName; PropertyInfo[] properties = typeof(T).GetProperties(); for (int i = 0; i < record.FieldCount; i++) { fieldName = record.GetName(i); foreach (PropertyInfo property in properties) { if (property.Name == fieldName) { property.SetValue(instance, record[i], null); } } } return instance; } public Collection<T> MapAll(IDataReader reader) { Collection<T> collection = new Collection<T>(); while (reader.Read()) { collection.Add(Map(reader)); } return collection; } } There is another class which inherits the SqlreaderBaseClass called DefaultSearch. 
Code is below:

public class DefaultSearch : SqlReaderBase<Restaurant>
{
    protected override string commandText
    {
        get { return "Select Name from vw_Restaurants"; }
    }

    protected override CommandType commandType
    {
        get { return CommandType.Text; }
    }

    protected override Collection<IDataParameter> GetParameters(IDbCommand command)
    {
        Collection<IDataParameter> parameters = new Collection<IDataParameter>();
        parameters.Clear();
        return parameters;
    }

    protected override MapperBase<Restaurant> GetMapper()
    {
        MapperBase<Restaurant> mapper = new RMapper();
        return mapper;
    }
}

But whenever I try to build, I get the error "'T' must be a non-abstract type with a public parameterless constructor in order to use it as parameter 'T' in the generic type or method", even though T here is Restaurant, which has a public parameterless constructor.
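
A likely cause, sketched below with simplified stand-in types rather than the full classes from the question: MapperBase<T> is declared with a where T : new() constraint, so any generic type that exposes a MapperBase<T> (here SqlReaderBase<T>) has to repeat that constraint, otherwise the compiler raises exactly this error regardless of whether the concrete T has a parameterless constructor.

// Simplified stand-ins for the classes in the question, just to show the constraint.
public abstract class MapperBase<T> where T : new()
{
    protected T Map()
    {
        return new T();
    }
}

// Without the trailing "where T : new()" the compiler reports:
// "'T' must be a non-abstract type with a public parameterless constructor
//  in order to use it as parameter 'T' in the generic type or method"
// because MapperBase<T> demands that guarantee and SqlReaderBase<T> would not pass it on.
public abstract class SqlReaderBase<T> where T : new()
{
    protected abstract MapperBase<T> GetMapper();
}

public class Restaurant
{
    public Restaurant() { }
}

Applying the same where T : new() clause to the real SqlReaderBase<T> from the question should let DefaultSearch compile unchanged.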

    Read the article

  • Compare images after canny edge detection in OpenCV (C++)

    - by typoknig
    Hi all, I am working on an OpenCV project and I need to compare some images after canny has been applied to both of them. Before the canny was applied I had the gray scale images populating a histogram and then I compared the histograms, but when canny is added to the images the histogram does not populate. I have read that a canny image can populate a histogram, but have not found a way to make it happen. I do not necessairly need to keep using the histograms, I just want to know the best way to compare two canny images. SSCCE below for you to chew on. I have poached and patched about 75% of this code from books and various sites on the internet, so props to those guys... // SLC (Histogram).cpp : Defines the entry point for the console application. #include "stdafx.h" #include <cxcore.h> #include <cv.h> #include <cvaux.h> #include <highgui.h> #include <stdio.h> #include <sstream> #include <iostream> using namespace std; IplImage* image1= 0; IplImage* imgHistogram1 = 0; IplImage* gray1= 0; CvHistogram* hist1; int main(){ CvCapture* capture = cvCaptureFromCAM(0); if(!cvQueryFrame(capture)){ cout<<"Video capture failed, please check the camera."<<endl; } else{ cout<<"Video camera capture successful!"<<endl; }; CvSize sz = cvGetSize(cvQueryFrame(capture)); IplImage* image = cvCreateImage(sz, 8, 3); IplImage* imgHistogram = 0; IplImage* gray = 0; CvHistogram* hist; cvNamedWindow("Image Source",1); cvNamedWindow("gray", 1); cvNamedWindow("Histogram",1); cvNamedWindow("BG", 1); cvNamedWindow("FG", 1); cvNamedWindow("Canny",1); cvNamedWindow("Canny1", 1); image1 = cvLoadImage("image bin/use this image.jpg");// an image has to load here or the program will not run //size of the histogram -1D histogram int bins1 = 256; int hsize1[] = {bins1}; //max and min value of the histogram float max_value1 = 0, min_value1 = 0; //value and normalized value float value1; int normalized1; //ranges - grayscale 0 to 256 float xranges1[] = { 0, 256 }; float* ranges1[] = { xranges1 }; //create an 8 bit single channel image to hold a //grayscale version of the original picture gray1 = cvCreateImage( cvGetSize(image1), 8, 1 ); cvCvtColor( image1, gray1, CV_BGR2GRAY ); IplImage* canny1 = cvCreateImage(cvGetSize(gray1), 8, 1 ); cvCanny( gray1, canny1, 55, 175, 3 ); //Create 3 windows to show the results cvNamedWindow("original1",1); cvNamedWindow("gray1",1); cvNamedWindow("histogram1",1); //planes to obtain the histogram, in this case just one IplImage* planes1[] = { canny1 }; //get the histogram and some info about it hist1 = cvCreateHist( 1, hsize1, CV_HIST_ARRAY, ranges1,1); cvCalcHist( planes1, hist1, 0, NULL); cvGetMinMaxHistValue( hist1, &min_value1, &max_value1); printf("min: %f, max: %f\n", min_value1, max_value1); //create an 8 bits single channel image to hold the histogram //paint it white imgHistogram1 = cvCreateImage(cvSize(bins1, 50),8,1); cvRectangle(imgHistogram1, cvPoint(0,0), cvPoint(256,50), CV_RGB(255,255,255),-1); //draw the histogram :P for(int i=0; i < bins1; i++){ value1 = cvQueryHistValue_1D( hist1, i); normalized1 = cvRound(value1*50/max_value1); cvLine(imgHistogram1,cvPoint(i,50), cvPoint(i,50-normalized1), CV_RGB(0,0,0)); } //show the image results cvShowImage( "original1", image1 ); cvShowImage( "gray1", gray1 ); cvShowImage( "histogram1", imgHistogram1 ); cvShowImage( "Canny1", canny1); CvBGStatModel* bg_model = cvCreateFGDStatModel( image ); for(;;){ image = cvQueryFrame(capture); cvUpdateBGStatModel( image, bg_model ); //Size of the histogram -1D histogram int bins = 256; int hsize[] 
= {bins}; //Max and min value of the histogram float max_value = 0, min_value = 0; //Value and normalized value float value; int normalized; //Ranges - grayscale 0 to 256 float xranges[] = {0, 256}; float* ranges[] = {xranges}; //Create an 8 bit single channel image to hold a grayscale version of the original picture gray = cvCreateImage(cvGetSize(image), 8, 1); cvCvtColor(image, gray, CV_BGR2GRAY); IplImage* canny = cvCreateImage(cvGetSize(gray), 8, 1 ); cvCanny( gray, canny, 55, 175, 3 );//55, 175, 3 with direct light //Planes to obtain the histogram, in this case just one IplImage* planes[] = {canny}; //Get the histogram and some info about it hist = cvCreateHist(1, hsize, CV_HIST_ARRAY, ranges,1); cvCalcHist(planes, hist, 0, NULL); cvGetMinMaxHistValue(hist, &min_value, &max_value); //printf("Minimum Histogram Value: %f, Maximum Histogram Value: %f\n", min_value, max_value); //Create an 8 bits single channel image to hold the histogram and paint it white imgHistogram = cvCreateImage(cvSize(bins, 50),8,3); cvRectangle(imgHistogram, cvPoint(0,0), cvPoint(256,50), CV_RGB(255,255,255),-1); //Draw the histogram for(int i=0; i < bins; i++){ value = cvQueryHistValue_1D(hist, i); normalized = cvRound(value*50/max_value); cvLine(imgHistogram,cvPoint(i,50), cvPoint(i,50-normalized), CV_RGB(0,0,0)); } double correlation = cvCompareHist (hist1, hist, CV_COMP_CORREL); double chisquare = cvCompareHist (hist1, hist, CV_COMP_CHISQR); double intersection = cvCompareHist (hist1, hist, CV_COMP_INTERSECT); double bhattacharyya = cvCompareHist (hist1, hist, CV_COMP_BHATTACHARYYA); double difference = (1 - correlation) + chisquare + (1 - intersection) + bhattacharyya; printf("correlation: %f\n", correlation); printf("chi-square: %f\n", chisquare); printf("intersection: %f\n", intersection); printf("bhattacharyya: %f\n", bhattacharyya); printf("difference: %f\n", difference); cvShowImage("Image Source", image); cvShowImage("gray", gray); cvShowImage("Histogram", imgHistogram); cvShowImage( "Canny", canny); cvShowImage("BG", bg_model->background); cvShowImage("FG", bg_model->foreground); //Page 19 paragraph 3 of "Learning OpenCV" tells us why we DO NOT use "cvReleaseImage(&image)" in this section cvReleaseImage(&imgHistogram); cvReleaseImage(&gray); cvReleaseHist(&hist); cvReleaseImage(&canny); char c = cvWaitKey(10); //if ASCII key 27 (esc) is pressed then loop breaks if(c==27) break; } cvReleaseBGStatModel( &bg_model ); cvReleaseImage(&image); cvReleaseCapture(&capture); cvDestroyAllWindows(); }

    Read the article

  • Custom filtering in Android using ArrayAdapter

    - by Alxandr
    I'm trying to filter my ListView which is populated with this ArrayAdapter: package me.alxandr.android.mymir.adapters; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.Iterator; import java.util.Set; import me.alxandr.android.mymir.R; import me.alxandr.android.mymir.model.Manga; import android.content.Context; import android.util.Log; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ArrayAdapter; import android.widget.Filter; import android.widget.SectionIndexer; import android.widget.TextView; public class MangaListAdapter extends ArrayAdapter<Manga> implements SectionIndexer { public ArrayList<Manga> items; public ArrayList<Manga> filtered; private Context context; private HashMap<String, Integer> alphaIndexer; private String[] sections = new String[0]; private Filter filter; private boolean enableSections; public MangaListAdapter(Context context, int textViewResourceId, ArrayList<Manga> items, boolean enableSections) { super(context, textViewResourceId, items); this.filtered = items; this.items = filtered; this.context = context; this.filter = new MangaNameFilter(); this.enableSections = enableSections; if(enableSections) { alphaIndexer = new HashMap<String, Integer>(); for(int i = items.size() - 1; i >= 0; i--) { Manga element = items.get(i); String firstChar = element.getName().substring(0, 1).toUpperCase(); if(firstChar.charAt(0) > 'Z' || firstChar.charAt(0) < 'A') firstChar = "@"; alphaIndexer.put(firstChar, i); } Set<String> keys = alphaIndexer.keySet(); Iterator<String> it = keys.iterator(); ArrayList<String> keyList = new ArrayList<String>(); while(it.hasNext()) keyList.add(it.next()); Collections.sort(keyList); sections = new String[keyList.size()]; keyList.toArray(sections); } } @Override public View getView(int position, View convertView, ViewGroup parent) { View v = convertView; if(v == null) { LayoutInflater vi = (LayoutInflater)context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); v = vi.inflate(R.layout.mangarow, null); } Manga o = items.get(position); if(o != null) { TextView tt = (TextView) v.findViewById(R.id.MangaRow_MangaName); TextView bt = (TextView) v.findViewById(R.id.MangaRow_MangaExtra); if(tt != null) tt.setText(o.getName()); if(bt != null) bt.setText(o.getLastUpdated() + " - " + o.getLatestChapter()); if(enableSections && getSectionForPosition(position) != getSectionForPosition(position + 1)) { TextView h = (TextView) v.findViewById(R.id.MangaRow_Header); h.setText(sections[getSectionForPosition(position)]); h.setVisibility(View.VISIBLE); } else { TextView h = (TextView) v.findViewById(R.id.MangaRow_Header); h.setVisibility(View.GONE); } } return v; } @Override public void notifyDataSetInvalidated() { if(enableSections) { for (int i = items.size() - 1; i >= 0; i--) { Manga element = items.get(i); String firstChar = element.getName().substring(0, 1).toUpperCase(); if(firstChar.charAt(0) > 'Z' || firstChar.charAt(0) < 'A') firstChar = "@"; alphaIndexer.put(firstChar, i); } Set<String> keys = alphaIndexer.keySet(); Iterator<String> it = keys.iterator(); ArrayList<String> keyList = new ArrayList<String>(); while (it.hasNext()) { keyList.add(it.next()); } Collections.sort(keyList); sections = new String[keyList.size()]; keyList.toArray(sections); super.notifyDataSetInvalidated(); } } public int getPositionForSection(int section) { if(!enableSections) return 0; String letter = sections[section]; return 
alphaIndexer.get(letter); } public int getSectionForPosition(int position) { if(!enableSections) return 0; int prevIndex = 0; for(int i = 0; i < sections.length; i++) { if(getPositionForSection(i) > position && prevIndex <= position) { prevIndex = i; break; } prevIndex = i; } return prevIndex; } public Object[] getSections() { return sections; } @Override public Filter getFilter() { if(filter == null) filter = new MangaNameFilter(); return filter; } private class MangaNameFilter extends Filter { @Override protected FilterResults performFiltering(CharSequence constraint) { // NOTE: this function is *always* called from a background thread, and // not the UI thread. constraint = constraint.toString().toLowerCase(); FilterResults result = new FilterResults(); if(constraint != null && constraint.toString().length() > 0) { ArrayList<Manga> filt = new ArrayList<Manga>(); ArrayList<Manga> lItems = new ArrayList<Manga>(); synchronized (items) { Collections.copy(lItems, items); } for(int i = 0, l = lItems.size(); i < l; i++) { Manga m = lItems.get(i); if(m.getName().toLowerCase().contains(constraint)) filt.add(m); } result.count = filt.size(); result.values = filt; } else { synchronized(items) { result.values = items; result.count = items.size(); } } return result; } @SuppressWarnings("unchecked") @Override protected void publishResults(CharSequence constraint, FilterResults results) { // NOTE: this function is *always* called from the UI thread. filtered = (ArrayList<Manga>)results.values; notifyDataSetChanged(); } } } However, when I call filter('test') on the filter nothing happens at all (or the background-thread is run, but the list isn't filtered as far as the user conserns). How can I fix this?

    Read the article

  • Remove redundant xml namespaces from soapenv:Body

    - by drachenstern
    If you can tell me the magic google term that instantly gives me clarification, that would be helpful. Here's the part that's throwing an issue when I try to manually deserialize from a string: xsi:type="ns1:errorObject" xmlns:ns1="http://www.example.org/Version_3.0" xsi:type="ns2:errorObject" xmlns:ns2="http://www.example.org/Version_3.0" xsi:type="ns3:errorObject" xmlns:ns3="http://www.example.org/Version_3.0" Here's how I'm deserializing by hand to test it: (in an aspx.cs page with a label on the front to display the value in that I can verify by reading source) (second block of XML duplicates the first but without the extra namespaces) using System; using System.IO; using System.Text; using System.Xml; using System.Xml.Serialization; public partial class test : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { string sourceXml = @"<?xml version=""1.0""?> <InitiateActivityResponse xmlns=""http://www.example.org/Version_3.0""> <InitiateActivityResult> <errorObject errorString=""string 1"" eventTime=""2010-05-21T21:19:15.775Z"" nounType=""Object"" objectID=""object1"" xsi:type=""ns1:errorObject"" xmlns:ns1=""http://www.example.org/Version_3.0"" /> <errorObject errorString=""string 2"" eventTime=""2010-05-21T21:19:15.791Z"" nounType=""Object"" objectID=""object2"" xsi:type=""ns2:errorObject"" xmlns:ns2=""http://www.example.org/Version_3.0"" /> <errorObject errorString=""string 3"" eventTime=""2010-05-21T21:19:15.806Z"" nounType=""Object"" objectID=""object3"" xsi:type=""ns3:errorObject"" xmlns:ns3=""http://www.example.org/Version_3.0"" /> </InitiateActivityResult> </InitiateActivityResponse> "; sourceXml = @"<?xml version=""1.0""?> <InitiateActivityResponse xmlns=""http://www.example.org/Version_3.0""> <InitiateActivityResult> <errorObject errorString=""string 1"" eventTime=""2010-05-21T21:19:15.775Z"" nounType=""Object"" objectID=""object1"" /> <errorObject errorString=""string 2"" eventTime=""2010-05-21T21:19:15.791Z"" nounType=""Object"" objectID=""object2"" /> <errorObject errorString=""string 3"" eventTime=""2010-05-21T21:19:15.806Z"" nounType=""Object"" objectID=""object3"" /> </InitiateActivityResult> </InitiateActivityResponse> "; InitiateActivityResponse fragment = new InitiateActivityResponse(); Type t = typeof( InitiateActivityResponse ); StringBuilder sb = new StringBuilder(); TextWriter textWriter = new StringWriter( sb ); TextReader textReader = new StringReader( sourceXml ); XmlTextReader xmlTextReader = new XmlTextReader( textReader ); XmlSerializer xmlSerializer = new XmlSerializer( t ); object obj = xmlSerializer.Deserialize( xmlTextReader ); fragment = (InitiateActivityResponse)obj; xmlSerializer.Serialize( textWriter, fragment ); //I have a field on my public page that I write to from sb.ToString(); } } Consuming a webservice, I have a class like thus: (all examples foreshortened to as little as possible to show the problem, if boilerplate is missing, my apologies) (this is where I think I want to remove the troublespot) [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute( "code" )] [System.Web.Services.WebServiceBindingAttribute( Name = "MyServerSoapSoapBinding", Namespace = "http://www.example.org/Version_3.0" )] public partial class MyServer : System.Web.Services.Protocols.SoapHttpClientProtocol { public MsgHeader msgHeader { get; set; } public MyServer () { this.Url = "localAddressOmittedOnPurpose"; } [System.Web.Services.Protocols.SoapHeaderAttribute( "msgHeader" )] 
[System.Web.Services.Protocols.SoapDocumentMethodAttribute( "http://www.example.org/Version_3.0/InitiateActivity", RequestNamespace = "http://www.example.org/Version_3.0", ResponseNamespace = "http://www.example.org/Version_3.0", Use = System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle = System.Web.Services.Protocols.SoapParameterStyle.Wrapped )] [return: System.Xml.Serialization.XmlElementAttribute( "InitiateActivityResponse" )] public InitiateActivityResponse InitiateActivity(string inputVar) { object[] results = Invoke( "InitiateActivity", new object[] { inputVar } ); return ( (InitiateActivityResponse)( results[0] ) ); } } Class descriptions [System.SerializableAttribute] [System.Diagnostics.DebuggerStepThroughAttribute] [System.ComponentModel.DesignerCategoryAttribute( "code" )] [XmlType( Namespace = "http://www.example.org/Version_3.0", TypeName = "InitiateActivityResponse" )] [XmlRoot( Namespace = "http://www.example.org/Version_3.0" )] public class InitiateActivityResponse { [XmlArray( ElementName = "InitiateActivityResult", IsNullable = true )] [XmlArrayItem( ElementName = "errorObject", IsNullable = false )] public errorObject[] errorObject { get; set; } } [System.SerializableAttribute] [System.Diagnostics.DebuggerStepThroughAttribute] [System.ComponentModel.DesignerCategoryAttribute( "code" )] [XmlTypeAttribute( Namespace = "http://www.example.org/Version_3.0" )] public class errorObject { private string _errorString; private System.DateTime _eventTime; private bool _eventTimeSpecified; private string _nounType; private string _objectID; [XmlAttributeAttribute] public string errorString { get { return _errorString; } set { _errorString = value; } } [XmlAttributeAttribute] public System.DateTime eventTime { get { return _eventTime; } set { _eventTime = value; } } [XmlIgnoreAttribute] public bool eventTimeSpecified { get { return _eventTimeSpecified; } set { _eventTimeSpecified = value; } } [XmlAttributeAttribute] public string nounType { get { return _nounType; } set { _nounType = value; } } [XmlAttributeAttribute] public string objectID { get { return _objectID; } set { _objectID = value; } } } SOAP as it's being received (as seen by Fiddler2) <?xml version="1.0" encoding="utf-8"?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soapenv:Header> <MsgHeader soapenv:actor="http://schemas.xmlsoap.org/soap/actor/next" soapenv:mustUnderstand="0" AppName="AppName" AppVersion="1.0" Company="Company" Pwd="" UserID="" xmlns="http://www.example.org/Version_3.0"/> </soapenv:Header> <soapenv:Body> <InitiateActivityResponse xmlns="http://www.example.org/Version_3.0"> <InitiateActivityResult> <errorObject errorString="Explanatory string for request 1" eventTime="2010-05-24T21:21:37.477Z" nounType="Object" objectID="12345" xsi:type="ns1:errorObject" xmlns:ns1="http://www.example.org/Version_3.0"/> <errorObject errorString="Explanatory string for request 2" eventTime="2010-05-24T21:21:37.493Z" nounType="Object" objectID="45678" xsi:type="ns2:errorObject" xmlns:ns2="http://www.example.org/Version_3.0"/> <errorObject errorString="Explanatory string for request 3" eventTime="2010-05-24T21:21:37.508Z" nounType="Object" objectID="98765" xsi:type="ns3:errorObject" xmlns:ns3="http://www.example.org/Version_3.0"/> </InitiateActivityResult> </InitiateActivityResponse> </soapenv:Body> </soapenv:Envelope> Okay, what should I have not omitted? 
No, I won't post the WSDL; it's hosted behind a firewall, for a vertical stack product. No, I can't change the data sender. Is this somehow automagically handled elsewhere and I just don't know what I don't know? I think I want to do some sort of message sink leading into this method, to intercept the soapenv:Body, but obviously this is for errors, so I'm not going to get errors every time. I'm not entirely sure how to handle this, but some pointers would be nice.
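
One possible direction, sketched below with LINQ to XML: parse the whole envelope (where the xsi prefix is still declared), pull out the response element, and strip the xsi:type attributes and leftover namespace-declaration attributes before handing the fragment to XmlSerializer. The SoapBodyCleaner class and method names are made up for the example, and this assumes the plain attributes on errorObject are all that is actually needed for deserialization.

using System.Linq;
using System.Xml.Linq;
using System.Xml.Serialization;

static class SoapBodyCleaner
{
    static readonly XNamespace Xsi = "http://www.w3.org/2001/XMLSchema-instance";

    // Parse the full envelope (the xsi prefix is in scope there), locate the body
    // element we care about, drop xsi:type and redundant namespace-declaration
    // attributes, then deserialize the cleaned fragment.
    public static T DeserializeBody<T>(string soapEnvelopeXml, XName bodyElementName)
    {
        XDocument envelope = XDocument.Parse(soapEnvelopeXml);
        XElement body = envelope.Descendants(bodyElementName).First();

        var unwanted = body.DescendantsAndSelf()
            .Attributes()
            .Where(a => a.Name == Xsi + "type" || a.IsNamespaceDeclaration)
            .ToList();
        foreach (XAttribute attribute in unwanted)
        {
            attribute.Remove();
        }

        var serializer = new XmlSerializer(typeof(T));
        using (var reader = body.CreateReader())
        {
            return (T)serializer.Deserialize(reader);
        }
    }
}

Usage would look something like SoapBodyCleaner.DeserializeBody<InitiateActivityResponse>(soapXml, XName.Get("InitiateActivityResponse", "http://www.example.org/Version_3.0")); removing the declaration attributes is safe here because LINQ to XML keeps the element names in their namespace and re-emits the declarations it needs when the fragment is read back.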

    Read the article

  • Linq to LLBLGen query problem

    - by Jeroen Breuer
    Hello, I've got a Stored Procedure and i'm trying to convert it to a Linq to LLBLGen query. The query in Linq to LLBGen works, but when I trace the query which is send to sql server it is far from perfect. This is the Stored Procedure: ALTER PROCEDURE [dbo].[spDIGI_GetAllUmbracoProducts] -- Add the parameters for the stored procedure. @searchText nvarchar(255), @startRowIndex int, @maximumRows int, @sortExpression nvarchar(255) AS BEGIN SET @startRowIndex = @startRowIndex + 1 SET @searchText = '%' + @searchText + '%' -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- This is the query which will fetch all the UmbracoProducts. -- This query also supports paging and sorting. WITH UmbracoOverview As ( SELECT ROW_NUMBER() OVER( ORDER BY CASE WHEN @sortExpression = 'productName' THEN umbracoProduct.productName WHEN @sortExpression = 'productCode' THEN umbracoProduct.productCode END ASC, CASE WHEN @sortExpression = 'productName DESC' THEN umbracoProduct.productName WHEN @sortExpression = 'productCode DESC' THEN umbracoProduct.productCode END DESC ) AS row_num, umbracoProduct.umbracoProductId, umbracoProduct.productName, umbracoProduct.productCode FROM umbracoProduct INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId WHERE (umbracoProduct.productName LIKE @searchText OR umbracoProduct.productCode LIKE @searchText OR product.code LIKE @searchText OR product.description LIKE @searchText OR product.descriptionLong LIKE @searchText OR product.unitCode LIKE @searchText) ) SELECT UmbracoOverview.UmbracoProductId, UmbracoOverview.productName, UmbracoOverview.productCode FROM UmbracoOverview WHERE (row_num >= @startRowIndex AND row_num < (@startRowIndex + @maximumRows)) -- This query will count all the UmbracoProducts. -- This query is used for paging inside ASP.NET. SELECT COUNT (umbracoProduct.umbracoProductId) AS CountNumber FROM umbracoProduct INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId WHERE (umbracoProduct.productName LIKE @searchText OR umbracoProduct.productCode LIKE @searchText OR product.code LIKE @searchText OR product.description LIKE @searchText OR product.descriptionLong LIKE @searchText OR product.unitCode LIKE @searchText) END This is my Linq to LLBLGen query: using System.Linq.Dynamic; var q = ( from up in MetaData.UmbracoProduct join p in MetaData.Product on up.UmbracoProductId equals p.UmbracoProductId where up.ProductCode.Contains(searchText) || up.ProductName.Contains(searchText) || p.Code.Contains(searchText) || p.Description.Contains(searchText) || p.DescriptionLong.Contains(searchText) || p.UnitCode.Contains(searchText) select new UmbracoProductOverview { UmbracoProductId = up.UmbracoProductId, ProductName = up.ProductName, ProductCode = up.ProductCode } ).OrderBy(sortExpression); //Save the count in HttpContext.Current.Items. This value will only be saved during 1 single HTTP request. HttpContext.Current.Items["AllProductsCount"] = q.Count(); //Returns the results paged. 
return q.Skip(startRowIndex).Take(maximumRows).ToList<UmbracoProductOverview>(); This is my Initial expression to process: value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.UmbracoProductEntity]).Join(value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.ProductEntity]), up => up.UmbracoProductId, p => p.UmbracoProductId, (up, p) => new <>f__AnonymousType0`2(up = up, p = p)).Where(<>h__TransparentIdentifier0 => (((((<>h__TransparentIdentifier0.up.ProductCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText) || <>h__TransparentIdentifier0.up.ProductName.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.Code.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.Description.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.DescriptionLong.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.UnitCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText))).Select(<>h__TransparentIdentifier0 => new UmbracoProductOverview() {UmbracoProductId = <>h__TransparentIdentifier0.up.UmbracoProductId, ProductName = <>h__TransparentIdentifier0.up.ProductName, ProductCode = <>h__TransparentIdentifier0.up.ProductCode}).OrderBy( => .ProductName).Count() Now this is how the queries look like that are send to sql server: Select query: Query: SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId], [LPA_L2].[productName] AS [ProductName], [LPA_L2].[productCode] AS [ProductCode] FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2] INNER JOIN [eurofysica].[dbo].[product] [LPA_L3] ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId]) WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1) OR ( [LPA_L2].[productName] LIKE @ProductName2)) OR ( [LPA_L3].[code] LIKE @Code3)) OR ( [LPA_L3].[description] LIKE @Description4)) OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5)) OR ( [LPA_L3].[unitCode] LIKE @UnitCode6)))) Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". 
Count query: Query: SELECT TOP 1 COUNT(*) AS [LPAV_] FROM (SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId], [LPA_L2].[productName] AS [ProductName], [LPA_L2].[productCode] AS [ProductCode] FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2] INNER JOIN [eurofysica].[dbo].[product] [LPA_L3] ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId]) WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1) OR ( [LPA_L2].[productName] LIKE @ProductName2)) OR ( [LPA_L3].[code] LIKE @Code3)) OR ( [LPA_L3].[description] LIKE @Description4)) OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5)) OR ( [LPA_L3].[unitCode] LIKE @UnitCode6))))) [LPA_L1] Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". As you can see no sorting or paging is done (like in my Stored Procedure). This is probably done inside the code after all the results are fetched. This costs a lot of performance! Does anybody know how I can convert my Stored Procedure to Linq to LLBLGen the proper way?

    Read the article

  • What is the fastest cyclic synchronization in Java (ExecutorService vs. CyclicBarrier vs. X)?

    - by Alex Dunlop
    Which Java synchronization construct is likely to provide the best performance for a concurrent, iterative processing scenario with a fixed number of threads like the one outlined below? After experimenting on my own for a while (using ExecutorService and CyclicBarrier) and being somewhat surprised by the results, I would be grateful for some expert advice and maybe some new ideas. Existing questions here do not seem to focus primarily on performance, hence this new one. Thanks in advance! The core of the app is a simple iterative data processing algorithm, parallelized to the spread the computational load across 8 cores on a Mac Pro, running OS X 10.6 and Java 1.6.0_07. The data to be processed is split into 8 blocks and each block is fed to a Runnable to be executed by one of a fixed number of threads. Parallelizing the algorithm was fairly straightforward, and it functionally works as desired, but its performance is not yet what I think it could be. The app seems to spend a lot of time in system calls synchronizing, so after some profiling I wonder whether I selected the most appropriate synchronization mechanism(s). A key requirement of the algorithm is that it needs to proceed in stages, so the threads need to sync up at the end of each stage. The main thread prepares the work (very low overhead), passes it to the threads, lets them work on it, then proceeds when all threads are done, rearranges the work (again very low overhead) and repeats the cycle. The machine is dedicated to this task, Garbage Collection is minimized by using per-thread pools of pre-allocated items, and the number of threads can be fixed (no incoming requests or the like, just one thread per CPU core). V1 - ExecutorService My first implementation used an ExecutorService with 8 worker threads. The program creates 8 tasks holding the work and then lets them work on it, roughly like this: // create one thread per CPU executorService = Executors.newFixedThreadPool( 8 ); ... // now process data in cycles while( ...) { // package data into 8 work items ... // create one Callable task per work item ... // submit the Callables to the worker threads executorService.invokeAll( taskList ); } This works well functionally (it does what it should), and for very large work items indeed all 8 CPUs become highly loaded, as much as the processing algorithm would be expected to allow (some work items will finish faster than others, then idle). However, as the work items become smaller (and this is not really under the program's control), the user CPU load shrinks dramatically: blocksize | system | user | cycles/sec 256k 1.8% 85% 1.30 64k 2.5% 77% 5.6 16k 4% 64% 22.5 4096 8% 56% 86 1024 13% 38% 227 256 17% 19% 420 64 19% 17% 948 16 19% 13% 1626 Legend: - block size = size of the work item (= computational steps) - system = system load, as shown in OS X Activity Monitor (red bar) - user = user load, as shown in OS X Activity Monitor (green bar) - cycles/sec = iterations through the main while loop, more is better The primary area of concern here is the high percentage of time spent in the system, which appears to be driven by thread synchronization calls. As expected, for smaller work items, ExecutorService.invokeAll() will require relatively more effort to sync up the threads versus the amount of work being performed in each thread. 
But since ExecutorService is more generic than it would need to be for this use case (it can queue tasks for threads if there are more tasks than cores), I though maybe there would be a leaner synchronization construct. V2 - CyclicBarrier The next implementation used a CyclicBarrier to sync up the threads before receiving work and after completing it, roughly as follows: main() { // create the barrier barrier = new CyclicBarrier( 8 + 1 ); // create Runable for thread, tell it about the barrier Runnable task = new WorkerThreadRunnable( barrier ); // start the threads for( int i = 0; i < 8; i++ ) { // create one thread per core new Thread( task ).start(); } while( ... ) { // tell threads about the work ... // N threads + this will call await(), then system proceeds barrier.await(); // ... now worker threads work on the work... // wait for worker threads to finish barrier.await(); } } class WorkerThreadRunnable implements Runnable { CyclicBarrier barrier; WorkerThreadRunnable( CyclicBarrier barrier ) { this.barrier = barrier; } public void run() { while( true ) { // wait for work barrier.await(); // do the work ... // wait for everyone else to finish barrier.await(); } } } Again, this works well functionally (it does what it should), and for very large work items indeed all 8 CPUs become highly loaded, as before. However, as the work items become smaller, the load still shrinks dramatically: blocksize | system | user | cycles/sec 256k 1.9% 85% 1.30 64k 2.7% 78% 6.1 16k 5.5% 52% 25 4096 9% 29% 64 1024 11% 15% 117 256 12% 8% 169 64 12% 6.5% 285 16 12% 6% 377 For large work items, synchronization is negligible and the performance is identical to V1. But unexpectedly, the results of the (highly specialized) CyclicBarrier seem MUCH WORSE than those for the (generic) ExecutorService: throughput (cycles/sec) is only about 1/4th of V1. A preliminary conclusion would be that even though this seems to be the advertised ideal use case for CyclicBarrier, it performs much worse than the generic ExecutorService. V3 - Wait/Notify + CyclicBarrier It seemed worth a try to replace the first cyclic barrier await() with a simple wait/notify mechanism: main() { // create the barrier // create Runable for thread, tell it about the barrier // start the threads while( ... ) { // tell threads about the work // for each: workerThreadRunnable.setWorkItem( ... ); // ... now worker threads work on the work... // wait for worker threads to finish barrier.await(); } } class WorkerThreadRunnable implements Runnable { CyclicBarrier barrier; @NotNull volatile private Callable<Integer> workItem; WorkerThreadRunnable( CyclicBarrier barrier ) { this.barrier = barrier; this.workItem = NO_WORK; } final protected void setWorkItem( @NotNull final Callable<Integer> callable ) { synchronized( this ) { workItem = callable; notify(); } } public void run() { while( true ) { // wait for work while( true ) { synchronized( this ) { if( workItem != NO_WORK ) break; try { wait(); } catch( InterruptedException e ) { e.printStackTrace(); } } } // do the work ... // wait for everyone else to finish barrier.await(); } } } Again, this works well functionally (it does what it should). blocksize | system | user | cycles/sec 256k 1.9% 85% 1.30 64k 2.4% 80% 6.3 16k 4.6% 60% 30.1 4096 8.6% 41% 98.5 1024 12% 23% 202 256 14% 11.6% 299 64 14% 10.0% 518 16 14.8% 8.7% 679 The throughput for small work items is still much worse than that of the ExecutorService, but about 2x that of the CyclicBarrier. Eliminating one CyclicBarrier eliminates half of the gap. 
V4 - Busy wait instead of wait/notify Since this app is the primary one running on the system and the cores idle anyway if they're not busy with a work item, why not try a busy wait for work items in each thread, even if that spins the CPU needlessly. The worker thread code changes as follows: class WorkerThreadRunnable implements Runnable { // as before final protected void setWorkItem( @NotNull final Callable<Integer> callable ) { workItem = callable; } public void run() { while( true ) { // busy-wait for work while( true ) { if( workItem != NO_WORK ) break; } // do the work ... // wait for everyone else to finish barrier.await(); } } } Also works well functionally (it does what it should). blocksize | system | user | cycles/sec 256k 1.9% 85% 1.30 64k 2.2% 81% 6.3 16k 4.2% 62% 33 4096 7.5% 40% 107 1024 10.4% 23% 210 256 12.0% 12.0% 310 64 11.9% 10.2% 550 16 12.2% 8.6% 741 For small work items, this increases throughput by a further 10% over the CyclicBarrier + wait/notify variant, which is not insignificant. But it is still much lower-throughput than V1 with the ExecutorService. V5 - ? So what is the best synchronization mechanism for such a (presumably not uncommon) problem? I am weary of writing my own sync mechanism to completely replace ExecutorService (assuming that it is too generic and there has to be something that can still be taken out to make it more efficient). It is not my area of expertise and I'm concerned that I'd spend a lot of time debugging it (since I'm not even sure my wait/notify and busy wait variants are correct) for uncertain gain. Any advice would be greatly appreciated.

    Read the article

  • IText can't keep rows together, second row spans multiple pages but won't stick with first row.

    - by J2SE31
    I am having trouble keeping my first and second rows of my main PDFPTable together using IText. My first row consists of a PDFPTable with some basic search criteria. My second row consists of a PdfPTable that contains all of the tabulated results. Everytime the tabulated results becomes too big and spans multiple pages, it is kicked to the second page automatically rather than showing up directly below the search criteria and then paging to the next page. How can I avoid this problem? I have tried using setSplitRows(false), but I simply get a blank document (see commented lines 117 and 170). How can I keep my tabulated data (second row) up on the first page? An example of my code is shown below (you should be able to just copy/paste). public class TestHelper{ private TestEventHelper helper; public TestHelper(){ super(); helper = new TestEventHelper(); } public TestEventHelper getHelper() { return helper; } public void setHelper(TestEventHelper helper) { this.helper = helper; } public static void main(String[] args){ TestHelper test = new TestHelper(); TestEventHelper helper = test.getHelper(); FileOutputStream file = null; Document document = null; PdfWriter writer = null; try { file = new FileOutputStream(new File("C://Documents and Settings//All Users//Desktop//pdffile2.pdf")); document = new Document(PageSize.A4.rotate(), 36, 36, 36, 36); writer = PdfWriter.getInstance(document, file); // writer.setPageEvent(templateHelper); writer.setPdfVersion(PdfWriter.PDF_VERSION_1_7); writer.setUserunit(1f); document.open(); List<Element> pages = null; try { pages = helper.createTemplate(); } catch (Exception e) { e.printStackTrace(); } Iterator<Element> iterator = pages.iterator(); while (iterator.hasNext()) { Element element = iterator.next(); if (element instanceof Phrase) { document.newPage(); } else { document.add(element); } } } catch (Exception de) { de.printStackTrace(); // log.debug("Exception " + de + " " + de.getMessage()); } finally { if (document != null) { document.close(); } if (writer != null) { writer.close(); } } System.out.println("Done!"); } private class TestEventHelper extends PdfPageEventHelper{ // The PdfTemplate that contains the total number of pages. 
protected PdfTemplate total; protected BaseFont helv; private static final float SMALL_MARGIN = 20f; private static final float MARGIN = 36f; private final Font font = new Font(Font.HELVETICA, 12, Font.BOLD); private final Font font2 = new Font(Font.HELVETICA, 10, Font.BOLD); private final Font smallFont = new Font(Font.HELVETICA, 10, Font.NORMAL); private String[] datatableHeaderFields = new String[]{"Header1", "Header2", "Header3", "Header4", "Header5", "Header6", "Header7", "Header8", "Header9"}; public TestEventHelper(){ super(); } public List<Element> createTemplate() throws Exception { List<Element> elementList = new ArrayList<Element>(); float[] tableWidths = new float[]{1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.25f, 1.25f, 1.25f, 1.25f}; // logger.debug("entering create reports template..."); PdfPTable splitTable = new PdfPTable(1); splitTable.setSplitRows(false); splitTable.setWidthPercentage(100f); PdfPTable pageTable = new PdfPTable(1); pageTable.setKeepTogether(true); pageTable.setWidthPercentage(100f); PdfPTable searchTable = generateSearchFields(); if(searchTable != null){ searchTable.setSpacingAfter(25f); } PdfPTable outlineTable = new PdfPTable(1); outlineTable.setKeepTogether(true); outlineTable.setWidthPercentage(100f); PdfPTable datatable = new PdfPTable(datatableHeaderFields.length); datatable.setKeepTogether(false); datatable.setWidths(tableWidths); generateDatatableHeader(datatable); for(int i = 0; i < 100; i++){ addCell(datatable, String.valueOf(i), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true); addCell(datatable, String.valueOf(i+1), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true); addCell(datatable, String.valueOf(i+2), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true); addCell(datatable, String.valueOf(i+3), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true); addCell(datatable, String.valueOf(i+4), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true); addCell(datatable, String.valueOf(i+5), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true); addCell(datatable, String.valueOf(i+6), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true); addCell(datatable, String.valueOf(i+7), 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, smallFont, true); addCell(datatable, String.valueOf(i+8), 1, Rectangle.NO_BORDER, Element.ALIGN_RIGHT, smallFont, true); } PdfPCell dataCell = new PdfPCell(datatable); dataCell.setBorder(Rectangle.BOX); outlineTable.addCell(dataCell); PdfPCell searchCell = new PdfPCell(searchTable); searchCell.setVerticalAlignment(Element.ALIGN_TOP); PdfPCell outlineCell = new PdfPCell(outlineTable); outlineCell.setVerticalAlignment(Element.ALIGN_TOP); addCell(pageTable, searchCell, 1, Rectangle.NO_BORDER, Element.ALIGN_LEFT, null, null); addCell(pageTable, outlineCell, 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, null, null); PdfPCell pageCell = new PdfPCell(pageTable); pageCell.setVerticalAlignment(Element.ALIGN_TOP); addCell(splitTable, pageCell, 1, Rectangle.NO_BORDER, Element.ALIGN_CENTER, null, null); elementList.add(pageTable); // elementList.add(splitTable); return elementList; } public void onOpenDocument(PdfWriter writer, Document document) { total = writer.getDirectContent().createTemplate(100, 100); total.setBoundingBox(new Rectangle(-20, -20, 100, 100)); try { helv = BaseFont.createFont(BaseFont.HELVETICA, BaseFont.WINANSI, BaseFont.NOT_EMBEDDED); } catch (Exception e) { throw new ExceptionConverter(e); } } public void onEndPage(PdfWriter writer, Document document) { //TODO } 
public void onCloseDocument(PdfWriter writer, Document document) { total.beginText(); total.setFontAndSize(helv, 10); total.setTextMatrix(0, 0); total.showText(String.valueOf(writer.getPageNumber() - 1)); total.endText(); } private PdfPTable generateSearchFields(){ PdfPTable searchTable = new PdfPTable(2); for(int i = 0; i < 6; i++){ addCell(searchTable, "Search Key" +i, 1, Rectangle.NO_BORDER, Element.ALIGN_RIGHT, font2, MARGIN, true); addCell(searchTable, "Search Value +i", 1, Rectangle.NO_BORDER, Element.ALIGN_LEFT, smallFont, null, true); } return searchTable; } private void generateDatatableHeader(PdfPTable datatable) { if (datatableHeaderFields != null && datatableHeaderFields.length != 0) { for (int i = 0; i < datatableHeaderFields.length; i++) { addCell(datatable, datatableHeaderFields[i], 1, Rectangle.BOX, Element.ALIGN_CENTER, font2); } } } private PdfPCell addCell(PdfPTable table, String cellContent, int colspan, int cellBorder, int horizontalAlignment, Font font) { return addCell(table, cellContent, colspan, cellBorder, horizontalAlignment, font, null, null); } private PdfPCell addCell(PdfPTable table, String cellContent, int colspan, int cellBorder, int horizontalAlignment, Font font, Boolean noWrap) { return addCell(table, cellContent, colspan, cellBorder, horizontalAlignment, font, null, noWrap); } private PdfPCell addCell(PdfPTable table, String cellContent, Integer colspan, Integer cellBorder, Integer horizontalAlignment, Font font, Float paddingLeft, Boolean noWrap) { PdfPCell cell = new PdfPCell(new Phrase(cellContent, font)); return addCell(table, cell, colspan, cellBorder, horizontalAlignment, paddingLeft, noWrap); } private PdfPCell addCell(PdfPTable table, PdfPCell cell, int colspan, int cellBorder, int horizontalAlignment, Float paddingLeft, Boolean noWrap) { cell.setColspan(colspan); cell.setBorder(cellBorder); cell.setHorizontalAlignment(horizontalAlignment); if(paddingLeft != null){ cell.setPaddingLeft(paddingLeft); } if(noWrap != null){ cell.setNoWrap(noWrap); } table.addCell(cell); return cell; } } }

    Read the article

  • Mysql - help me optimize this query

    - by sandeepan-nath
    About the system: -The system has a total of 8 tables - Users - Tutor_Details (Tutors are a type of User,Tutor_Details table is linked to Users) - learning_packs, (stores packs created by tutors) - learning_packs_tag_relations, (holds tag relations meant for search) - tutors_tag_relations and tags and orders (containing purchase details of tutor's packs), order_details linked to orders and tutor_details. For a more clear idea about the tables involved please check the The tables section in the end. -A tags based search approach is being followed.Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searcheable). For details please check the section How tags work in this system? below. Following is a simpler representation (not the actual) of the more complex query which I am trying to optimize:- I have used statements like explanation of parts in the query select SUM(DISTINCT( t.tag LIKE "%Dictatorship%" )) as key_1_total_matches, SUM(DISTINCT( t.tag LIKE "%democracy%" )) as key_2_total_matches, td., u., count(distinct(od.id_od)), if (lp.id_lp > 0) then some conditional logic on lp fields else 0 as tutor_popularity from Tutor_Details AS td JOIN Users as u on u.id_user = td.id_user LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN `some other tables on lp.id_lp - let's call learning pack tables set (including Learning_Packs table)` LEFT JOIN Order_Details as od on td.id_tutor = od.id_author LEFT JOIN Orders as o on od.id_order = o.id_order LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag) where some condition on Users table's fields AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp 0)) THEN `some conditions on learning pack tables set` ELSE 1 END AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc 0)) THEN `some conditions on webclasses tables set` ELSE 1 END AND CASE WHEN (od.id_od0) THEN od.id_author = td.id_tutor and some conditions on Orders table's fields ELSE 1 END AND ( t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%") group by td.id_tutor HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 order by tutor_popularity desc, u.surname asc, u.name asc limit 0,20 ===================================================================== What does the above query do? Does AND logic search on the search keywords (2 in this example - "Democracy" and "Dictatorship"). Returns only those tutors for which both the keywords are present in the union of the two sets - tutors details and details of all the packs created by a tutor. To make things clear - Suppose a Tutor name "Sandeepan Nath" has created a pack "My first pack", then:- Searching "Sandeepan Nath" returns Sandeepan Nath. Searching "Sandeepan first" returns Sandeepan Nath. Searching "Sandeepan second" does not return Sandeepan Nath. ====================================================================================== The problem The results returned by the above query are correct (AND logic working as per expectation), but the time taken by the query on heavily loaded databases is like 25 seconds as against normal query timings of the order of 0.005 - 0.0002 seconds, which makes it totally unusable. 
It is possible that some of the delay is being caused because all the possible fields have not yet been indexed, but I would appreciate a better query as a solution, optimized as much as possible, displaying the same results ========================================================================================== How tags work in this system? When a tutor registers, tags are entered and tag relations are created with respect to tutor's details like name, surname etc. When a Tutors create packs, again tags are entered and tag relations are created with respect to pack's details like pack name, description etc. tag relations for tutors stored in tutors_tag_relations and those for packs stored in learning_packs_tag_relations. All individual tags are stored in tags table. ==================================================================== The tables Most of the following tables contain many other fields which I have omitted here. CREATE TABLE IF NOT EXISTS users ( id_user int(10) unsigned NOT NULL AUTO_INCREMENT, name varchar(100) NOT NULL DEFAULT '', surname varchar(155) NOT NULL DEFAULT '', PRIMARY KEY (id_user) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=636 ; CREATE TABLE IF NOT EXISTS tutor_details ( id_tutor int(10) NOT NULL AUTO_INCREMENT, id_user int(10) NOT NULL DEFAULT '0', PRIMARY KEY (id_tutor), KEY Users_FKIndex1 (id_user) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=51 ; CREATE TABLE IF NOT EXISTS orders ( id_order int(10) unsigned NOT NULL AUTO_INCREMENT, PRIMARY KEY (id_order), KEY Orders_FKIndex1 (id_user), ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=275 ; ALTER TABLE orders ADD CONSTRAINT Orders_ibfk_1 FOREIGN KEY (id_user) REFERENCES users (id_user) ON DELETE NO ACTION ON UPDATE NO ACTION; CREATE TABLE IF NOT EXISTS order_details ( id_od int(10) unsigned NOT NULL AUTO_INCREMENT, id_order int(10) unsigned NOT NULL DEFAULT '0', id_author int(10) NOT NULL DEFAULT '0', PRIMARY KEY (id_od), KEY Order_Details_FKIndex1 (id_order) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=284 ; ALTER TABLE order_details ADD CONSTRAINT Order_Details_ibfk_1 FOREIGN KEY (id_order) REFERENCES orders (id_order) ON DELETE NO ACTION ON UPDATE NO ACTION; CREATE TABLE IF NOT EXISTS learning_packs ( id_lp int(10) unsigned NOT NULL AUTO_INCREMENT, id_author int(10) unsigned NOT NULL DEFAULT '0', PRIMARY KEY (id_lp), KEY Learning_Packs_FKIndex2 (id_author), KEY id_lp (id_lp) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=23 ; CREATE TABLE IF NOT EXISTS tags ( id_tag int(10) unsigned NOT NULL AUTO_INCREMENT, tag varchar(255) DEFAULT NULL, PRIMARY KEY (id_tag), UNIQUE KEY tag (tag), KEY id_tag (id_tag), KEY tag_2 (tag), KEY tag_3 (tag) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3419 ; CREATE TABLE IF NOT EXISTS tutors_tag_relations ( id_tag int(10) unsigned NOT NULL DEFAULT '0', id_tutor int(10) DEFAULT NULL, KEY Tutors_Tag_Relations (id_tag), KEY id_tutor (id_tutor), KEY id_tag (id_tag) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; ALTER TABLE tutors_tag_relations ADD CONSTRAINT Tutors_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION; CREATE TABLE IF NOT EXISTS learning_packs_tag_relations ( id_tag int(10) unsigned NOT NULL DEFAULT '0', id_tutor int(10) DEFAULT NULL, id_lp int(10) unsigned DEFAULT NULL, KEY Learning_Packs_Tag_Relations_FKIndex1 (id_tag), KEY id_lp (id_lp), KEY id_tag (id_tag) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; ALTER TABLE learning_packs_tag_relations ADD CONSTRAINT 
Learning_Packs_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION;
===================================================================================
Following is the exact query (this also includes classes - tutors can create classes, and search terms are matched with classes created by tutors):
select count(distinct(od.id_od)) as tutor_popularity, CASE WHEN (IF((wc.id_wc > 0), ( wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND (wccp.country_code='IE' or wccp.country_code IN ('INT'))), 0)) THEN 1 ELSE 0 END as 'classes_published', CASE WHEN (IF((lp.id_lp > 0), (lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND (lpcp.country_code='IE' or lpcp.country_code IN ('INT'))),0)) THEN 1 ELSE 0 END as 'packs_published', td.*, u.* from Tutor_Details AS td JOIN Users as u on u.id_user = td.id_user LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN Learning_Packs_Categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat LEFT JOIN Learning_Packs_Categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent LEFT JOIN Learning_Pack_Content as lpct on (lp.id_lp = lpct.id_lp) LEFT JOIN Webclasses_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN WebClasses AS wc ON wtagrels.id_wc = wc.id_wc LEFT JOIN Learning_Packs_Categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat LEFT JOIN Learning_Packs_Categories AS wccp ON wccp.id_lp_cat = wcc.id_parent LEFT JOIN Order_Details as od on td.id_tutor = od.id_author LEFT JOIN Orders as o on od.id_order = o.id_order LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag) OR (t.id_tag = wtagrels.id_tag) where (u.country='IE' or u.country IN ('INT')) AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp > 0)) THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND (lpcp.country_code='IE' or lpcp.country_code IN ('INT')) ELSE 1 END AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc > 0)) THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND (wccp.country_code='IE' or wccp.country_code IN ('INT')) ELSE 1 END AND CASE WHEN (od.id_od > 0) THEN od.id_author = td.id_tutor and o.order_status = 'paid' and CASE WHEN (od.id_wc > 0) THEN od.can_attend_class=1 ELSE 1 END ELSE 1 END AND 1 group by td.id_tutor order by tutor_popularity desc, u.surname asc, u.name asc limit 0,20
Please note: the database structure provided above does not show all the fields and tables used in this query.
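    One direction worth experimenting with (a minimal sketch only, not the original query: it drops the popularity ordering, the orders/webclass logic and the country filters, and simply reuses the table names from the question) is to match the keywords once against the small tags table, test per tutor whether a matching tag is related through either relation table, and enforce the AND logic in the HAVING clause instead of joining every related row:

        -- sketch: AND-logic keyword match across tutor tags and pack tags
        select td.id_tutor, u.name, u.surname
        from Tutor_Details as td
        join Users as u on u.id_user = td.id_user
        join Tags as t on (t.tag like '%Dictatorship%' or t.tag like '%democracy%')
        where exists (select 1 from Tutors_Tag_Relations ttr
                      where ttr.id_tutor = td.id_tutor and ttr.id_tag = t.id_tag)
           or exists (select 1 from Learning_Packs_Tag_Relations lptr
                      where lptr.id_tutor = td.id_tutor and lptr.id_tag = t.id_tag)
        group by td.id_tutor, u.name, u.surname
        having sum(t.tag like '%Dictatorship%') > 0   -- at least one related tag matched keyword 1
           and sum(t.tag like '%democracy%') > 0      -- and at least one matched keyword 2
        order by u.surname asc, u.name asc
        limit 0, 20;

    Leading-wildcard LIKE patterns still cannot use the indexes on tags.tag, so if whole-word matches are acceptable a FULLTEXT index (or exact lookups against the tags table) would cut the work further; the sketch is only meant to show the shape - filter tags first, test membership per tutor, and do the AND check in HAVING rather than joining orders, packs and classes for every candidate row.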

    Read the article

  • Ant build classpath jar generates "error in opening zip file"

    - by Uberpuppy
    I have a project built in eclipse with a dependencies on 3rd party jars. I'm trying to generate a suitable build file for ant - using eclipses built-in export-ant buildfile feature as a starting block. When I run the build target I get the following error: [javac] error: error reading /base/repo/FabTrace/lib/apache/geronimo/specs/geronimo-j2ee-management_1.0_spec/1.0/geronimo-j2ee-management_1.0_spec-1.0.jar; error in opening zip file And the whole build file (auto-generated by eclipse) looks like this: (NB: the error above always references the first jar listed in the classpath) <project basedir="." default="build" name="FabTrace"> <property environment="env"/> <property name="ECLIPSE_HOME" value="/opt/apps/eclipse"/> <property name="debuglevel" value="source,lines,vars"/> <property name="target" value="1.5"/> <property name="source" value="1.5"/> <path id="JUnit 4.libraryclasspath"> <pathelement location="${ECLIPSE_HOME}/plugins/org.junit4_4.5.0.v20090824/junit.jar"/> <pathelement location="${ECLIPSE_HOME}/plugins/org.hamcrest.core_1.1.0.v20090501071000.jar"/> </path> <path id="FabTrace.classpath"> <pathelement location="bin"/> <pathelement location="lib/apache/geronimo/specs/geronimo-j2ee-management_1.0_spec/1.0/geronimo-j2ee-management_1.0_spec-1.0.jar"/> <pathelement location="lib/apache/geronimo/specs/geronimo-jms_1.1_spec/1.0/geronimo-jms_1.1_spec-1.0.jar"/> <pathelement location="lib/commons-collections/commons-collections/3.2/commons-collections-3.2.jar"/> <pathelement location="lib/commons-io/commons-io/1.4/commons-io-1.4.jar"/> <pathelement location="lib/commons-lang/commons-lang/2.1/commons-lang-2.1.jar"/> <pathelement location="lib/commons-logging/commons-logging/1.1/commons-logging-1.1.jar"/> <pathelement location="lib/commons-logging/commons-logging-api/1.1/commons-logging-api-1.1.jar"/> <pathelement location="lib/javax/activation/activation/1.1/activation-1.1.jar"/> <pathelement location="lib/javax/jms/jms/1.1/jms-1.1.jar"/> <pathelement location="lib/javax/mail/mail/1.4/mail-1.4.jar"/> <pathelement location="lib/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar"/> <pathelement location="lib/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar"/> <pathelement location="lib/junit/junit/4.4/junit-4.4.jar"/> <pathelement location="lib/log4j/log4j/1.2.15/log4j-1.2.15.jar"/> <pathelement location="lib/apache/camel/camel-jms-2.0-M1.jar"/> <pathelement location="lib/spring/spring-2.5.6.jar"/> <pathelement location="lib/apache/camel/camel-bundle-2.0-M1.jar"/> <pathelement location="lib/backport-util-concurrent/backport-util-concurrent-3.1.jar"/> <pathelement location="lib/commons-pool/commons-pool-1.4.jar"/> <pathelement location="lib/apache/camel/camel-activemq-1.1.0.jar"/> <pathelement location="lib/apache/activemq/activemq-camel-5.2.0.jar"/> <pathelement location="lib/jencks/jencks-2.2-all.jar"/> <pathelement location="lib/jencks/jencks-amqpool-2.2.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/activemq-all-5.3.1.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/xbean-spring-3.6.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/activemq-core-5.3.1.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/camel-jetty-2.2.0.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-6.1.9.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-util-6.1.9.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-xbean-6.1.9.jar"/> <pathelement 
location="lib/activemq/apache-activemq-5.3.1/lib/optional/activemq-optional-5.3.1.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/geronimo-servlet_2.5_spec-1.2.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-beans-2.5.6.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-context-2.5.6.jar"/> <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-core-2.5.6.jar"/> <path refid="JUnit 4.libraryclasspath"/> </path> <target name="init"> <mkdir dir="bin"/> <copy includeemptydirs="false" todir="bin"> <fileset dir="src/main/java"> <exclude name="**/*.launch"/> <exclude name="**/*.java"/> </fileset> </copy> <copy includeemptydirs="false" todir="bin"> <fileset dir="src/test/java"> <exclude name="**/*.launch"/> <exclude name="**/*.java"/> </fileset> </copy> <copy includeemptydirs="false" todir="bin"> <fileset dir="config"> <exclude name="**/*.launch"/> <exclude name="**/*.java"/> </fileset> </copy> </target> <target name="clean"> <delete dir="bin"/> </target> <target depends="clean" name="cleanall"/> <target depends="build-subprojects,build-project" name="build"/> <target name="build-subprojects"/> <target depends="init" name="build-project"> <echo message="${ant.project.name}: ${ant.file}"/> <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}"> <src path="src/main/java"/> <classpath refid="FabTrace.classpath"/> </javac> <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}"> <src path="src/test/java"/> <classpath refid="FabTrace.classpath"/> </javac> <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}"> <src path="config"/> <classpath refid="FabTrace.classpath"/> </javac> </target> </project> (I know there's eclipse specific stuff in here. But I get the same results with or without it.) I've done ye old google search and trawled around without success. I can confirm that all the jars do really exist. I've also tried from the commandline and as sudo - again, same results. Any help would be greatly appreciated. Cheers

    Read the article

  • how to put result in the end using jquery

    - by Martty Rox
    My code is below - HTML with the JS in the first section. Please check it out: I need to produce the result of the five-question form and display it in the last section of the page, and I am using innerHTML in plain JS to write the result into that last section. Where am I going wrong? <div id="section1"> <script type="text/javascript"> function changeText2(){ alert("working"); var count1=0; var a=document.forms["myForm"]["drop1"].value; var b=document.forms["myForm"]["drop2"].value; alert(document.forms["myForm"]["drop2"].value); var c=document.forms["myForm"]["drop3"].value; var d=document.forms["myForm"]["drop4"].value; var e=document.forms["myForm"]["drop5"].value; var f=document.forms["myForm"]["drop6"].value; if(a===2) { count1++; alert(count1); } else { alert("lit"); } if(b===2) { count1++; } else { alert("lit"); } if(c===2) { count1++; } else { alert("lit"); } if(d===2) { count1++; } else { alert("lit"); } if(e===2) { count1++; } else alert("lit"); } alert(count1); document.getElementById('boldStuff2').innerHTML = count1; </script> <form name="myForm"> <p>1)&#x00A0;&#x00A0;Who won the 1993 &#x201C;King of the Ring&#x201D;?</p> <div> <select id="f1" name="drop1"> <option value="0" selected="selected">-- Select -- </option> <option value="1">Owen Hart </option> <option value="2">Bret Hart </option> <option value="3">Edge </option> <option value="4">Mabel </option> </select> </div> <!--que1--> <p>2)&#x00A0;&#x00A0;What NHL goaltender has the most career wins?</p> <div> <select id="f2" name="drop2"> <option value="0" selected="selected">-- Select -- </option> <option value="1">Grant Fuhr </option> <option value="2">Patrick Roy </option> <option value="3">Chris Osgood </option> <option value="4">Mike Vernon</option> </select> </div> <!--que2--> <p>3)&#x00A0;&#x00A0;What Major League Baseball player holds the record for all-time career high batting average?</p> <div> <select id="f3" name="drop3"> <option value="0" selected="selected">-- Select -- </option> <option value="1">Ty Cobb </option> <option value="2">Larry Walker </option> <option value="3">Jeff Bagwell </option> <option value="4">Frank Thomas</option> </select> </div> <!--que3--> <p>4)&#x00A0;&#x00A0;Who among the following is NOT associated with billiards in India?</p> <div> <select id="f4" name="drop4" > <option value="0" selected="selected">-- Select -- </option> <option value="1">Subash Agrawal </option> <option value="2">Ashok Shandilya </option> <option value="3">Manoj Kothari </option> <option value="4">Mihir Sen</option> </select> </div> <!--que4--> <p>5)&#x00A0;&#x00A0;Which cricketer died on the field in Bangladesh while playing for Abahani Club?</p> <div> <select id="f5" name="drop5" > <option value="0" selected="selected">-- Select -- </option> <option value="1">Subhash Gupte</option> <option value="2">M.L.Jaisimha</option> <option value="3">Lala Amarnath</option> <option value="4">Raman Lamba</option> </select> </div> <!--que5--> <a href="#services" class="page_nav_btn next"><input type='button' onclick='changeText2()' value='NEXT'/></a> </form> </div> <div id="section2"> </div> ... <div id="results"> <b id='boldStuff2'>fff ggg</b> </div> I need to display the result of each section in the last div, as shown in the script. The JS for the first section is not working - please help me see where I am going wrong.
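    For comparison, here is a rough jQuery sketch of the scoring logic (not the original code; it assumes, as changeText2 does, that the correct answer is always the option with value "2"). Two things stand out in the posted script: select values come back as strings, so a strict comparison such as a===2 never matches and count1 never increments, and the closing brace after the last else ends the function early, so alert(count1) and the innerHTML assignment run once at page load instead of on the button click. A version along these lines avoids both problems:

        $(function () {
          // score the quiz when the NEXT button is pressed
          $('form[name="myForm"] input[type="button"]').click(function () {
            var count = 0;
            $('form[name="myForm"] select').each(function () {
              if ($(this).val() === "2") {   // compare strings, not numbers
                count++;
              }
            });
            $('#boldStuff2').text(count);    // write the score into the results div
          });
        });

    With the handler bound this way, the inline onclick='changeText2()' attribute is no longer needed.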

    Read the article

  • Flash and Google Maps - Only Last Icon showing

    - by Peter
    I have a simple Map and geocoding sample in Flash using CS4 The problem is simple - I can retrieve a short list from the google search api, but when I try to generate the icons on the map using a loop, only the last icon is displayed. (ignore the house icon, it is generated earlier) I feel I am missing something or made a stupid AS3 mistake (like treating it as if it was c#) - or even a stupid wood-for-the-trees mistake. The problem is in the last line of the code. I have added all my code just in case somebody else can find a use for it - lord knows it took me a great while to figure this out :) It runs here (also, if anybody has an idea why the icon is slightly in the wrong place on render, but corrects if you move the map - please let me know) Any help would be great. Thanks. P import com.google.maps.services.ClientGeocoder; import com.google.maps.services.GeocodingEvent; import com.google.maps.LatLng; import com.google.maps.Map; import com.google.maps.MapEvent; import com.google.maps.MapType; import com.google.maps.overlays.Marker; import com.google.maps.overlays.MarkerOptions; import com.google.maps.styles.FillStyle; import com.google.maps.styles.StrokeStyle; import com.google.maps.controls.* import com.google.maps.overlays.* import flash.display.Bitmap; import flash.display.BitmapData; import com.adobe.utils.StringUtil; import be.boulevart.google.ajaxapi.search.GoogleSearchResult; import be.boulevart.google.events.GoogleApiEvent; import be.boulevart.google.ajaxapi.search.local.GoogleLocalSearch; import be.boulevart.google.ajaxapi.search.local.data.GoogleLocalSearchItem; var strZip:String = new String(); strZip="60661"; var strAddress:String = new String(); strAddress ="100 W. Jackson Blvd, chicago, IL 60661"; var IconArray:Array = new Array; var SearchArray:Array = new Array; /*-------------------------------------------------------------- // The returned search data gets placed into this array ---------------------------------------------------------------*/ var LocalInfo:Array = new Array(); var intCount:int = new int; var intMapReady:int=0; /*=================================================================================== We load the map first and then get the search criteria - this will keep the order of operation clean. 
The ====================================================================================*/ var map:Map = new Map(); map.key = "ABQIAAAAHwSPp7Lhew456ffD6qa2WmxT_VwdLJEfmcCgytxKjcH1jLKkiihQtfC- TbcwryvBQYhRwHWa8F_Gp9Q"; map.setSize(new Point(600, 550)); map.addEventListener(MapEvent.MAP_READY, onMapReady); //Places the map on the page this.addChild(map); map.x=5; map.y=5; function onMapReady(event:Event):void { //Center the map and place the house marker doGeocode(); } /*========================================================================== Goecode to return the LAT and LONG for the specific address, center the map and add the house icon ===========================================================================*/ function doGeocode() { var geocoder:ClientGeocoder = new ClientGeocoder(); geocoder.addEventListener(GeocodingEvent.GEOCODING_SUCCESS, function(event:GeocodingEvent):void { var objPlacemarks:Array = event.response.placemarks; if (objPlacemarks.length > 0) { map.setCenter(objPlacemarks[0].point, 14, MapType.NORMAL_MAP_TYPE); var request:URLRequest = new URLRequest("house.png"); var imageLoader:Loader = new Loader(); imageLoader.load(request); var objMarkerOptions:MarkerOptions = new MarkerOptions(); objMarkerOptions.icon=imageLoader; objMarkerOptions.icon.scaleX=.15; objMarkerOptions.icon.scaleY=.15; objMarkerOptions.iconAlignment = MarkerOptions.ALIGN_HORIZONTAL_CENTER + MarkerOptions.ALIGN_VERTICAL_CENTER; var objMarker:Marker = new Marker(objPlacemarks[0].point, objMarkerOptions); map.addOverlay(objMarker); doLoadSearch() } }); //Failure code - good practice, really geocoder.addEventListener(GeocodingEvent.GEOCODING_FAILURE, function(event:GeocodingEvent):void { txtResult.appendText("Geocoding failed"); }); // generate geocode geocoder.geocode(strAddress); } /*=============================================================== XML Loader - loads icon file and search text pair from xml file =================================================================*/ function doLoadSearch() { var xmlLoader:URLLoader = new URLLoader(); var xmlData:XML = new XML(); xmlLoader.addEventListener(Event.COMPLETE, LoadXML); xmlLoader.load(new URLRequest("config.xml")); function LoadXML(e:Event):void { xmlData = new XML(e.target.data); RetrieveSearch(); } function RetrieveSearch() { //extract the MapData subset var xmlSearch = xmlData.MapData; // push this to an xml list object var xmlChildren:XMLList = xmlSearch.children(); //loop the list and extract the data into an //array of formatted search criteria for each (var Search:XML in xmlChildren) { txtResult.appendText("Searching For: "+Search.Criteria+" Icon=" + Search.Icon+ "Zip=" + strZip +"\r\n\r\n"); //retrieve search criteria loadLocalInfo(Search.Criteria,Search.Icon,strZip); } } } /*================================================================================== Search Functionality - does a google API search and loads the lats and longs required to place the icons on the map - THIS WILL NOT RUN LOCALLY ===================================================================================*/ function loadLocalInfo(strSearch,strIcon,strZip) { var objLocal:GoogleLocalSearch=new GoogleLocalSearch() objLocal.search(strSearch+" "+strZip,0,"0,0","","") objLocal.addEventListener(GoogleApiEvent.LOCAL_SEARCH_RESULT,onSearchComplete) function onSearchComplete(e:GoogleApiEvent):void { var resulta:GoogleSearchResult=e.data as GoogleSearchResult; //------------------------------------------------ // Load the icon for this particular search 
//------------------------------------------------ var request:URLRequest = new URLRequest(strIcon); var imageLoader:Loader = new Loader(); imageLoader.load(request); //------------------------------------------------------------- // For test purposes txtResult.appendText("Result Count for "+strSearch+" = "+e.data.results.length+"\r\n\r\n"); for each (var result:GoogleLocalSearchItem in e.data.results as Array) { LocalInfo[intCount]=[String(result.title),strIcon,String(result.latitude),String(result.longitude)]; //--------------------------------------- // Pop the icon onto the map //--------------------------------------- var objLatLng:LatLng = new LatLng(parseFloat(result.latitude), parseFloat(result.longitude)); var objMarkerOptions:MarkerOptions = new MarkerOptions(); objMarkerOptions.icon=imageLoader; objMarkerOptions.hasShadow=false; objMarkerOptions.iconAlignment = MarkerOptions.ALIGN_HORIZONTAL_CENTER + MarkerOptions.ALIGN_VERTICAL_CENTER; var objMarker:Marker = new Marker(objLatLng, objMarkerOptions); /********************************************************** *Everything* works to here - I have traced out execution and all variables. It only works on the last item in the array :( ***********************************************************/ map.addOverlay(objMarker); } } }
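    For reference, a Loader is a DisplayObject and can only sit in one place on the display list at a time, so reusing the single imageLoader created before the loop means each new marker takes the icon away from the previous one, and only the last marker ends up showing it. A sketch of the loop body with one Loader per marker (same names as the question, but not the original code):

        for each (var result:GoogleLocalSearchItem in e.data.results as Array)
        {
            var iconLoader:Loader = new Loader();            // fresh Loader per marker
            iconLoader.load(new URLRequest(strIcon));

            var objLatLng:LatLng = new LatLng(parseFloat(result.latitude),
                                              parseFloat(result.longitude));

            var objMarkerOptions:MarkerOptions = new MarkerOptions();
            objMarkerOptions.icon = iconLoader;
            objMarkerOptions.hasShadow = false;
            objMarkerOptions.iconAlignment = MarkerOptions.ALIGN_HORIZONTAL_CENTER
                                           + MarkerOptions.ALIGN_VERTICAL_CENTER;

            map.addOverlay(new Marker(objLatLng, objMarkerOptions));
        }

    The slight misplacement on first render is probably related: the icon has no measurable size until its load completes, so the marker is aligned around an empty image and only corrects itself on the next redraw; creating the marker from the Loader's Event.COMPLETE handler would be one way to confirm that.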

    Read the article

  • Android: restful API service

    - by Martyn
    Hey, I'm looking to make a service which I can use to make calls to a web based rest api. I've spent a couple of days looking through stackoverflow.com, reading books and looking at articles whilst playing about with some code and I can't get anything which I'm happy with. Basically I want to start a service on app init then I want to be able to ask that service to request a url and return the results. In the meantime I want to be able to display a progress window or something similar. I've created a service currently which uses IDL, I've read somewhere that you only really need this for cross app communication, so think these needs stripping out but unsure how to do callbacks without it. Also when I hit the post(Config.getURL("login"), values) the app seems to pause for a while (seems weird - thought the idea behind a service was that it runs on a different thread!) Currently I have a service with post and get http methods inside, a couple of AIDL files (for two way communication), a ServiceManager which deals with starting, stopping, binding etc to the service and I'm dynamically creating a Handler with specific code for the callbacks as needed. I don't want anyone to give me a complete code base to work on, but some pointers would be greatly appreciated; even if it's to say I'm doing it completely wrong. I'm pretty new to Android and Java dev so if there are any blindingly obvious mistakes here - please don't think I'm a rubbish developer, I'm just wet behind the ears and would appreciate being told exactly where I'm going wrong. Anyway, code in (mostly) full (really didn't want to put this much code here, but I don't know where I'm going wrong - apologies in advance): public class RestfulAPIService extends Service { final RemoteCallbackList<IRemoteServiceCallback> mCallbacks = new RemoteCallbackList<IRemoteServiceCallback>(); public void onStart(Intent intent, int startId) { super.onStart(intent, startId); } public IBinder onBind(Intent intent) { return binder; } public void onCreate() { super.onCreate(); } public void onDestroy() { super.onDestroy(); mCallbacks.kill(); } private final IRestfulService.Stub binder = new IRestfulService.Stub() { public void doLogin(String username, String password) { Message msg = new Message(); Bundle data = new Bundle(); HashMap<String, String> values = new HashMap<String, String>(); values.put("username", username); values.put("password", password); String result = post(Config.getURL("login"), values); data.putString("response", result); msg.setData(data); msg.what = Config.ACTION_LOGIN; mHandler.sendMessage(msg); } public void registerCallback(IRemoteServiceCallback cb) { if (cb != null) mCallbacks.register(cb); } }; private final Handler mHandler = new Handler() { public void handleMessage(Message msg) { // Broadcast to all clients the new value. 
final int N = mCallbacks.beginBroadcast(); for (int i = 0; i < N; i++) { try { switch (msg.what) { case Config.ACTION_LOGIN: mCallbacks.getBroadcastItem(i).userLogIn( msg.getData().getString("response")); break; default: super.handleMessage(msg); return; } } catch (RemoteException e) { } } mCallbacks.finishBroadcast(); } public String post(String url, HashMap<String, String> namePairs) {...} public String get(String url) {...} }; A couple of AIDL files: package com.something.android oneway interface IRemoteServiceCallback { void userLogIn(String result); } and package com.something.android import com.something.android.IRemoteServiceCallback; interface IRestfulService { void doLogin(in String username, in String password); void registerCallback(IRemoteServiceCallback cb); } and the service manager: public class ServiceManager { final RemoteCallbackList<IRemoteServiceCallback> mCallbacks = new RemoteCallbackList<IRemoteServiceCallback>(); public IRestfulService restfulService; private RestfulServiceConnection conn; private boolean started = false; private Context context; public ServiceManager(Context context) { this.context = context; } public void startService() { if (started) { Toast.makeText(context, "Service already started", Toast.LENGTH_SHORT).show(); } else { Intent i = new Intent(); i.setClassName("com.something.android", "com.something.android.RestfulAPIService"); context.startService(i); started = true; } } public void stopService() { if (!started) { Toast.makeText(context, "Service not yet started", Toast.LENGTH_SHORT).show(); } else { Intent i = new Intent(); i.setClassName("com.something.android", "com.something.android.RestfulAPIService"); context.stopService(i); started = false; } } public void bindService() { if (conn == null) { conn = new RestfulServiceConnection(); Intent i = new Intent(); i.setClassName("com.something.android", "com.something.android.RestfulAPIService"); context.bindService(i, conn, Context.BIND_AUTO_CREATE); } else { Toast.makeText(context, "Cannot bind - service already bound", Toast.LENGTH_SHORT).show(); } } protected void destroy() { releaseService(); } private void releaseService() { if (conn != null) { context.unbindService(conn); conn = null; Log.d(LOG_TAG, "unbindService()"); } else { Toast.makeText(context, "Cannot unbind - service not bound", Toast.LENGTH_SHORT).show(); } } class RestfulServiceConnection implements ServiceConnection { public void onServiceConnected(ComponentName className, IBinder boundService) { restfulService = IRestfulService.Stub.asInterface((IBinder) boundService); try { restfulService.registerCallback(mCallback); } catch (RemoteException e) {} } public void onServiceDisconnected(ComponentName className) { restfulService = null; } }; private IRemoteServiceCallback mCallback = new IRemoteServiceCallback.Stub() { public void userLogIn(String result) throws RemoteException { mHandler.sendMessage(mHandler.obtainMessage(Config.ACTION_LOGIN, result)); } }; private Handler mHandler; public void setHandler(Handler handler) { mHandler = handler; } } Service init and bind: // this I'm calling on app onCreate servicemanager = new ServiceManager(this); servicemanager.startService(); servicemanager.bindService(); application = (ApplicationState)this.getApplication(); application.setServiceManager(servicemanager); service function call: // this lot i'm calling as required - in this example for login progressDialog = new ProgressDialog(Login.this); progressDialog.setMessage("Logging you in..."); progressDialog.show(); application = 
(ApplicationState) getApplication(); servicemanager = application.getServiceManager(); servicemanager.setHandler(mHandler); try { servicemanager.restfulService.doLogin(args[0], args[1]); } catch (RemoteException e) { e.printStackTrace(); } ...later in the same file... Handler mHandler = new Handler() { public void handleMessage(Message msg) { switch (msg.what) { case Config.ACTION_LOGIN: if (progressDialog.isShowing()) { progressDialog.dismiss(); } try { ...process login results... } } catch (JSONException e) { Log.e("JSON", "There was an error parsing the JSON", e); } break; default: super.handleMessage(msg); } } }; Any and all help is greatly appreciated and I'll even buy you a coffee or a beer if you fancy :D Martyn
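    Two observations, followed by a hedged sketch of a simpler arrangement rather than a fix to the code above. AIDL is only needed when other processes must call the service; because this stub lives in the same process, doLogin() runs the HTTP post on the calling (UI) thread, which is the likely cause of the pause. For a single-app REST client, an IntentService plus a ResultReceiver gives a worker thread and a callback with none of the AIDL or ServiceManager plumbing. The names below (RestService, EXTRA_URL, doGet) are illustrative only, not from the question:

        // service side: IntentService queues each intent onto its own worker thread
        import android.app.IntentService;
        import android.content.Intent;
        import android.os.Bundle;
        import android.os.ResultReceiver;

        public class RestService extends IntentService {
            public static final String EXTRA_RECEIVER = "receiver";
            public static final String EXTRA_URL = "url";

            public RestService() {
                super("RestService");
            }

            @Override
            protected void onHandleIntent(Intent intent) {
                ResultReceiver receiver = intent.getParcelableExtra(EXTRA_RECEIVER);
                Bundle data = new Bundle();
                try {
                    // the question's post()/get() HTTP helpers would be called here
                    data.putString("response", doGet(intent.getStringExtra(EXTRA_URL)));
                    receiver.send(200, data);
                } catch (Exception e) {
                    receiver.send(500, data);
                }
            }

            private String doGet(String url) throws Exception {
                return "";   // placeholder for the real HTTP call
            }
        }

    On the activity side the progress dialog is shown, the intent is fired, and the dialog is dismissed in the callback - no binding and no handlers wired through a ServiceManager:

        // inside the Activity, e.g. in the login button's click handler
        ResultReceiver receiver = new ResultReceiver(new Handler()) {
            @Override
            protected void onReceiveResult(int resultCode, Bundle resultData) {
                progressDialog.dismiss();
                // ...parse resultData.getString("response") as before...
            }
        };
        Intent i = new Intent(this, RestService.class);
        i.putExtra(RestService.EXTRA_RECEIVER, receiver);
        i.putExtra(RestService.EXTRA_URL, Config.getURL("login"));
        startService(i);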

    Read the article

< Previous Page | 409 410 411 412 413 414 415 416 417 418 419 420  | Next Page >