Search Results

Search found 31989 results on 1280 pages for 'get method'.


  • Multi-threaded JOGL Problem

    - by moeabdol
    I'm writing a simple OpenGL application in Java that implements the Monte Carlo method for estimating the value of PI. The method is pretty easy. Simply, you draw a circle inside a unit square and then plot random points over the scene. Now, for each point that is inside the circle you increment the counter for in points. After determining for all the random points whether they are inside the circle or not, you divide the number of in points by the total number of points you have plotted, all multiplied by 4, to get an estimation of PI. It goes something like this: PI = (inPoints / totalPoints) * 4. This is because mathematically the ratio of a circle's area to a square's area is PI/4, so when we multiply it by 4 we get PI. My problem doesn't lie in the algorithm itself; however, I'm having problems trying to plot the points as they are being generated instead of just plotting everything at once when the program finishes executing. I want to give the application a sense of real-time display where the user would see the points as they are being plotted. I'm a beginner at OpenGL and I'm pretty sure there is a multi-threading feature built into it. Nonetheless, I tried to manually create my own thread. Each worker thread plots one point at a time. Following is the pseudo-code: /* this part of the code exists in display() method in MyCanvas.java which extends GLCanvas and implements GLEventListener */ // main loop for(int i = 0; i < number_of_points; i++){ RandomGenerator random = new RandomGenerator(); float x = random.nextFloat(); float y = random.nextFloat(); Thread pointThread = new Thread(new PointThread(x, y)); } gl.glFlush(); /* this part of the code exists in run() method in PointThread.java which implements Runnable */ void run(){ try{ gl.glPushMatrix(); gl.glBegin(GL2.GL_POINTS); if(pointIsIn) gl.glColor3f(1.0f, 0.0f, 0.0f); // red point else gl.glColor3f(0.0f, 0.0f, 1.0f); // blue point gl.glVertex3f(x, y, 0.0f); // coordinates gl.glEnd(); gl.glPopMatrix(); }catch(Exception e){ } } I'm not sure if my approach to solving this issue is correct. I hope you guys can help me out. Thanks.
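    As a rough sketch of the usual pattern rather than a fix to the exact code above: generate the random points on an ordinary worker thread and only store them, and keep every OpenGL call on the GL thread inside display(). Class and package names below assume JOGL 2 (javax.media.opengl); the canvas would be driven by an FPSAnimator or periodic repaint() calls so newly added points appear as they are generated.

        import java.util.List;
        import java.util.Random;
        import java.util.concurrent.CopyOnWriteArrayList;
        import javax.media.opengl.GL2;
        import javax.media.opengl.GLAutoDrawable;
        import javax.media.opengl.GLEventListener;

        public class MonteCarloRenderer implements GLEventListener {
            private final List<float[]> points = new CopyOnWriteArrayList<float[]>();

            // Worker thread: no OpenGL here, it only produces data.
            public void startGenerating(final int total) {
                new Thread(new Runnable() {
                    public void run() {
                        Random rnd = new Random();
                        for (int i = 0; i < total; i++) {
                            points.add(new float[] { rnd.nextFloat(), rnd.nextFloat() });
                        }
                    }
                }).start();
            }

            // GL thread: draws whatever has been generated so far.
            public void display(GLAutoDrawable drawable) {
                GL2 gl = drawable.getGL().getGL2();
                gl.glClear(GL2.GL_COLOR_BUFFER_BIT);
                gl.glBegin(GL2.GL_POINTS);
                for (float[] p : points) {
                    float dx = p[0] - 0.5f, dy = p[1] - 0.5f;
                    boolean inside = dx * dx + dy * dy <= 0.25f; // circle of radius 0.5 centred in the unit square
                    if (inside) gl.glColor3f(1f, 0f, 0f); else gl.glColor3f(0f, 0f, 1f);
                    gl.glVertex2f(p[0], p[1]);
                }
                gl.glEnd();
            }

            public void init(GLAutoDrawable drawable) { }
            public void reshape(GLAutoDrawable d, int x, int y, int w, int h) { }
            public void dispose(GLAutoDrawable drawable) { }
        }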

    Read the article

  • Is it possible to tell a search engine not to index a specific section of an HTML page? [closed]

    - by Justin
    Possible Duplicate: Preventing robots from crawling specific part of a page I know you can use robots.txt to ignore entire pages or sections of your site, but is there a way to tell crawlers like the Googlebot to ignore specific sections of an HTML page? I found this blog post that discusses one method, but it appears only to work for the Google Search Appliance, not the Googlebot. Is there some method to do this, at least for Google?

    Read the article

  • Physics Engine [Collision Response, 2-dimensional] experts, help!! My stack is unstable!

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver as described by Erin Catto (mentioned in the references of the paper as [Catt05]). So here's how it currently is: The simulation handles 2-dimensional rotating convex polygons. Detection uses the separating-axis test with a SKIN, meaning the closest points between two polygons are detected and a contact is determined if their distance is less than SKIN. To resolve collisions, the simultaneous impulse-based method is used, solved with an iterative solver (PGS) as in Erin Catto's paper. Error correction is implemented using Baumgarte's stabilization (you can refer to either paper for this) with J V = (beta/dt) * overlap, where J is the Jacobian for the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that should be < 1, dt the time step taken by the engine, and overlap the overlap between the bodies (true overlap, so SKIN is ignored). However, it is still less stable than I expected :s I tried to stack hexagons (or squares, doesn't really matter), and even with only 4 to 5 of them, they hardly stand still! Also note that I am not looking for a sleeping scheme, but I would settle for any explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state). Ideas I have: I would try adding a damping term (proportional to velocity) to the Baumgarte term. Is this a good idea in general? If not, I would not want to waste my time trying to tune the parameter hoping it magically works. Ideas I have tried: using simultaneous position-based error correction as described in section 5.3.2 of the paper, which turned out to be worse than the current scheme. If you want to know the parameters I used: hexagons with side 50 (pixels), gravity 2400 (pixels/sec^2), time step 1/60 (sec), beta 0.1, restitution 0 to 0.2, coefficient of friction 0.2, PGS iterations 10, initial separation 10 (pixels), mass 1 (unit is irrelevant for now, I modified velocity directly <- impulse method), inertia 1/1000. Thanks in advance! I really appreciate any help from you guys!! :)
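    For whatever it helps, here is a minimal sketch of the velocity-level bias term described above, with illustrative names rather than anything from the engine in question. Two details that often matter for stacking stability are applying the bias only to penetration beyond a small slop, and clamping the accumulated normal impulse (not the per-iteration increment) so contacts can push but never pull.

        public final class NormalImpulseSketch {
            // One PGS update for a single contact along its normal direction.
            // relVel     : relative velocity along the normal (negative = approaching)
            // normalMass : effective mass 1 / (J M^-1 J^T) for this contact
            // penetration: current overlap (SKIN excluded); beta and dt as in the question
            public static float solve(float accumulated, float relVel, float normalMass,
                                      float penetration, float beta, float dt, float slop) {
                // Baumgarte bias velocity: only correct overlap beyond the allowed slop.
                float bias = (beta / dt) * Math.max(0f, penetration - slop);
                // Impulse that drives the contact toward separating at the bias speed.
                float lambda = -(relVel - bias) * normalMass;
                // Clamp the accumulated impulse so the contact never becomes attractive.
                float newAccumulated = Math.max(0f, accumulated + lambda);
                return newAccumulated; // caller applies (newAccumulated - accumulated) to both bodies
            }
        }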

    Read the article

  • Finding "Stuff" In OUM

    - by Dave Burke
    One of the first questions people ask when they start using the Oracle Unified Method (OUM) is “how do I find X?” Well, of course no one is really looking for “X”!! But typically an OUM user might know the Task ID, or part of the Task Name, or maybe they just want to find out if there is any content within OUM that is related to a couple of key words they have in mind. Here are three quick tips I give people: 1. Open up one of the OUM Views, then click “Expand All”, and then use your Browser’s search function to locate a key word. For example, in Google Chrome or Internet Explorer: <CTRL> F, then type in a key word, e.g. Architecture. This is a fast and easy option to use, but it only searches the current OUM page. 2. Use the PDF view of OUM. Open up one of the OUM Views, and then click the PDF View button located at the top of the View. Depending on your Browser’s settings, the PDF file will either open up in a new Window, or be saved to your local machine. In either case, once the PDF file is open, you can use the built-in PDF search commands to search for key words across a large portion of the OUM Method Pack. This is a great option for searching the entire Full Method View of OUM, including linked HTML pages; however, the search will not include linked documents, e.g. Word or Excel files. 3. Use your operating system’s file index to search for key words. This is my favorite option, and one I use virtually every day. I happen to use Windows Search, but you could also use Google Desktop Search, or Finder on a Mac. All you need to do (on a Windows machine) is to make sure your local OUM folder structure is included in the Windows Index. Go to Control Panel, select Indexing Options, and ensure your OUM folder is included in the Index, i.e. C:/METHOD/OM40/OUM_5.6. Once your OUM folders are indexed, just open up Windows Search (or Google Desktop Search) and type in your key words, e.g. Unit Testing. The reason I use this option the most is because the search will take place across the entire content of the indexed folders, including linked files. Happy searching!

    Read the article

  • Where should you store variables for a search program in java?

    - by Bored915
    I'm wondering which is a more effective storage method in Java. Would it be better to save variables that will not change in a class or in a resource? They're going to be variables that contain a set amount of values, so that later a search program will go through the list of variables to recognize possible options. Also, if there is a more efficient method of storing them, please say so, or tell me if it doesn't matter where I store them.
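    A small sketch of both options with made-up names (the file name and key are not from the question): a final class of constants is compiled in and is the simplest to reference, while a .properties resource on the classpath can be changed without recompiling. For a fixed option list that the search program only reads, either works; the class is the usual choice unless non-programmers need to edit the values.

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        public final class SearchOptions {
            // Option 1: constants baked into the class at compile time.
            public static final String[] OPTIONS = { "alpha", "beta", "gamma" };

            // Option 2: the same list kept in a classpath resource, e.g. a file
            // searchoptions.properties containing a line like: options=alpha,beta,gamma
            public static String[] loadFromResource() throws IOException {
                Properties props = new Properties();
                InputStream in = SearchOptions.class.getResourceAsStream("/searchoptions.properties");
                if (in == null) {
                    throw new IOException("searchoptions.properties not found on classpath");
                }
                try {
                    props.load(in);
                } finally {
                    in.close();
                }
                return props.getProperty("options", "").split(",");
            }

            private SearchOptions() { }
        }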

    Read the article

  • What's New in ASP.NET 4

    - by Navaneeth
    The .NET Framework version 4 includes enhancements for ASP.NET 4 in targeted areas. Visual Studio 2010 and Microsoft Visual Web Developer Express also include enhancements and new features for improved Web development. This document provides an overview of many of the new features that are included in the upcoming release. This topic contains the following sections: ASP.NET Core Services ASP.NET Web Forms ASP.NET MVC Dynamic Data ASP.NET Chart Control Visual Web Developer Enhancements Web Application Deployment with Visual Studio 2010 Enhancements to ASP.NET Multi-Targeting ASP.NET Core Services ASP.NET 4 introduces many features that improve core ASP.NET services such as output caching and session state storage. Extensible Output Caching Since the time that ASP.NET 1.0 was released, output caching has enabled developers to store the generated output of pages, controls, and HTTP responses in memory. On subsequent Web requests, ASP.NET can serve content more quickly by retrieving the generated output from memory instead of regenerating the output from scratch. However, this approach has a limitation — generated content always has to be stored in memory. On servers that experience heavy traffic, the memory requirements for output caching can compete with memory requirements for other parts of a Web application. ASP.NET 4 adds extensibility to output caching that enables you to configure one or more custom output-cache providers. Output-cache providers can use any storage mechanism to persist HTML content. These storage options can include local or remote disks, cloud storage, and distributed cache engines. Output-cache provider extensibility in ASP.NET 4 lets you design more aggressive and more intelligent output-caching strategies for Web sites. For example, you can create an output-cache provider that caches the "Top 10" pages of a site in memory, while caching pages that get lower traffic on disk. Alternatively, you can cache every vary-by combination for a rendered page, but use a distributed cache so that the memory consumption is offloaded from front-end Web servers. You create a custom output-cache provider as a class that derives from the OutputCacheProvider type. You can then configure the provider in the Web.config file by using the new providers subsection of the outputCache element For more information and for examples that show how to configure the output cache, see outputCache Element for caching (ASP.NET Settings Schema). For more information about the classes that support caching, see the documentation for the OutputCache and OutputCacheProvider classes. By default, in ASP.NET 4, all HTTP responses, rendered pages, and controls use the in-memory output cache. The defaultProvider attribute for ASP.NET is AspNetInternalProvider. You can change the default output-cache provider used for a Web application by specifying a different provider name for defaultProvider attribute. In addition, you can select different output-cache providers for individual control and for individual requests and programmatically specify which provider to use. For more information, see the HttpApplication.GetOutputCacheProviderName(HttpContext) method. 
The easiest way to choose a different output-cache provider for different Web user controls is to do so declaratively by using the new providerName attribute in a page or control directive, as shown in the following example: <%@ OutputCache Duration="60" VaryByParam="None" providerName="DiskCache" %> Preloading Web Applications Some Web applications must load large amounts of data or must perform expensive initialization processing before serving the first request. In earlier versions of ASP.NET, for these situations you had to devise custom approaches to "wake up" an ASP.NET application and then run initialization code during the Application_Load method in the Global.asax file. To address this scenario, a new application preload manager (autostart feature) is available when ASP.NET 4 runs on IIS 7.5 on Windows Server 2008 R2. The preload feature provides a controlled approach for starting up an application pool, initializing an ASP.NET application, and then accepting HTTP requests. It lets you perform expensive application initialization prior to processing the first HTTP request. For example, you can use the application preload manager to initialize an application and then signal a load-balancer that the application was initialized and ready to accept HTTP traffic. To use the application preload manager, an IIS administrator sets an application pool in IIS 7.5 to be automatically started by using the following configuration in the applicationHost.config file: <applicationPools> <add name="MyApplicationPool" startMode="AlwaysRunning" /> </applicationPools> Because a single application pool can contain multiple applications, you specify individual applications to be automatically started by using the following configuration in the applicationHost.config file: <sites> <site name="MySite" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PrewarmMyCache" > <!-- Additional content --> </application> </site> </sites> <!-- Additional content --> <serviceAutoStartProviders> <add name="PrewarmMyCache" type="MyNamespace.CustomInitialization, MyLibrary" /> </serviceAutoStartProviders> When an IIS 7.5 server is cold-started or when an individual application pool is recycled, IIS 7.5 uses the information in the applicationHost.config file to determine which Web applications have to be automatically started. For each application that is marked for preload, IIS7.5 sends a request to ASP.NET 4 to start the application in a state during which the application temporarily does not accept HTTP requests. When it is in this state, ASP.NET instantiates the type defined by the serviceAutoStartProvider attribute (as shown in the previous example) and calls into its public entry point. You create a managed preload type that has the required entry point by implementing the IProcessHostPreloadClient interface, as shown in the following example: public class CustomInitialization : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // Perform initialization. } } After your initialization code runs in the Preload method and after the method returns, the ASP.NET application is ready to process requests. Permanently Redirecting a Page Content in Web applications is often moved over the lifetime of the application. This can lead to links to be out of date, such as the links that are returned by search engines. In ASP.NET, developers have traditionally handled requests to old URLs by using the Redirect method to forward a request to the new URL. 
However, the Redirect method issues an HTTP 302 (Found) response (which is used for a temporary redirect). This results in an extra HTTP round trip. ASP.NET 4 adds a RedirectPermanent helper method that makes it easy to issue HTTP 301 (Moved Permanently) responses, as in the following example: RedirectPermanent("/newpath/foroldcontent.aspx"); Search engines and other user agents that recognize permanent redirects will store the new URL that is associated with the content, which eliminates the unnecessary round trip made by the browser for temporary redirects. Session State Compression By default, ASP.NET provides two options for storing session state across a Web farm. The first option is a session state provider that invokes an out-of-process session state server. The second option is a session state provider that stores data in a Microsoft SQL Server database. Because both options store state information outside a Web application's worker process, session state has to be serialized before it is sent to remote storage. If a large amount of data is saved in session state, the size of the serialized data can become very large. ASP.NET 4 introduces a new compression option for both kinds of out-of-process session state providers. By using this option, applications that have spare CPU cycles on Web servers can achieve substantial reductions in the size of serialized session state data. You can set this option using the new compressionEnabled attribute of the sessionState element in the configuration file. When the compressionEnabled configuration option is set to true, ASP.NET compresses (and decompresses) serialized session state by using the .NET Framework GZipStream class. The following example shows how to set this attribute. <sessionState mode="SqlServer" sqlConnectionString="data source=dbserver;Initial Catalog=aspnetstate" allowCustomSqlDatabase="true" compressionEnabled="true" /> ASP.NET Web Forms Web Forms has been a core feature in ASP.NET since the release of ASP.NET 1.0. Many enhancements have been made in this area for ASP.NET 4, such as the following: The ability to set meta tags. More control over view state. Support for recently introduced browsers and devices. Easier ways to work with browser capabilities. Support for using ASP.NET routing with Web Forms. More control over generated IDs. The ability to persist selected rows in data controls. More control over rendered HTML in the FormView and ListView controls. Filtering support for data source controls. Enhanced support for Web standards and accessibility. Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties Two properties have been added to the Page class: MetaKeywords and MetaDescription. These two properties represent corresponding meta tags in the HTML rendered for a page, as shown in the following example: <head id="Head1" runat="server"> <title>Untitled Page</title> <meta name="keywords" content="keyword1, keyword2" /> <meta name="description" content="Description of my page" /> </head> These two properties work like the Title property does, and they can be set in the @ Page directive. For more information, see Page.MetaKeywords and Page.MetaDescription. Enabling View State for Individual Controls A new property has been added to the Control class: ViewStateMode. You can use this property to disable view state for all controls on a page except those for which you explicitly enable view state. 
View state data is included in a page's HTML and increases the amount of time it takes to send a page to the client and post it back. Storing more view state than is necessary can cause significant decrease in performance. In earlier versions of ASP.NET, you could reduce the impact of view state on a page's performance by disabling view state for specific controls. But sometimes it is easier to enable view state for a few controls that need it instead of disabling it for many that do not need it. For more information, see Control.ViewStateMode. Support for Recently Introduced Browsers and Devices ASP.NET includes a feature that is named browser capabilities that lets you determine the capabilities of the browser that a user is using. Browser capabilities are represented by the HttpBrowserCapabilities object which is stored in the HttpRequest.Browser property. Information about a particular browser's capabilities is defined by a browser definition file. In ASP.NET 4, these browser definition files have been updated to contain information about recently introduced browsers and devices such as Google Chrome, Research in Motion BlackBerry smart phones, and Apple iPhone. Existing browser definition files have also been updated. For more information, see How to: Upgrade an ASP.NET Web Application to ASP.NET 4 and ASP.NET Web Server Controls and Browser Capabilities. The browser definition files that are included with ASP.NET 4 are shown in the following list: •blackberry.browser •chrome.browser •Default.browser •firefox.browser •gateway.browser •generic.browser •ie.browser •iemobile.browser •iphone.browser •opera.browser •safari.browser A New Way to Define Browser Capabilities ASP.NET 4 includes a new feature referred to as browser capabilities providers. As the name suggests, this lets you build a provider that in turn lets you write custom code to determine browser capabilities. In ASP.NET version 3.5 Service Pack 1, you define browser capabilities in an XML file. This file resides in a machine-level folder or an application-level folder. Most developers do not need to customize these files, but for those who do, the provider approach can be easier than dealing with complex XML syntax. The provider approach makes it possible to simplify the process by implementing a common browser definition syntax, or a database that contains up-to-date browser definitions, or even a Web service for such a database. For more information about the new browser capabilities provider, see the What's New for ASP.NET 4 White Paper. Routing in ASP.NET 4 ASP.NET 4 adds built-in support for routing with Web Forms. Routing is a feature that was introduced with ASP.NET 3.5 SP1 and lets you configure an application to use URLs that are meaningful to users and to search engines because they do not have to specify physical file names. This can make your site more user-friendly and your site content more discoverable by search engines. For example, the URL for a page that displays product categories in your application might look like the following example: http://website/products.aspx?categoryid=12 By using routing, you can use the following URL to render the same information: http://website/products/software The second URL lets the user know what to expect and can result in significantly improved rankings in search engine results. the new features include the following: The PageRouteHandler class is a simple HTTP handler that you use when you define routes. You no longer have to write a custom route handler. 
The HttpRequest.RequestContext and Page.RouteData properties make it easier to access information that is passed in URL parameters. The RouteUrl expression provides a simple way to create a routed URL in markup. The RouteValue expression provides a simple way to extract URL parameter values in markup. The RouteParameter class makes it easier to pass URL parameter values to a query for a data source control (similar to FormParameter). You no longer have to change the Web.config file to enable routing. For more information about routing, see the following topics: ASP.NET Routing Walkthrough: Using ASP.NET Routing in a Web Forms Application How to: Define Routes for Web Forms Applications How to: Construct URLs from Routes How to: Access URL Parameters in a Routed Page Setting Client IDs The new ClientIDMode property makes it easier to write client script that references HTML elements rendered for server controls. Increasing use of Microsoft Ajax makes the need to do this more common. For example, you may have a data control that renders a long list of products with prices and you want to use client script to make a Web service call and update individual prices in the list as they change without refreshing the entire page. Typically you get a reference to an HTML element in client script by using the document.getElementById method. You pass to this method the value of the id attribute of the HTML element you want to reference. In the case of elements that are rendered for ASP.NET server controls, earlier versions of ASP.NET could make this difficult or impossible. You were not always able to predict what id values ASP.NET would generate, or ASP.NET could generate very long id values. The problem was especially difficult for data controls that would generate multiple rows for a single instance of the control in your markup. ASP.NET 4 adds two new algorithms for generating id attributes. These algorithms can generate id attributes that are easier to work with in client script because they are more predictable and simpler. For more information about how to use the new algorithms, see the following topics: ASP.NET Web Server Control Identification Walkthrough: Making Data-Bound Controls Easier to Access from JavaScript Walkthrough: Making Controls Located in Web User Controls Easier to Access from JavaScript How to: Access Controls from JavaScript by ID Persisting Row Selection in Data Controls The GridView and ListView controls enable users to select a row. In previous versions of ASP.NET, row selection was based on the row index on the page. For example, if you select the third item on page 1 and then move to page 2, the third item on page 2 is selected. In most cases, it is more desirable not to select any rows on page 2. ASP.NET 4 supports Persisted Selection, a new feature that was initially supported only in Dynamic Data projects in the .NET Framework 3.5 SP1. When this feature is enabled, the selected item is based on the row data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected. This is a much more natural behavior than the behavior in earlier versions of ASP.NET. Persisted selection is now supported for the GridView and ListView controls in all projects. 
You can enable this feature in the GridView control, for example, by setting the EnablePersistedSelection property, as shown in the following example: <asp:GridView id="GridView2" runat="server" EnablePersistedSelection="true"> </asp:GridView> FormView Control Enhancements The FormView control is enhanced to make it easier to style the content of the control with CSS. In previous versions of ASP.NET, the FormView control rendered its contents using an item template. This made styling more difficult in the markup because unexpected table row and table cell tags were rendered by the control. The FormView control supports RenderOuterTable, a new property in ASP.NET 4. When this property is set to false, as shown in the following example, the table tags are not rendered. This makes it easier to apply CSS style to the contents of the control. <asp:FormView ID="FormView1" runat="server" RenderOuterTable="false"> For more information, see FormView Web Server Control Overview. ListView Control Enhancements The ListView control, which was introduced in ASP.NET 3.5, has all the functionality of the GridView control while giving you complete control over the output. This control has been made easier to use in ASP.NET 4. The earlier version of the control required that you specify a layout template that contained a server control with a known ID. The following markup shows a typical example of how to use the ListView control in ASP.NET 3.5. <asp:ListView ID="ListView1" runat="server"> <LayoutTemplate> <asp:PlaceHolder ID="ItemPlaceHolder" runat="server"></asp:PlaceHolder> </LayoutTemplate> <ItemTemplate> <% Eval("LastName")%> </ItemTemplate> </asp:ListView> In ASP.NET 4, the ListView control does not require a layout template. The markup shown in the previous example can be replaced with the following markup: <asp:ListView ID="ListView1" runat="server"> <ItemTemplate> <% Eval("LastName")%> </ItemTemplate> </asp:ListView> For more information, see ListView Web Server Control Overview. Filtering Data with the QueryExtender Control A very common task for developers who create data-driven Web pages is to filter data. This traditionally has been performed by building Where clauses in data source controls. This approach can be complicated, and in some cases the Where syntax does not let you take advantage of the full functionality of the underlying database. To make filtering easier, a new QueryExtender control has been added in ASP.NET 4. This control can be added to EntityDataSource or LinqDataSource controls in order to filter the data returned by these controls. The QueryExtender control relies on LINQ, but you do not need to know how to write LINQ queries to use the query extender. The QueryExtender control supports a variety of filter options. The following lists the QueryExtender filter options: SearchExpression searches a field or fields for string values and compares them to a specified string value; RangeExpression searches a field or fields for values in a range specified by a pair of values; PropertyExpression compares a specified value to a property value in a field, and if the expression evaluates to true, the data that is being examined is returned; OrderByExpression sorts data by a specified column and sort direction; CustomExpression calls a function that defines a custom filter in the page. For more information, see the QueryExtender Web Server Control Overview. 
Enhanced Support for Web Standards and Accessibility Controls in earlier versions of ASP.NET sometimes render markup that does not conform to HTML, XHTML, or accessibility standards. ASP.NET 4 eliminates most of these exceptions. For details about how the HTML that is rendered by each control meets accessibility standards, see ASP.NET Controls and Accessibility. CSS for Controls that Can be Disabled In ASP.NET 3.5, when a control is disabled (see WebControl.Enabled), a disabled attribute is added to the rendered HTML element. For example, the following markup creates a Label control that is disabled: <asp:Label id="Label1" runat="server"   Text="Test" Enabled="false" /> In ASP.NET 3.5, the previous control settings generate the following HTML: <span id="Label1" disabled="disabled">Test</span> In HTML 4.01, the disabled attribute is not considered valid on span elements. It is valid only on input elements because it specifies that they cannot be accessed. On display-only elements such as span elements, browsers typically support rendering for a disabled appearance, but a Web page that relies on this non-standard behavior is not robust according to accessibility standards. For display-only elements, you should use CSS to indicate a disabled visual appearance. Therefore, by default ASP.NET 4 generates the following HTML for the control settings shown previously: <span id="Label1" class="aspNetDisabled">Test</span> You can change the value of the class attribute that is rendered by default when a control is disabled by setting the DisabledCssClass property. CSS for Validation Controls In ASP.NET 3.5, validation controls render a default color of red as an inline style. For example, the following markup creates a RequiredFieldValidator control: <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server"   ErrorMessage="Required Field" ControlToValidate="RadioButtonList1" /> ASP.NET 3.5 renders the following HTML for the validator control: <span id="RequiredFieldValidator1"   style="color:Red;visibility:hidden;">RequiredFieldValidator</span> By default, ASP.NET 4 does not render an inline style to set the color to red. An inline style is used only to hide or show the validator, as shown in the following example: <span id="RequiredFieldValidator1"   style="visibility:hidden;">RequiredFieldValidator</span> Therefore, ASP.NET 4 does not automatically show error messages in red. For information about how to use CSS to specify a visual style for a validation control, see Validating User Input in ASP.NET Web Pages. CSS for the Hidden Fields Div Element ASP.NET uses hidden fields to store state information such as view state and control state. These hidden fields are contained by a div element. In ASP.NET 3.5, this div element does not have a class attribute or an id attribute. Therefore, CSS rules that affect all div elements could unintentionally cause this div to be visible. To avoid this problem, ASP.NET 4 renders the div element for hidden fields with a CSS class that you can use to differentiate the hidden fields div from others. The new class value is shown in the following example: <div class="aspNetHidden"> CSS for the Table, Image, and ImageButton Controls By default, in ASP.NET 3.5, some controls set the border attribute of rendered HTML to zero (0). The following example shows HTML that is generated by the Table control in ASP.NET 3.5: <table id="Table2" border="0"> The Image control and the ImageButton control also do this. 
Because this is not necessary and provides visual formatting information that should be provided by using CSS, the attribute is not generated in ASP.NET 4. CSS for the UpdatePanel and UpdateProgress Controls In ASP.NET 3.5, the UpdatePanel and UpdateProgress controls do not support expando attributes. This makes it impossible to set a CSS class on the HTML elements that they render. In ASP.NET 4 these controls have been changed to accept expando attributes, as shown in the following example: <asp:UpdatePanel runat="server" class="myStyle"> </asp:UpdatePanel> The following HTML is rendered for this markup: <div id="ctl00_MainContent_UpdatePanel1" class="myStyle"> </div> Eliminating Unnecessary Outer Tables In ASP.NET 3.5, the HTML that is rendered for the following controls is wrapped in a table element whose purpose is to apply inline styles to the entire control: FormView Login PasswordRecovery ChangePassword If you use templates to customize the appearance of these controls, you can specify CSS styles in the markup that you provide in the templates. In that case, no extra outer table is required. In ASP.NET 4, you can prevent the table from being rendered by setting the new RenderOuterTable property to false. Layout Templates for Wizard Controls In ASP.NET 3.5, the Wizard and CreateUserWizard controls generate an HTML table element that is used for visual formatting. In ASP.NET 4 you can use a LayoutTemplate element to specify the layout. If you do this, the HTML table element is not generated. In the template, you create placeholder controls to indicate where items should be dynamically inserted into the control. (This is similar to how the template model for the ListView control works.) For more information, see the Wizard.LayoutTemplate property. New HTML Formatting Options for the CheckBoxList and RadioButtonList Controls ASP.NET 3.5 uses HTML table elements to format the output for the CheckBoxList and RadioButtonList controls. To provide an alternative that does not use tables for visual formatting, ASP.NET 4 adds two new options to the RepeatLayout enumeration: UnorderedList. This option causes the HTML output to be formatted by using ul and li elements instead of a table. OrderedList. This option causes the HTML output to be formatted by using ol and li elements instead of a table. For examples of HTML that is rendered for the new options, see the RepeatLayout enumeration. Header and Footer Elements for the Table Control In ASP.NET 3.5, the Table control can be configured to render thead and tfoot elements by setting the TableSection property of the TableHeaderRow class and the TableFooterRow class. In ASP.NET 4 these properties are set to the appropriate values by default. CSS and ARIA Support for the Menu Control In ASP.NET 3.5, the Menu control uses HTML table elements for visual formatting, and in some configurations it is not keyboard-accessible. ASP.NET 4 addresses these problems and improves accessibility in the following ways: The generated HTML is structured as an unordered list (ul and li elements). CSS is used for visual formatting. The menu behaves in accordance with ARIA standards for keyboard access. You can use arrow keys to navigate menu items. (For information about ARIA, see Accessibility in Visual Studio and ASP.NET.) ARIA role and property attributes are added to the generated HTML. (Attributes are added by using JavaScript instead of being included in the HTML, to avoid generating HTML that would cause markup validation errors.) 
Styles for the Menu control are rendered in a style block at the top of the page, instead of inline with the rendered HTML elements. If you want to use a separate CSS file so that you can modify the menu styles, you can set the Menu control's new IncludeStyleBlock property to false, in which case the style block is not generated. Valid XHTML for the HtmlForm Control In ASP.NET 3.5, the HtmlForm control (which is created implicitly by the <form runat="server"> tag) renders an HTML form element that has both name and id attributes. The name attribute is deprecated in XHTML 1.1. Therefore, this control does not render the name attribute in ASP.NET 4. Maintaining Backward Compatibility in Control Rendering An existing ASP.NET Web site might have code in it that assumes that controls are rendering HTML the way they do in ASP.NET 3.5. To avoid causing backward compatibility problems when you upgrade the site to ASP.NET 4, you can have ASP.NET continue to generate HTML the way it does in ASP.NET 3.5 after you upgrade the site. To do so, you can set the controlRenderingCompatibilityVersion attribute of the pages element to "3.5" in the Web.config file of an ASP.NET 4 Web site, as shown in the following example: <system.web>   <pages controlRenderingCompatibilityVersion="3.5"/> </system.web> If this setting is omitted, the default value is the same as the version of ASP.NET that the Web site targets. (For information about multi-targeting in ASP.NET, see .NET Framework Multi-Targeting for ASP.NET Web Projects.) ASP.NET MVC ASP.NET MVC helps Web developers build compelling standards-based Web sites that are easy to maintain because it decreases the dependency among application layers by using the Model-View-Controller (MVC) pattern. MVC provides complete control over the page markup. It also improves testability by inherently supporting Test Driven Development (TDD). Web sites created using ASP.NET MVC have a modular architecture. This allows members of a team to work independently on the various modules and can be used to improve collaboration. For example, developers can work on the model and controller layers (data and logic), while the designer work on the view (presentation). For tutorials, walkthroughs, conceptual content, code samples, and a complete API reference, see ASP.NET MVC 2. Dynamic Data Dynamic Data was introduced in the .NET Framework 3.5 SP1 release in mid-2008. This feature provides many enhancements for creating data-driven applications, such as the following: A RAD experience for quickly building a data-driven Web site. Automatic validation that is based on constraints defined in the data model. The ability to easily change the markup that is generated for fields in the GridView and DetailsView controls by using field templates that are part of your Dynamic Data project. For ASP.NET 4, Dynamic Data has been enhanced to give developers even more power for quickly building data-driven Web sites. For more information, see ASP.NET Dynamic Data Content Map. Enabling Dynamic Data for Individual Data-Bound Controls in Existing Web Applications You can use Dynamic Data features in existing ASP.NET Web applications that do not use scaffolding by enabling Dynamic Data for individual data-bound controls. Dynamic Data provides the presentation and data layer support for rendering these controls. When you enable Dynamic Data for data-bound controls, you get the following benefits: Setting default values for data fields. 
Dynamic Data enables you to provide default values at run time for fields in a data control. Interacting with the database without creating and registering a data model. Automatically validating the data that is entered by the user without writing any code. For more information, see Walkthrough: Enabling Dynamic Data in ASP.NET Data-Bound Controls. New Field Templates for URLs and E-mail Addresses ASP.NET 4 introduces two new built-in field templates, EmailAddress.ascx and Url.ascx. These templates are used for fields that are marked as EmailAddress or Url using the DataTypeAttribute attribute. For EmailAddress objects, the field is displayed as a hyperlink that is created by using the mailto: protocol. When users click the link, it opens the user's e-mail client and creates a skeleton message. Objects typed as Url are displayed as ordinary hyperlinks. The following example shows how to mark fields. [DataType(DataType.EmailAddress)] public object HomeEmail { get; set; } [DataType(DataType.Url)] public object Website { get; set; } Creating Links with the DynamicHyperLink Control Dynamic Data uses the new routing feature that was added in the .NET Framework 3.5 SP1 to control the URLs that users see when they access the Web site. The new DynamicHyperLink control makes it easy to build links to pages in a Dynamic Data site. For information, see How to: Create Table Action Links in Dynamic Data Support for Inheritance in the Data Model Both the ADO.NET Entity Framework and LINQ to SQL support inheritance in their data models. An example of this might be a database that has an InsurancePolicy table. It might also contain CarPolicy and HousePolicy tables that have the same fields as InsurancePolicy and then add more fields. Dynamic Data has been modified to understand inherited objects in the data model and to support scaffolding for the inherited tables. For more information, see Walkthrough: Mapping Table-per-Hierarchy Inheritance in Dynamic Data. Support for Many-to-Many Relationships (Entity Framework Only) The Entity Framework has rich support for many-to-many relationships between tables, which is implemented by exposing the relationship as a collection on an Entity object. New field templates (ManyToMany.ascx and ManyToMany_Edit.ascx) have been added to provide support for displaying and editing data that is involved in many-to-many relationships. For more information, see Working with Many-to-Many Data Relationships in Dynamic Data. New Attributes to Control Display and Support Enumerations The DisplayAttribute has been added to give you additional control over how fields are displayed. The DisplayNameAttribute attribute in earlier versions of Dynamic Data enabled you to change the name that is used as a caption for a field. The new DisplayAttribute class lets you specify more options for displaying a field, such as the order in which a field is displayed and whether a field will be used as a filter. The attribute also provides independent control of the name that is used for the labels in a GridView control, the name that is used in a DetailsView control, the help text for the field, and the watermark used for the field (if the field accepts text input). The EnumDataTypeAttribute class has been added to let you map fields to enumerations. When you apply this attribute to a field, you specify an enumeration type. Dynamic Data uses the new Enumeration.ascx field template to create UI for displaying and editing enumeration values. 
The template maps the values from the database to the names in the enumeration. Enhanced Support for Filters Dynamic Data 1.0 had built-in filters for Boolean columns and foreign-key columns. The filters did not let you specify the order in which they were displayed. The new DisplayAttribute attribute addresses this by giving you control over whether a column appears as a filter and in what order it will be displayed. An additional enhancement is that filtering support has been rewritten to use the new QueryExtender feature of Web Forms. This lets you create filters without requiring knowledge of the data source control that the filters will be used with. Along with these extensions, filters have also been turned into template controls, which lets you add new ones. Finally, the DisplayAttribute class mentioned earlier allows the default filter to be overridden, in the same way that UIHint allows the default field template for a column to be overridden. For more information, see Walkthrough: Filtering Rows in Tables That Have a Parent-Child Relationship and QueryableFilterRepeater. ASP.NET Chart Control The ASP.NET chart server control enables you to create ASP.NET pages and applications that have simple, intuitive charts for complex statistical or financial analysis. The chart control supports the following features: Data series, chart areas, axes, legends, labels, titles, and more. Data binding. Data manipulation, such as copying, splitting, merging, alignment, grouping, sorting, searching, and filtering. Statistical formulas and financial formulas. Advanced chart appearance, such as 3-D, anti-aliasing, lighting, and perspective. Events and customizations. Interactivity and Microsoft Ajax. Support for the Ajax Content Delivery Network (CDN), which provides an optimized way for you to add Microsoft Ajax Library and jQuery scripts to your Web applications. For more information, see Chart Web Server Control Overview. Visual Web Developer Enhancements The following sections provide information about enhancements and new features in Visual Studio 2010 and Visual Web Developer Express. The Web page designer in Visual Studio 2010 has been enhanced for better CSS compatibility, includes additional support for HTML and ASP.NET markup snippets, and features a redesigned version of IntelliSense for JScript. Improved CSS Compatibility The Visual Web Developer designer in Visual Studio 2010 has been updated to improve CSS 2.1 standards compliance. The designer better preserves HTML source code and is more robust than in previous versions of Visual Studio. HTML and JScript Snippets In the HTML editor, IntelliSense auto-completes tag names. The IntelliSense Snippets feature auto-completes whole tags and more. In Visual Studio 2010, IntelliSense snippets are supported for JScript, alongside C# and Visual Basic, which were supported in earlier versions of Visual Studio. Visual Studio 2010 includes over 200 snippets that help you auto-complete common ASP.NET and HTML tags, including required attributes (such as runat="server") and common attributes specific to a tag (such as ID, DataSourceID, ControlToValidate, and Text). You can download additional snippets, or you can write your own snippets that encapsulate the blocks of markup that you or your team use for common tasks. For more information on HTML snippets, see Walkthrough: Using HTML Snippets. JScript IntelliSense Enhancements In Visual Studio 2010, JScript IntelliSense has been redesigned to provide an even richer editing experience. 
IntelliSense now recognizes objects that have been dynamically generated by methods such as registerNamespace and by similar techniques used by other JavaScript frameworks. Performance has been improved to analyze large libraries of script and to display IntelliSense with little or no processing delay. Compatibility has been significantly increased to support almost all third-party libraries and to support diverse coding styles. Documentation comments are now parsed as you type and are immediately leveraged by IntelliSense. Web Application Deployment with Visual Studio 2010 For Web application projects, Visual Studio now provides tools that work with the IIS Web Deployment Tool (Web Deploy) to automate many processes that had to be done manually in earlier versions of ASP.NET. For example, the following tasks can now be automated: Creating an IIS application on the destination computer and configuring IIS settings. Copying files to the destination computer. Changing Web.config settings that must be different in the destination environment. Propagating changes to data or data structures in SQL Server databases that are used by the Web application. For more information about Web application deployment, see ASP.NET Deployment Content Map. Enhancements to ASP.NET Multi-Targeting ASP.NET 4 adds new features to the multi-targeting feature to make it easier to work with projects that target earlier versions of the .NET Framework. Multi-targeting was introduced in ASP.NET 3.5 to enable you to use the latest version of Visual Studio without having to upgrade existing Web sites or Web services to the latest version of the .NET Framework. In Visual Studio 2008, when you work with a project targeted for an earlier version of the .NET Framework, most features of the development environment adapt to the targeted version. However, IntelliSense displays language features that are available in the current version, and property windows display properties available in the current version. In Visual Studio 2010, only language features and properties available in the targeted version of the .NET Framework are shown. For more information about multi-targeting, see the following topics: .NET Framework Multi-Targeting for ASP.NET Web Projects ASP.NET Side-by-Side Execution Overview How to: Host Web Applications That Use Different Versions of the .NET Framework on the Same Server How to: Deploy Web Site Projects Targeted for Earlier Versions of the .NET Framework

    Read the article

  • Fix ACPI DSDT of Amilo Pa 1538

    - by kayahr
    I have a Fujitsu-Siemens Amilo Pa 1538 laptop and installed Ubuntu 10.04 on it (Kernel 2.6.32). The main problem is that the fans of the notebook are always on even when the temperature is low. Another problem is that the display brightness cannot be adjusted. For some years I have used Dell laptops and never experienced any ACPI problems, so I thought it was no longer necessary to fix ACPI tables, but now I have this crap of a laptop and I think I have to do it. Unfortunately I need some help repairing the DSDT. The dsdt.dat and dsdt.dsl of the laptop can be found here: http://www.ailis.de/~k/permdata/20100420/dsdt/ Compiling the DSDT gives the following output: # iasl -sa dsdt.dsl Intel ACPI Component Architecture ASL Optimizing Compiler version 20090521 [Jun 30 2009] Copyright (C) 2000 - 2009 Intel Corporation Supports ACPI Specification Revision 3.0a dsdt.dsl 81: Method (\_WAK, 1, NotSerialized) Warning 1080 - ^ Reserved method must return a value (_WAK) dsdt.dsl 207: Method (_L10, 0, NotSerialized) Warning 1087 - ^ Not all control paths return a value (_L10) dsdt.dsl 2861: Method (NVIF, 3, NotSerialized) Warning 1087 - ^ Not all control paths return a value (NVIF) dsdt.dsl 4551: Store (\_SB.PCI0.LPC0.PMRD (0xFA), Local0) Warning 1099 - Statement is unreachable ^ ASL Input: dsdt.dsl - 4962 lines, 162828 bytes, 2300 keywords AML Output: dsdt.aml - 17627 bytes, 591 named objects, 1709 executable opcodes Compilation complete. 0 Errors, 4 Warnings, 0 Remarks, 519 Optimizations Anyone here with DSDT experience who can help me fix the DSDT?

    Read the article

  • SSH hangs without password prompt

    - by Wilco
    Just reinstalled OS X and for some reason I now cannot connect to a specific machine on my local network via SSH. I can SSH to other machines on the network without any problems, and other machines can SSH to the problematic one as well. I'm not sure where to start looking for problems - can anyone point me in the right direction? Here's a dump of a connection attempt: OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to 10.0.1.7 [10.0.1.7] port 22. debug1: Connection established. debug1: identity file /Users/nwilliams/.ssh/identity type -1 debug1: identity file /Users/nwilliams/.ssh/id_rsa type -1 debug1: identity file /Users/nwilliams/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_4.5 debug1: match: OpenSSH_4.5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-cbc hmac-md5 none debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '10.0.1.7' is known and matches the RSA host key. debug1: Found key in /Users/nwilliams/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive debug1: Next authentication method: gssapi-keyex debug1: No valid Key exchange context debug1: Next authentication method: gssapi-with-mic ... at this point it hangs for quite a while, and then resumes ... debug1: Unspecified GSS failure. Minor code may provide more information Server not found in Kerberos database debug1: Unspecified GSS failure. Minor code may provide more information Server not found in Kerberos database debug1: Unspecified GSS failure. Minor code may provide more information debug1: Next authentication method: publickey debug1: Trying private key: /Users/nwilliams/.ssh/identity debug1: Trying private key: /Users/nwilliams/.ssh/id_rsa debug1: Trying private key: /Users/nwilliams/.ssh/id_dsa debug1: Next authentication method: keyboard-interactive
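    For what it's worth, the long pause sits between "Next authentication method: gssapi-with-mic" and the "Server not found in Kerberos database" errors, so the delay looks like GSSAPI/Kerberos lookups timing out rather than SSH itself. One client-side way to test that theory (an assumption, not a confirmed diagnosis):

        # one-off test
        ssh -o GSSAPIAuthentication=no 10.0.1.7
        # or permanently, in ~/.ssh/config on the client
        Host 10.0.1.7
            GSSAPIAuthentication no

    If the password prompt then appears without the delay, the options are keeping that setting or fixing the DNS/Kerberos configuration the GSSAPI step depends on.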

    Read the article

  • Problems with X11GraphicsDevice on Suse 11

    - by Daniel
    Hi, On servers running Suse 11 I'm experiencing hangups in sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method) when connecting via Citrix (and setting DISPLAY to localhost:11.0). Running exactly the same code in exactly the same environment, except through Exceed (with DISPLAY set to my workstation's IP), it runs like clockwork. The error is not intermittent; it happens every time. Reinstalling the OS does not help. I cannot reproduce it on Suse 10. This is what the main thread stack looks like: [junit] "main" prio=10 tid=0x0000000040112000 nid=0x6acc runnable [0x00002b9f909ae000] [junit] java.lang.Thread.State: RUNNABLE [junit] at sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method) [junit] at sun.awt.X11GraphicsDevice.makeDefaultConfiguration(X11GraphicsDevice.java:208) [junit] at sun.awt.X11GraphicsDevice.getDefaultConfiguration(X11GraphicsDevice.java:182) [junit] - locked <0x00002b9fed6b8e70> (a java.lang.Object) [junit] at sun.awt.X11.XToolkit.<init>(XToolkit.java:92) [junit] at java.lang.Class.forName0(Native Method) [junit] at java.lang.Class.forName(Class.java:169) [junit] at java.awt.Toolkit$2.run(Toolkit.java:834) [junit] at java.security.AccessController.doPrivileged(Native Method) [junit] at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:826) [junit] - locked <0x00002b9f94b8ada0> (a java.lang.Class for java.awt.Toolkit) [junit] at java.awt.Toolkit.getEventQueue(Toolkit.java:1676) [junit] at java.awt.EventQueue.invokeLater(EventQueue.java:954) [junit] at javax.swing.SwingUtilities.invokeLater(SwingUtilities.java:1264) ... Has anyone experienced something similar? Could this be a problem in Suse 11's display handling? I'm thankful for any input at this point - I'm fresh out of ideas :)
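    One low-risk check, assuming nothing about the root cause: getDoubleBufferVisuals performs a round trip to the X server behind the forwarded display, so it is worth confirming that the display exported through the Citrix session actually answers and what extensions it advertises. From the same shell that runs the JUnit job:

        xdpyinfo -display localhost:11.0 | head
        xdpyinfo -display localhost:11.0 | grep -i double

    If xdpyinfo itself stalls, or the extension list differs from the working Exceed session, the problem is in the forwarded X display rather than in the Java code. And if these tests never open real windows, running them with -Djava.awt.headless=true may avoid touching the X display at all.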

    Read the article

  • another "SSH connect to host github.com port 22: Bad file number"

    - by Mariusz
    Hello. I have a problem with my first-time ssh connection. Yes, I've already gone through your guides, already tried your "Dealing with firewalls and proxies" article, and the problem is still occurring. I am using Win7 32bit, Windows Firewall is disabled, I don't have any third-party firewalls, ESET Nod32 Antivirus is not blocking any ports, and I am not using any PROXY (not even a local proxy). Here are the logs: Ordinary SSH connection try C:\Users\Mariusz>ssh -vvv [email protected] OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007 debug2: ssh_connect: needpriv 0 debug1: Connecting to github.com [207.97.227.239] port 22. debug1: connect to address 207.97.227.239 port 22: Not owner ssh: connect to host github.com port 22: Bad file number NCAT connection try C:\Users\Mariusz>ncat github.com 22 Strange connect error from 207.97.227.239 (10013): No error 10013 = WSAEACCES I think the method called "smart-http-support" won't be usable for me because I haven't created the repo yet. I have just run GIT INIT locally, and finished at the GIT PUSH step, which returns the same: ssh: connect to host github.com port 22: Bad file number fatal: The remote end hung up unexpectedly The corkscrew method (first article from your guide): while PuTTYing (with pageant in the background), after inputting the login, an error occurs (MessageBox): Disconnected: No supported authentication methods available And in the terminal this message is printed out: Server refused our key I have generated the key correctly, using ssh-keygen. I have not tried the method of editing ~/.ssh/config yet, because I thought that since I haven't PUSHed anything to my remote repo, I won't be able to CLONE anything. The method called ssh-forwarding is not for me, because it "requires access to an external ssh server" and I don't have one at this time. What else could I do? Thanks in advance for any help. Mariusz.
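    Since both ssh and ncat fail on the outbound connection itself (10013/WSAEACCES usually means something on the local machine or network is forbidding the connection), one commonly used workaround is GitHub's SSH endpoint on port 443. A sketch of the ~/.ssh/config entry, assuming outbound port 443 is open:

        Host github.com
            Hostname ssh.github.com
            Port 443
            User git

    After saving it, retry the push. If 443 is blocked as well, the remaining options are an HTTPS remote URL (the "smart-http-support" route) or a proxy tunnel such as the corkscrew setup already mentioned.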

    Read the article

  • Start & shutdown tomcat as non-root user

    - by user53864
    How do I start, shut down and restart Tomcat as a non-root user? I have Tomcat 5.x installed on an Ubuntu machine (/usr/share/tomcat), and the Tomcat directory has full permissions. When I shut down (/usr/share/tomcat/bin/shutdown.sh) or start up (/usr/share/tomcat/bin/startup.sh) Tomcat as a normal user, it appears to start and shut down normally, just as if it were executed as root, but I cannot access the web page even after starting Tomcat as the non-root user.
    user1@station2:/usr/share/tomcat/webapps$ ../bin/shutdown.sh
    Using CATALINA_BASE: /usr/share/tomcat
    Using CATALINA_HOME: /usr/share/tomcat
    Using CATALINA_TMPDIR: /usr/share/tomcat/temp
    Using JRE_HOME: /usr
    4 Oct, 2010 6:41:11 PM org.apache.catalina.startup.Catalina stopServer
    SEVERE: Catalina.stop:
    java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:310)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:176)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:163)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
        at java.net.Socket.connect(Socket.java:546)
        at java.net.Socket.connect(Socket.java:495)
        at java.net.Socket.<init>(Socket.java:392)
        at java.net.Socket.<init>(Socket.java:206)
        at org.apache.catalina.startup.Catalina.stopServer(Catalina.java:421)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:337)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:415)
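    On the "Connection refused" above: shutdown.sh simply opens a socket to Tomcat's shutdown port (8005 by default, configured in conf/server.xml), so that exception means no Tomcat JVM was listening there - the instance either never started or had already stopped. A rough checklist, assuming a dedicated tomcat account and the paths from the question (the account name, ports and log file are assumptions to adapt):

        # run the scripts as the unprivileged account instead of your login user
        sudo -u tomcat /usr/share/tomcat/bin/startup.sh
        sudo -u tomcat /usr/share/tomcat/bin/shutdown.sh

        # confirm the JVM is actually running and listening on the HTTP and shutdown ports
        ps -ef | grep [c]atalina
        netstat -lnp | grep -E ':8080|:8005'

        # if startup fails silently, the reason is normally in the logs
        tail -n 50 /usr/share/tomcat/logs/catalina.out

    Keep in mind that a non-root user cannot bind ports below 1024, so the HTTP connector has to stay on 8080 (or another high port) unless a root-owned proxy or port redirect sits in front of it.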

    Read the article

  • Sent Item code in Java

    - by Farhan Khan
    I need urgent help, if any one can resolve my issue it will be very highly appriciated. I have create a SMS composer on jAVA netbians its on urdu language. the probelm is its not saving sent sms on Sent items.. i have tried my best to make the code but failed. Tomorrow is my last day to present the code on university please help me please below is the code that i have made till now. Please any one.... /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package newSms; import javax.microedition.io.Connector; import javax.microedition.midlet.*; import javax.microedition.lcdui.*; import javax.wireless.messaging.MessageConnection; import javax.wireless.messaging.TextMessage; import org.netbeans.microedition.util.SimpleCancellableTask; /** * @author AHTISHAM */ public class composeurdu extends MIDlet implements CommandListener, ItemCommandListener, ItemStateListener { private boolean midletPaused = false; private boolean isUrdu; String numb=" "; Alert alert; //<editor-fold defaultstate="collapsed" desc=" Generated Fields ">//GEN-BEGIN:|fields|0| private Form form; private TextField number; private TextField textUrdu; private StringItem stringItem; private StringItem send; private Command exit; private Command sendMesg; private Command add; private Command urdu; private Command select; private SimpleCancellableTask task; //</editor-fold>//GEN-END:|fields|0| MessageConnection clientConn; private Display display; public composeurdu() { display = Display.getDisplay(this); } private void showMessage(){ display=Display.getDisplay(this); //numb=number.getString(); if(number.getString().length()==0 || textUrdu.getString().length()==0){ Alert alert=new Alert("error "); alert.setString(" Enter phone number"); alert.setTimeout(5000); display.setCurrent(alert); } else if(number.getString().length()>11){ Alert alert=new Alert("error "); alert.setString("invalid number"); alert.setTimeout(5000); display.setCurrent(alert); } else{ Alert alert=new Alert("error "); alert.setString("success"); alert.setTimeout(5000); display.setCurrent(alert); } } void showMessage(String message, Displayable displayable) { Alert alert = new Alert(""); alert.setTitle("Error"); alert.setString(message); alert.setType(AlertType.ERROR); alert.setTimeout(5000); display.setCurrent(alert, displayable); } //<editor-fold defaultstate="collapsed" desc=" Generated Methods ">//GEN-BEGIN:|methods|0| //</editor-fold>//GEN-END:|methods|0| //<editor-fold defaultstate="collapsed" desc=" Generated Method: initialize ">//GEN-BEGIN:|0-initialize|0|0-preInitialize /** * Initializes the application. It is called only once when the MIDlet is * started. The method is called before the * <code>startMIDlet</code> method. */ private void initialize() {//GEN-END:|0-initialize|0|0-preInitialize // write pre-initialize user code here //GEN-LINE:|0-initialize|1|0-postInitialize // write post-initialize user code here }//GEN-BEGIN:|0-initialize|2| //</editor-fold>//GEN-END:|0-initialize|2| //<editor-fold defaultstate="collapsed" desc=" Generated Method: startMIDlet ">//GEN-BEGIN:|3-startMIDlet|0|3-preAction /** * Performs an action assigned to the Mobile Device - MIDlet Started point. 
*/ public void startMIDlet() {//GEN-END:|3-startMIDlet|0|3-preAction // write pre-action user code here switchDisplayable(null, getForm());//GEN-LINE:|3-startMIDlet|1|3-postAction // write post-action user code here form.setCommandListener(this); form.setItemStateListener(this); }//GEN-BEGIN:|3-startMIDlet|2| //</editor-fold>//GEN-END:|3-startMIDlet|2| //<editor-fold defaultstate="collapsed" desc=" Generated Method: resumeMIDlet ">//GEN-BEGIN:|4-resumeMIDlet|0|4-preAction /** * Performs an action assigned to the Mobile Device - MIDlet Resumed point. */ public void resumeMIDlet() {//GEN-END:|4-resumeMIDlet|0|4-preAction // write pre-action user code here //GEN-LINE:|4-resumeMIDlet|1|4-postAction // write post-action user code here }//GEN-BEGIN:|4-resumeMIDlet|2| //</editor-fold>//GEN-END:|4-resumeMIDlet|2| //<editor-fold defaultstate="collapsed" desc=" Generated Method: switchDisplayable ">//GEN-BEGIN:|5-switchDisplayable|0|5-preSwitch /** * Switches a current displayable in a display. The * <code>display</code> instance is taken from * <code>getDisplay</code> method. This method is used by all actions in the * design for switching displayable. * * @param alert the Alert which is temporarily set to the display; if * <code>null</code>, then * <code>nextDisplayable</code> is set immediately * @param nextDisplayable the Displayable to be set */ public void switchDisplayable(Alert alert, Displayable nextDisplayable) {//GEN-END:|5-switchDisplayable|0|5-preSwitch // write pre-switch user code here Display display = getDisplay();//GEN-BEGIN:|5-switchDisplayable|1|5-postSwitch if (alert == null) { display.setCurrent(nextDisplayable); } else { display.setCurrent(alert, nextDisplayable); }//GEN-END:|5-switchDisplayable|1|5-postSwitch // write post-switch user code here }//GEN-BEGIN:|5-switchDisplayable|2| //</editor-fold>//GEN-END:|5-switchDisplayable|2| //<editor-fold defaultstate="collapsed" desc=" Generated Method: commandAction for Displayables ">//GEN-BEGIN:|7-commandAction|0|7-preCommandAction /** * Called by a system to indicated that a command has been invoked on a * particular displayable. 
* * @param command the Command that was invoked * @param displayable the Displayable where the command was invoked */ public void commandAction(Command command, Displayable displayable) {//GEN-END:|7-commandAction|0|7-preCommandAction // write pre-action user code here if (displayable == form) {//GEN-BEGIN:|7-commandAction|1|16-preAction if (command == exit) {//GEN-END:|7-commandAction|1|16-preAction // write pre-action user code here exitMIDlet();//GEN-LINE:|7-commandAction|2|16-postAction // write post-action user code here } else if (command == sendMesg) {//GEN-LINE:|7-commandAction|3|18-preAction // write pre-action user code here String mno=number.getString(); String msg=textUrdu.getString(); if(mno.equals("")) { alert = new Alert("Alert"); alert.setString("Enter Mobile Number!!!"); alert.setTimeout(2000); display.setCurrent(alert); } else { try { clientConn=(MessageConnection)Connector.open("sms://"+mno); } catch(Exception e) { alert = new Alert("Alert"); alert.setString("Unable to connect to Station because of network problem"); alert.setTimeout(2000); display.setCurrent(alert); } try { TextMessage textmessage = (TextMessage) clientConn.newMessage(MessageConnection.TEXT_MESSAGE); textmessage.setAddress("sms://"+mno); textmessage.setPayloadText(msg); clientConn.send(textmessage); } catch(Exception e) { Alert alert=new Alert("Alert","",null,AlertType.INFO); alert.setTimeout(Alert.FOREVER); alert.setString("Unable to send"); display.setCurrent(alert); } } //GEN-LINE:|7-commandAction|4|18-postAction // write post-action user code here }//GEN-BEGIN:|7-commandAction|5|7-postCommandAction }//GEN-END:|7-commandAction|5|7-postCommandAction // write post-action user code here }//GEN-BEGIN:|7-commandAction|6| //</editor-fold>//GEN-END:|7-commandAction|6| //<editor-fold defaultstate="collapsed" desc=" Generated Method: commandAction for Items ">//GEN-BEGIN:|8-itemCommandAction|0|8-preItemCommandAction /** * Called by a system to indicated that a command has been invoked on a * particular item. 
* * @param command the Command that was invoked * @param displayable the Item where the command was invoked */ public void commandAction(Command command, Item item) {//GEN-END:|8-itemCommandAction|0|8-preItemCommandAction // write pre-action user code here if (item == number) {//GEN-BEGIN:|8-itemCommandAction|1|21-preAction if (command == add) {//GEN-END:|8-itemCommandAction|1|21-preAction // write pre-action user code here //GEN-LINE:|8-itemCommandAction|2|21-postAction // write post-action user code here }//GEN-BEGIN:|8-itemCommandAction|3|28-preAction } else if (item == send) { if (command == select) {//GEN-END:|8-itemCommandAction|3|28-preAction // write pre-action user code here //GEN-LINE:|8-itemCommandAction|4|28-postAction // write post-action user code here }//GEN-BEGIN:|8-itemCommandAction|5|24-preAction } else if (item == textUrdu) { if (command == urdu) {//GEN-END:|8-itemCommandAction|5|24-preAction // write pre-action user code here if (isUrdu) isUrdu = false; else { isUrdu = true; TextField tf = (TextField)item; } //GEN-LINE:|8-itemCommandAction|6|24-postAction // write post-action user code here }//GEN-BEGIN:|8-itemCommandAction|7|8-postItemCommandAction }//GEN-END:|8-itemCommandAction|7|8-postItemCommandAction // write post-action user code here }//GEN-BEGIN:|8-itemCommandAction|8| //</editor-fold>//GEN-END:|8-itemCommandAction|8| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: form ">//GEN-BEGIN:|14-getter|0|14-preInit /** * Returns an initialized instance of form component. * * @return the initialized component instance */ public Form getForm() { if (form == null) {//GEN-END:|14-getter|0|14-preInit // write pre-init user code here form = new Form("form", new Item[]{getNumber(), getTextUrdu(), getStringItem(), getSend()});//GEN-BEGIN:|14-getter|1|14-postInit form.addCommand(getExit()); form.addCommand(getSendMesg()); form.setCommandListener(this);//GEN-END:|14-getter|1|14-postInit // write post-init user code here form.setItemStateListener(this); // form.setCommandListener(this); }//GEN-BEGIN:|14-getter|2| return form; } //</editor-fold>//GEN-END:|14-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: number ">//GEN-BEGIN:|19-getter|0|19-preInit /** * Returns an initialized instance of number component. * * @return the initialized component instance */ public TextField getNumber() { if (number == null) {//GEN-END:|19-getter|0|19-preInit // write pre-init user code here number = new TextField("Number ", null, 11, TextField.PHONENUMBER);//GEN-BEGIN:|19-getter|1|19-postInit number.addCommand(getAdd()); number.setItemCommandListener(this); number.setDefaultCommand(getAdd());//GEN-END:|19-getter|1|19-postInit // write post-init user code here }//GEN-BEGIN:|19-getter|2| return number; } //</editor-fold>//GEN-END:|19-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: textUrdu ">//GEN-BEGIN:|22-getter|0|22-preInit /** * Returns an initialized instance of textUrdu component. 
* * @return the initialized component instance */ public TextField getTextUrdu() { if (textUrdu == null) {//GEN-END:|22-getter|0|22-preInit // write pre-init user code here textUrdu = new TextField("Message", null, 2000, TextField.ANY);//GEN-BEGIN:|22-getter|1|22-postInit textUrdu.addCommand(getUrdu()); textUrdu.setItemCommandListener(this);//GEN-END:|22-getter|1|22-postInit // write post-init user code here }//GEN-BEGIN:|22-getter|2| return textUrdu; } //</editor-fold>//GEN-END:|22-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: exit ">//GEN-BEGIN:|15-getter|0|15-preInit /** * Returns an initialized instance of exit component. * * @return the initialized component instance */ public Command getExit() { if (exit == null) {//GEN-END:|15-getter|0|15-preInit // write pre-init user code here exit = new Command("Exit", Command.EXIT, 0);//GEN-LINE:|15-getter|1|15-postInit // write post-init user code here }//GEN-BEGIN:|15-getter|2| return exit; } //</editor-fold>//GEN-END:|15-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: sendMesg ">//GEN-BEGIN:|17-getter|0|17-preInit /** * Returns an initialized instance of sendMesg component. * * @return the initialized component instance */ public Command getSendMesg() { if (sendMesg == null) {//GEN-END:|17-getter|0|17-preInit // write pre-init user code here sendMesg = new Command("send", Command.OK, 0);//GEN-LINE:|17-getter|1|17-postInit // write post-init user code here }//GEN-BEGIN:|17-getter|2| return sendMesg; } //</editor-fold>//GEN-END:|17-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: add ">//GEN-BEGIN:|20-getter|0|20-preInit /** * Returns an initialized instance of add component. * * @return the initialized component instance */ public Command getAdd() { if (add == null) {//GEN-END:|20-getter|0|20-preInit // write pre-init user code here add = new Command("add", Command.OK, 0);//GEN-LINE:|20-getter|1|20-postInit // write post-init user code here }//GEN-BEGIN:|20-getter|2| return add; } //</editor-fold>//GEN-END:|20-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: urdu ">//GEN-BEGIN:|23-getter|0|23-preInit /** * Returns an initialized instance of urdu component. * * @return the initialized component instance */ public Command getUrdu() { if (urdu == null) {//GEN-END:|23-getter|0|23-preInit // write pre-init user code here urdu = new Command("urdu", Command.OK, 0);//GEN-LINE:|23-getter|1|23-postInit // write post-init user code here }//GEN-BEGIN:|23-getter|2| return urdu; } //</editor-fold>//GEN-END:|23-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: stringItem ">//GEN-BEGIN:|25-getter|0|25-preInit /** * Returns an initialized instance of stringItem component. * * @return the initialized component instance */ public StringItem getStringItem() { if (stringItem == null) {//GEN-END:|25-getter|0|25-preInit // write pre-init user code here stringItem = new StringItem("string", null);//GEN-LINE:|25-getter|1|25-postInit // write post-init user code here }//GEN-BEGIN:|25-getter|2| return stringItem; } //</editor-fold>//GEN-END:|25-getter|2| //</editor-fold> //<editor-fold defaultstate="collapsed" desc=" Generated Getter: send ">//GEN-BEGIN:|26-getter|0|26-preInit /** * Returns an initialized instance of send component. 
* * @return the initialized component instance */ public StringItem getSend() { if (send == null) {//GEN-END:|26-getter|0|26-preInit // write pre-init user code here send = new StringItem("", "send", Item.BUTTON);//GEN-BEGIN:|26-getter|1|26-postInit send.addCommand(getSelect()); send.setItemCommandListener(this); send.setDefaultCommand(getSelect());//GEN-END:|26-getter|1|26-postInit // write post-init user code here }//GEN-BEGIN:|26-getter|2| return send; } //</editor-fold>//GEN-END:|26-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: select ">//GEN-BEGIN:|27-getter|0|27-preInit /** * Returns an initialized instance of select component. * * @return the initialized component instance */ public Command getSelect() { if (select == null) {//GEN-END:|27-getter|0|27-preInit // write pre-init user code here select = new Command("select", Command.OK, 0);//GEN-LINE:|27-getter|1|27-postInit // write post-init user code here }//GEN-BEGIN:|27-getter|2| return select; } //</editor-fold>//GEN-END:|27-getter|2| //<editor-fold defaultstate="collapsed" desc=" Generated Getter: task ">//GEN-BEGIN:|40-getter|0|40-preInit /** * Returns an initialized instance of task component. * * @return the initialized component instance */ public SimpleCancellableTask getTask() { if (task == null) {//GEN-END:|40-getter|0|40-preInit // write pre-init user code here task = new SimpleCancellableTask();//GEN-BEGIN:|40-getter|1|40-execute task.setExecutable(new org.netbeans.microedition.util.Executable() { public void execute() throws Exception {//GEN-END:|40-getter|1|40-execute // write task-execution user code here }//GEN-BEGIN:|40-getter|2|40-postInit });//GEN-END:|40-getter|2|40-postInit // write post-init user code here }//GEN-BEGIN:|40-getter|3| return task; } //</editor-fold>//GEN-END:|40-getter|3| /** * Returns a display instance. * @return the display instance. */ public Display getDisplay () { return Display.getDisplay(this); } /** * Exits MIDlet. */ public void exitMIDlet() { switchDisplayable (null, null); destroyApp(true); notifyDestroyed(); } /** * Called when MIDlet is started. * Checks whether the MIDlet have been already started and initialize/starts or resumes the MIDlet. */ public void startApp() { startMIDlet(); display.setCurrent(form); } /** * Called when MIDlet is paused. */ public void pauseApp() { midletPaused = true; } /** * Called to signal the MIDlet to terminate. * @param unconditional if true, then the MIDlet has to be unconditionally terminated and all resources has to be released. */ public void destroyApp(boolean unconditional) { } public void itemStateChanged(Item item) { if (item == textUrdu) { if (isUrdu) { stringItem.setText("urdu"); TextField tf = (TextField)item; String s = tf.getString(); char ch = s.charAt(s.length() - 1); s = s.substring(0, s.length() - 1); ch = Urdu.ToUrdu(ch); s = s + String.valueOf(ch); tf.setString(""); tf.setString(s); }//end if throw new UnsupportedOperationException("Not supported yet."); } } }
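    On the actual question - keeping a copy of sent messages - MIDP has no standard API for writing into the phone's native Sent Items folder, so the usual approach is to maintain your own sent-items store inside the MIDlet using RMS (javax.microedition.rms). The following is only a sketch of that idea: SentItemsStore and its methods are hypothetical names, not part of the code above, and the "number;text" record format is an assumption.

        import javax.microedition.rms.RecordStore;
        import javax.microedition.rms.RecordEnumeration;
        import javax.microedition.rms.RecordStoreException;
        import java.io.UnsupportedEncodingException;

        class SentItemsStore {

            private static final String STORE_NAME = "SentItems"; // hypothetical record store name

            /** Append one sent message (recipient number plus text) to the RMS store. */
            static void saveToSentItems(String number, String text) {
                try {
                    RecordStore rs = RecordStore.openRecordStore(STORE_NAME, true); // create on first use
                    byte[] data = (number + ";" + text).getBytes("UTF-8");          // UTF-8 preserves the Urdu characters
                    rs.addRecord(data, 0, data.length);
                    rs.closeRecordStore();
                } catch (RecordStoreException e) {
                    // store full or unavailable - report to the user via an Alert if needed
                } catch (UnsupportedEncodingException e) {
                    // UTF-8 is available on MIDP devices; the API just forces this catch
                }
            }

            /** Return every saved sent item as "number;text" strings. */
            static String[] loadSentItems() throws RecordStoreException, UnsupportedEncodingException {
                RecordStore rs = RecordStore.openRecordStore(STORE_NAME, true);
                RecordEnumeration en = rs.enumerateRecords(null, null, false);
                String[] items = new String[en.numRecords()];
                for (int i = 0; en.hasNextElement(); i++) {
                    items[i] = new String(en.nextRecord(), "UTF-8");
                }
                rs.closeRecordStore();
                return items;
            }
        }

    Calling the hypothetical SentItemsStore.saveToSentItems(mno, msg) right after clientConn.send(textmessage) succeeds would persist each sent message, and a separate "Sent Items" Form or List in the MIDlet could then display whatever loadSentItems() returns.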

    Read the article

  • I am having a problem with a ClassCastException. Can anyone please help me out?

    - by Piyush
    This is my code: package com.example.userpage; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.Button; import android.widget.EditText; import android.widget.TextView; public class UserPage extends Activity { String tv,tv1; EditText name,pass; TextView x,y; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); Button button = (Button) findViewById(R.id.widget44); button.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { name.setText(" "); pass.setText(" "); } }); x = (TextView) findViewById(R.id.widget46); y = (TextView) findViewById(R.id.widget47); name = (EditText)findViewById(R.id.widget41); pass = (EditText)findViewById(R.id.widget42); Button button1 = (Button) findViewById(R.id.widget45); button1.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { tv= name.getText().toString(); tv1 = pass.getText().toString(); x.setText(tv); y.setText(tv1); } }); } } And this is my log cat: 02-16 12:24:30.488: DEBUG/AndroidRuntime(973): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:24:30.488: DEBUG/AndroidRuntime(973): CheckJNI is ON 02-16 12:24:31.208: DEBUG/AndroidRuntime(973): --- registering native functions --- 02-16 12:24:33.498: DEBUG/AndroidRuntime(973): Shutting down VM 02-16 12:24:33.537: DEBUG/dalvikvm(973): Debugger has detached; object registry had 1 entries 02-16 12:24:33.537: INFO/AndroidRuntime(973): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:24:34.917: DEBUG/AndroidRuntime(981): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:24:34.927: DEBUG/AndroidRuntime(981): CheckJNI is ON 02-16 12:24:35.617: DEBUG/AndroidRuntime(981): --- registering native functions --- 02-16 12:24:38.029: INFO/ActivityManager(67): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.example.userpage/.UserPage } 02-16 12:24:38.129: DEBUG/AndroidRuntime(981): Shutting down VM 02-16 12:24:38.160: DEBUG/dalvikvm(981): Debugger has detached; object registry had 1 entries 02-16 12:24:38.168: INFO/AndroidRuntime(981): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:25:12.028: DEBUG/AndroidRuntime(990): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:25:12.038: DEBUG/AndroidRuntime(990): CheckJNI is ON 02-16 12:25:12.708: DEBUG/AndroidRuntime(990): --- registering native functions --- 02-16 12:25:15.178: DEBUG/dalvikvm(176): GC_EXPLICIT freed 114 objects / 5880 bytes in 115ms 02-16 12:25:15.318: DEBUG/PackageParser(67): Scanning package: /data/app/vmdl25170.tmp 02-16 12:25:15.588: INFO/PackageManager(67): Removing non-system package:com.example.userpage 02-16 12:25:15.597: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:25:15.648: INFO/Process(67): Sending signal. 
PID: 916 SIG: 9 02-16 12:25:15.877: INFO/UsageStats(67): Unexpected resume of com.android.launcher while already resumed in com.example.userpage 02-16 12:25:17.028: WARN/InputManagerService(67): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@4400ecf8 02-16 12:25:17.928: DEBUG/PackageManager(67): Scanning package com.example.userpage 02-16 12:25:17.949: INFO/PackageManager(67): Package com.example.userpage codePath changed from /data/app/com.example.userpage-1.apk to /data/app/com.example.userpage-2.apk; Retaining data and using new 02-16 12:25:17.987: INFO/PackageManager(67): /data/app/com.example.userpage-2.apk changed; unpacking 02-16 12:25:18.037: DEBUG/installd(35): DexInv: --- BEGIN '/data/app/com.example.userpage-2.apk' --- 02-16 12:25:18.737: DEBUG/dalvikvm(997): DexOpt: load 81ms, verify 112ms, opt 6ms 02-16 12:25:18.768: DEBUG/installd(35): DexInv: --- END '/data/app/com.example.userpage-2.apk' (success) --- 02-16 12:25:18.799: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:25:18.808: WARN/PackageManager(67): Code path for pkg : com.example.userpage changing from /data/app/com.example.userpage-1.apk to /data/app/com.example.userpage-2.apk 02-16 12:25:18.839: WARN/PackageManager(67): Resource path for pkg : com.example.userpage changing from /data/app/com.example.userpage-1.apk to /data/app/com.example.userpage-2.apk 02-16 12:25:18.868: DEBUG/PackageManager(67): Activities: com.example.userpage.UserPage 02-16 12:25:19.297: INFO/installd(35): move /data/dalvik-cache/data@[email protected]@classes.dex -> /data/dalvik-cache/data@[email protected]@classes.dex 02-16 12:25:19.297: DEBUG/PackageManager(67): New package installed in /data/app/com.example.userpage-2.apk 02-16 12:25:19.598: DEBUG/dalvikvm(67): GC_FOR_MALLOC freed 7979 objects / 516856 bytes in 246ms 02-16 12:25:20.498: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:25:20.708: DEBUG/dalvikvm(129): GC_EXPLICIT freed 124 objects / 5672 bytes in 157ms 02-16 12:25:21.838: DEBUG/dalvikvm(67): GC_EXPLICIT freed 4208 objects / 236264 bytes in 419ms 02-16 12:25:21.918: WARN/RecognitionManagerService(67): no available voice recognition services found 02-16 12:25:22.127: INFO/installd(35): unlink /data/dalvik-cache/data@[email protected]@classes.dex 02-16 12:25:22.478: DEBUG/AndroidRuntime(990): Shutting down VM 02-16 12:25:22.488: DEBUG/dalvikvm(990): Debugger has detached; object registry had 1 entries 02-16 12:25:22.588: INFO/AndroidRuntime(990): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:25:24.137: DEBUG/AndroidRuntime(1003): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:25:24.147: DEBUG/AndroidRuntime(1003): CheckJNI is ON 02-16 12:25:24.817: DEBUG/AndroidRuntime(1003): --- registering native functions --- 02-16 12:25:27.450: INFO/ActivityManager(67): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.example.userpage/.UserPage } 02-16 12:25:27.628: DEBUG/AndroidRuntime(1003): Shutting down VM 02-16 12:25:27.780: INFO/AndroidRuntime(1003): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:25:28.018: DEBUG/dalvikvm(1003): Debugger has detached; object registry had 1 entries 02-16 12:25:28.148: INFO/ActivityManager(67): Start proc com.example.userpage for activity com.example.userpage/.UserPage: pid=1010 uid=10036 gids={} 02-16 12:25:30.308: DEBUG/AndroidRuntime(1010): Shutting down VM 
02-16 12:25:30.308: WARN/dalvikvm(1010): threadid=1: thread exiting with uncaught exception (group=0x4001d800) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): FATAL EXCEPTION: main 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.userpage/com.example.userpage.UserPage}: java.lang.ClassCastException: android.widget.TextView 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread.access$2300(ActivityThread.java:125) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2033) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.os.Handler.dispatchMessage(Handler.java:99) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.os.Looper.loop(Looper.java:123) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread.main(ActivityThread.java:4627) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at java.lang.reflect.Method.invokeNative(Native Method) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at java.lang.reflect.Method.invoke(Method.java:521) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at dalvik.system.NativeStart.main(Native Method) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): Caused by: java.lang.ClassCastException: android.widget.TextView 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at com.example.userpage.UserPage.onCreate(UserPage.java:35) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2627) 02-16 12:25:30.388: ERROR/AndroidRuntime(1010): ... 11 more 02-16 12:25:30.438: WARN/ActivityManager(67): Force finishing activity com.example.userpage/.UserPage 02-16 12:25:31.088: WARN/ActivityManager(67): Activity pause timeout for HistoryRecord{43f164f8 com.example.userpage/.UserPage} 02-16 12:25:32.588: DEBUG/dalvikvm(292): GC_EXPLICIT freed 46 objects / 2240 bytes in 6282ms 02-16 12:25:35.267: INFO/Process(1010): Sending signal. PID: 1010 SIG: 9 02-16 12:25:35.468: WARN/InputManagerService(67): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@43e60a90 02-16 12:25:35.900: INFO/ActivityManager(67): Process com.example.userpage (pid 1010) has died. 
02-16 12:25:38.278: DEBUG/dalvikvm(176): GC_EXPLICIT freed 172 objects / 12280 bytes in 127ms 02-16 12:25:43.011: WARN/ActivityManager(67): Activity destroy timeout for HistoryRecord{43f164f8 com.example.userpage/.UserPage} 02-16 12:28:12.698: DEBUG/AndroidRuntime(1019): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:28:12.711: DEBUG/AndroidRuntime(1019): CheckJNI is ON 02-16 12:28:13.367: DEBUG/AndroidRuntime(1019): --- registering native functions --- 02-16 12:28:15.998: DEBUG/dalvikvm(176): GC_EXPLICIT freed 114 objects / 5888 bytes in 183ms 02-16 12:28:16.539: DEBUG/PackageParser(67): Scanning package: /data/app/vmdl25171.tmp 02-16 12:28:16.867: INFO/PackageManager(67): Removing non-system package:com.example.userpage 02-16 12:28:16.867: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:28:17.277: DEBUG/PackageManager(67): Scanning package com.example.userpage 02-16 12:28:17.308: INFO/PackageManager(67): Package com.example.userpage codePath changed from /data/app/com.example.userpage-2.apk to /data/app/com.example.userpage-1.apk; Retaining data and using new 02-16 12:28:17.328: INFO/PackageManager(67): /data/app/com.example.userpage-1.apk changed; unpacking 02-16 12:28:17.367: DEBUG/installd(35): DexInv: --- BEGIN '/data/app/com.example.userpage-1.apk' --- 02-16 12:28:18.357: DEBUG/dalvikvm(1026): DexOpt: load 85ms, verify 114ms, opt 6ms 02-16 12:28:18.398: DEBUG/installd(35): DexInv: --- END '/data/app/com.example.userpage-1.apk' (success) --- 02-16 12:28:18.428: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:28:18.438: WARN/PackageManager(67): Code path for pkg : com.example.userpage changing from /data/app/com.example.userpage-2.apk to /data/app/com.example.userpage-1.apk 02-16 12:28:18.477: WARN/PackageManager(67): Resource path for pkg : com.example.userpage changing from /data/app/com.example.userpage-2.apk to /data/app/com.example.userpage-1.apk 02-16 12:28:18.477: DEBUG/PackageManager(67): Activities: com.example.userpage.UserPage 02-16 12:28:18.977: INFO/installd(35): move /data/dalvik-cache/data@[email protected]@classes.dex -> /data/dalvik-cache/data@[email protected]@classes.dex 02-16 12:28:18.988: DEBUG/PackageManager(67): New package installed in /data/app/com.example.userpage-1.apk 02-16 12:28:19.528: DEBUG/dalvikvm(67): GC_FOR_MALLOC freed 6733 objects / 459728 bytes in 211ms 02-16 12:28:20.138: INFO/ActivityManager(67): Force stopping package com.example.userpage uid=10036 02-16 12:28:20.368: DEBUG/dalvikvm(129): GC_EXPLICIT freed 892 objects / 48744 bytes in 175ms 02-16 12:28:21.317: WARN/RecognitionManagerService(67): no available voice recognition services found 02-16 12:28:22.827: DEBUG/dalvikvm(67): GC_EXPLICIT freed 3877 objects / 241128 bytes in 452ms 02-16 12:28:22.979: INFO/installd(35): unlink /data/dalvik-cache/data@[email protected]@classes.dex 02-16 12:28:23.277: DEBUG/AndroidRuntime(1019): Shutting down VM 02-16 12:28:23.307: DEBUG/dalvikvm(1019): Debugger has detached; object registry had 1 entries 02-16 12:28:23.328: INFO/AndroidRuntime(1019): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:28:24.989: DEBUG/AndroidRuntime(1032): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 02-16 12:28:24.989: DEBUG/AndroidRuntime(1032): CheckJNI is ON 02-16 12:28:25.888: DEBUG/AndroidRuntime(1032): --- registering native functions --- 02-16 12:28:28.588: INFO/ActivityManager(67): Starting activity: Intent { act=android.intent.action.MAIN 
cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.example.userpage/.UserPage } 02-16 12:28:28.888: DEBUG/AndroidRuntime(1032): Shutting down VM 02-16 12:28:28.997: DEBUG/dalvikvm(1032): Debugger has detached; object registry had 1 entries 02-16 12:28:29.038: INFO/AndroidRuntime(1032): NOTE: attach of thread 'Binder Thread #3' failed 02-16 12:28:30.417: INFO/ActivityManager(67): Start proc com.example.userpage for activity com.example.userpage/.UserPage: pid=1039 uid=10036 gids={} 02-16 12:28:32.588: DEBUG/AndroidRuntime(1039): Shutting down VM 02-16 12:28:32.598: WARN/dalvikvm(1039): threadid=1: thread exiting with uncaught exception (group=0x4001d800) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): FATAL EXCEPTION: main 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.userpage/com.example.userpage.UserPage}: java.lang.ClassCastException: android.widget.TextView 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2679) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread.access$2300(ActivityThread.java:125) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2033) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.os.Handler.dispatchMessage(Handler.java:99) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.os.Looper.loop(Looper.java:123) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread.main(ActivityThread.java:4627) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at java.lang.reflect.Method.invokeNative(Native Method) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at java.lang.reflect.Method.invoke(Method.java:521) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at dalvik.system.NativeStart.main(Native Method) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): Caused by: java.lang.ClassCastException: android.widget.TextView 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at com.example.userpage.UserPage.onCreate(UserPage.java:34) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2627) 02-16 12:28:32.648: ERROR/AndroidRuntime(1039): ... 11 more 02-16 12:28:32.698: WARN/ActivityManager(67): Force finishing activity com.example.userpage/.UserPage 02-16 12:28:32.967: DEBUG/dalvikvm(292): GC_EXPLICIT freed 46 objects / 2240 bytes in 6840ms 02-16 12:28:33.247: WARN/ActivityManager(67): Activity pause timeout for HistoryRecord{43ee7b70 com.example.userpage/.UserPage} 02-16 12:28:36.947: INFO/Process(1039): Sending signal. PID: 1039 SIG: 9 02-16 12:28:37.017: INFO/ActivityManager(67): Process com.example.userpage (pid 1039) has died. 
02-16 12:28:37.128: WARN/InputManagerService(67): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@43e872f8 02-16 12:28:42.087: DEBUG/dalvikvm(176): GC_EXPLICIT freed 156 objects / 11488 bytes in 145ms 02-16 12:28:45.391: WARN/ActivityManager(67): Activity destroy timeout for HistoryRecord{43ee7b70 com.example.userpage/.UserPage} 02-16 12:28:47.177: DEBUG/SntpClient(67): request time failed: java.net.SocketException: Address family not supported by protocol
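    The two stack traces above both point at the findViewById casts in onCreate (UserPage.java lines 34-35): java.lang.ClassCastException: android.widget.TextView means the view Android found for that id is declared as a TextView in main.xml, while the code casts it to Button or EditText. The fix is to make the widget types in the layout match the casts - below is a hedged fragment of what res/layout/main.xml would need to contain for the code above (the ids come from the question, everything else here is an assumption):

        <EditText android:id="@+id/widget41" ... />   <!-- name, cast to EditText -->
        <EditText android:id="@+id/widget42" ... />   <!-- pass, cast to EditText -->
        <Button   android:id="@+id/widget44" ... />   <!-- clear button, cast to Button -->
        <Button   android:id="@+id/widget45" ... />   <!-- submit button, cast to Button -->
        <TextView android:id="@+id/widget46" ... />   <!-- x, cast to TextView -->
        <TextView android:id="@+id/widget47" ... />   <!-- y, cast to TextView -->

    If the types already match, the generated R class is probably stale after a layout change - cleaning the project (Project > Clean in Eclipse, or deleting the gen/ folder and rebuilding) regenerates the ids so they line up with the layout again.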

    Read the article

  • Using Unity – Part 3

    - by nmarun
    The previous blog was about registering and invoking different types dynamically. In this one I’d like to show how Unity manages/disposes the instances – say hello to Lifetime Managers. When a type gets registered, either through the config file or when RegisterType method is explicitly called, the default behavior is that the container uses a transient lifetime manager. In other words, the unity container creates a new instance of the type when Resolve or ResolveAll method is called. Whereas, when you register an existing object using the RegisterInstance method, the container uses a container controlled lifetime manager - a singleton pattern. It does this by storing the reference of the object and that means so as long as the container is ‘alive’, your registered instance does not go out of scope and will be disposed only after the container either goes out of scope or when the code explicitly disposes the container. Let’s see how we can use these and test if something is a singleton or a transient instance. Continuing on the same solution used in the previous blogs, I have made the following changes: First is to add typeAlias elements for TransientLifetimeManager type: 1: <typeAlias alias="transient" type="Microsoft.Practices.Unity.TransientLifetimeManager, Microsoft.Practices.Unity"/> You then need to tell what type(s) you want to be transient by nature: 1: <type type="IProduct" mapTo="Product2"> 2: <lifetime type="transient" /> 3: </type> 4: <!--<type type="IProduct" mapTo="Product2" />--> The lifetime element’s type attribute matches with the alias attribute of the typeAlias element. Now since ‘transient’ is the default behavior, you can have a concise version of the same as line 4 shows. Also note that I’ve changed the mapTo attribute from ‘Product’ to ‘Product2’. I’ve done this to help understand the transient nature of the instance of the type Product2. By making this change, you are basically saying when a type of IProduct needs to be resolved, Unity should create an instance of Product2 by default. 1: public string WriteProductDetails() 2: { 3: return string.Format("Name: {0}<br/>Category: {1}<br/>Mfg Date: {2}<br/>Hash Code: {3}", 4: Name, Category, MfgDate.ToString("MM/dd/yyyy hh:mm:ss tt"), GetHashCode()); 5: } Again, the above change is purely for the purpose of making the example more clear to understand. The display will show the full date and also displays the hash code of the current instance. The GetHashCode() method returns an integer when an instance gets created – a new integer for every instance. When you run the application, you’ll see something like the below: Now when you click on the ‘Get Product2 Instance’ button, you’ll see that the Mfg Date (which is set in the constructor) and the Hash Code are different from the one created on page load. This proves to us that a new instance is created every single time. To make this a singleton, we need to add a type alias for the ContainerControlledLifetimeManager class and then change the type attribute of the lifetime element to singleton. 1: <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 
3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="singleton" /> 5: </type> Running the application now gets me the following output: Click on the button below and you’ll see that the Mfg Date and the Hash code remain unchanged => the unity container is storing the reference the first time it is created and then returns the same instance every time the type needs to be resolved. Digging more deeper into this, Unity provides more than the two lifetime managers. ExternallyControlledLifetimeManager – maintains a weak reference to type mappings and instances. Unity returns the same instance as long as the some code is holding a strong reference to this instance. For this, you need: 1: <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="external" /> 5: </type> PerThreadLifetimeManager – Unity returns a unique instance of an object for each thread – so this effectively is a singleton behavior on a  per-thread basis. 1: <typeAlias alias="perThread" type="Microsoft.Practices.Unity.PerThreadLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="perThread" /> 5: </type> One thing to note about this is that if you use RegisterInstance method to register an existing object, this instance will be returned for every thread, making this a purely singleton behavior. Needless to say, this type of lifetime management is useful in multi-threaded applications (duh!!). I hope this blog provided some basics on lifetime management of objects resolved in Unity and in the next blog, I’ll talk about Injection. Please see the code used here.

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
    Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server and has the full power of the SQL Full Text Indexing search platform, there are times when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search on your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue.

    This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find; this typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned. As the user progressively restricts the search criteria by selecting additional criteria to search again, these facets need to continually refresh. The whole experience allows users to explore alternate brands, price ranges, or find products they hadn't initially thought of or weren't looking for, in a bid to enhance cross-sell in the retail environment. The second advantage of this type of search, from a business perspective, is the ability to harvest the search results to start profiling your user. Even though anonymous users may routinely visit your site and will not necessarily register or complete a transaction to build up marketing profiling data, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile from clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This allows you to apply shopping-cart analysis against your search data and provide great cross-sale or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in.

    This type of search is extremely beneficial in e-commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat, and I have used the SolrNet C# library to interface to the Java platform. I am not going to cover Solr indexing configuration here, but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, apply an XSLT transform to the file to make it conform to Solr, and use a simple HTTP POST to send it to the search engine for indexing.
Essentially a sequance component is a step in a serial workflow used to call a data repository (which in most cases is usually the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library as Sequence Components will need to be strongly named to be deployed. Once you are inside of your new project, add a new class file and add a reference to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and the Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base object Microsoft.Commerce.Providers.Components.OperationSequanceComponent and overide the ExecuteQueryMethod. Your screen will then look something similar ot this: As all we are doing on this component is conducting a search we are only interested in the ExecuteQuery method. This method accepts three arguments, queryOperation, operationCache, and response. The queryOperation will be the object in which we receive our search parameters, the cache allows access to the Commerce Server cache allowing us to store regulary accessed information, and the response object is the object which we will return the result of our search upon. Inside this method is simply where we are going to inject our logic for our third party search platform. As I am not going to explain the inner-workings of actually making a SOLR call, I'll simply provide the sample code here. I would highly recommend however looking at the SolrNet wiki as they have some great explinations of how the API works. What you will find however is that there are some further extensions required when attempting to integrate a custom search provider. Firstly you out of the box the CommerceQueryOperation you will receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text Search with properties such as a Where clause. To make the operation you receive more relevant you will need to create another class, this time derived from Microsoft.Commerce.Contract.Messages.CommerceSearchCriteria and within this you need to detail the properties you will require to allow you to submit as parameters to the SOLR search API. My exmaple looks like this: [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")] public class CommerceCatalogSolrSearch : CommerceSearchCriteria { private Dictionary<string, string> _facetQueries;   public CommerceCatalogSolrSearch() { _facetQueries = new Dictionary<String, String>();   }     public Dictionary<String, String> FacetQueries { get { return _facetQueries; } set { _facetQueries = value; } }   public String SearchPhrase{ get; set; } public int PageIndex { get; set; } public int PageSize { get; set; } public IEnumerable<String> Facets { get; set; }   public string Sort { get; set; }   public new int FirstItemIndex { get { return (PageIndex-1)*PageSize; } }   public int LastItemIndex { get { return FirstItemIndex + PageSize; } } }  To allow you to construct a CommerceQueryOperation call within the API you will also need to construct another class to derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder and is simply used to construct an instance of the CommerceQueryOperation you have just created and expose the properties you want set. 
My Message builder looks like this: public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder { private CommerceCatalogSolrSearch _solrSearch;   public CommerceCatalogSolrSearchBuilder() { _solrSearch = new CommerceCatalogSolrSearch(); }   public String SearchPhrase { get { return _solrSearch.SearchPhrase; } set { _solrSearch.SearchPhrase = value; } }   public int PageIndex { get { return _solrSearch.PageIndex; } set { _solrSearch.PageIndex = value; } }   public int PageSize { get { return _solrSearch.PageSize; } set { _solrSearch.PageSize = value; } }   public Dictionary<String,String> FacetQueries { get { return _solrSearch.FacetQueries; } set { _solrSearch.FacetQueries = value; } }   public String[] Facets { get { return _solrSearch.Facets.ToArray(); } set { _solrSearch.Facets = value; } } public override CommerceSearchCriteria ToSearchCriteria() { return _solrSearch; } }  Once you have these two classes in place you can now safely cast the CommerceOperation you receive as an argument of the overidden ExecuteQuery method in the SequenceComponent to the CommerceCatalogSolrSearch operation you have just created, e.g. public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation) { var searchCriteria = operation as CommerceQueryOperation; if (searchCriteria == null) throw new Exception("No search criteria present");   var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria; if (local == null) throw new Exception("Unexpected Search Criteria in Operation");   return local; }  Now you have all of your search parameters present, you can go off an call the external search platform API. You will of-course get proprietry objects returned, so the next step in the process is to convert the results being returned back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translatators. Translators are another separate class, this time derived inheriting the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator . As you can imaginge this interface is specific for the conversion of the object TO a CommerceEntity, you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method for the interace you will get a single translate method which has a source onkect, destination CommerceEntity, and a collection of properties as arguments. For simplicity sake in this example I have hard-coded the mappings, however best practice would dictate you map the objects using your metadatadefintions.xml file . 
Once complete your translator would look something like the following: public class SolrEntityTranslator : IToCommerceEntityTranslator { #region IToCommerceEntityTranslator Members   public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn) { if (source.GetType().Equals(typeof (SearchProduct))) { var searchResult = (SearchProduct) source;   destinationCommerceEntity.Id = searchResult.ProductId; destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title); destinationCommerceEntity.ModelName = "Product";   } }  Once you have a translator in place you can then safely map the results of your search platform into Commerce Entities and attach them on to the CommerceResponse object in a fashion similar to this: foreach (SearchProduct result in matchingProducts) { var destinationEntity = new CommerceEntity(_returnModelName);   Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties); response.CommerceEntities.Add(destinationEntity); }  In SOLR I actually have two objects being returned – a product, and a collection of facets so I have an additional translator for facet (which maps to a custom facet CommerceEntity) and my facet response from SOLR is passed into the Translator helper class seperatley. When all of this is pieced together you have sucessfully completed the extensiblity point coding. You would have created a new OperationSequanceComponent, a custom SearchCritiera object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and can start calling them in your code. Make sure you sign you assembly, compile it and identiy its signature. Next you need to put this a reference of your new assembly into the Channel.Config configuration file replacing that of the existing SQL Full Text component: You will also need to add your translators to the Translators node of your Channel.Config too: Lastly add any custom CommerceEntities you have developed to your MetaDataDefintions.xml file. Your configuration is now complete, and you should now be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third party search platform and return back CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied then simply add further sequence components into the OperationSequence (obviously keeping the search response first) to the node of your Channel.Config file. Now to call your code you simply request it as per any other CommerceQuery operation, but taking into account you may be receiving multiple types of CommerceEntity returned: public KeyValuePair<FacetCollection ,List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount) { var products = new List<Product>(); var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();   query.SearchCriteria.PageIndex = recordIndex; query.SearchCriteria.PageSize = recordsPerPage; query.SearchCriteria.SearchPhrase = searchPhrase; query.SearchCriteria.FacetQueries = facetQueries;     totalItemCount = 0; CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest()); var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;   // No results. 
Return the empty list if (queryResponse != null && queryResponse.CommerceEntities.Count == 0) return new KeyValuePair<FacetCollection, List<Product>>();   totalItemCount = (int)queryResponse.TotalItemCount;   // Prepare a multi-operation to retrieve the product variants var multiOperation = new CommerceMultiOperation();     //Add products to results foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product")) { var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition); productQuery.SearchCriteria.Model.Id = product.Id; productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;   var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);   productQuery.RelatedOperations.Add(variantQuery);   multiOperation.Add(productQuery); }   CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest()); foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses) { if (queryOpResponse.CommerceEntities.Count() > 0) products.Add(queryOpResponse.CommerceEntities[0]); }   //Get facet collection FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();     return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products); }    ..And that is it – simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third party search platform, whilst still maintaing a unifed API in the remainder of your code. This logic stands for any extensibility within CommerceServer, which requires excution in a serial fashioon such as call to LOB systems or web service to validate or enrich data. Feel free to use this example on other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!

    Read the article

  • Insert Record by Drag & Drop from ADF Tree to ADF Tree Table

    - by arul.wilson(at)oracle.com
    If you want to create record based on the values Dragged from ADF Tree and Dropped on a ADF Tree Table, then here you go.UseCase DescriptionUser Drags a tree node from ADF Tree and Drops it on a ADF Tree Table node. A new row gets added in the Tree Table based on the source tree node, subsequently a record gets added to the database table on which Tree table in based on.Following description helps to achieve this using ADF BC.Run the DragDropSchema.sql to create required tables.Create Business Components from tables (PRODUCTS, COMPONENTS, SUB_COMPONENTS, USERS, USER_COMPONENTS) created above.Add custom method to App Module Impl, this method will be used to insert record from view layer.   public String createUserComponents(String p_bugdbId, String p_productId, String p_componentId, String p_subComponentId){    Row newUserComponentsRow = this.getUserComponentsView1().createRow();    try {      newUserComponentsRow.setAttribute("Bugdbid", p_bugdbId);      newUserComponentsRow.setAttribute("ProductId", new oracle.jbo.domain.Number(p_productId));      newUserComponentsRow.setAttribute("Component1", p_componentId);      newUserComponentsRow.setAttribute("SubComponent", p_subComponentId);    } catch (Exception e) {        e.printStackTrace();        return "Failure";    }        return "Success";  }Expose this method to client interface.To display the root node we need a custom VO which can be achieved using below query. SELECT Users.ACTIVE, Users.BUGDB_ID, Users.EMAIL, Users.FIRSTNAME, Users.GLOBAL_ID, Users.LASTNAME, Users.MANAGER_ID, Users.MANAGER_PRIVILEGEFROM USERS UsersWHERE Users.MANAGER_ID is NULLCreate VL between UsersView and UsersRootNodeView VOs.Drop ProductsView from DC as ADF Tree to jspx page.Add Tree Level Rule based on ComponentsView and SubComponentsView.Drop UsersRootNodeView as ADF Tree TableAdd Tree Level Rules based on UserComponentsView and UsersView.Add DragSource to ADF Tree and CollectionDropTarget to ADF Tree Table respectively.Bind CollectionDropTarget's DropTarget to backing bean and implement method of signature DnDAction (DropEvent), this method gets invoked when Tree Table encounters a drop action, here details required for creating new record are captured from the drag source and passed to 'createUserComponents' method. 
public DnDAction onTreeDrop(DropEvent dropEvent) {      String newBugdbId = "";      String msgtxt="";            try {          // Getting the target node bugdb id          Object serverRowKey = dropEvent.getDropSite();          if (serverRowKey != null) {                  //Code for Tree Table as target              String dropcomponent = dropEvent.getDropComponent().toString();              dropcomponent = (String)dropcomponent.subSequence(0, dropcomponent.indexOf("["));              if (dropcomponent.equals("RichTreeTable")){                RichTreeTable richTreeTable = (RichTreeTable)dropEvent.getDropComponent();                richTreeTable.setRowKey(serverRowKey);                int rowIndexTreeTable = richTreeTable.getRowIndex();                //Drop Target Logic                if (((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue()==null) {                  //Get Parent                  newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getParent().getAttributeValue();                } else {                  if (isNum(((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue().toString())) {                    //Get Parent's parent                              newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getParent().getParent().getAttributeValue();                  } else{                      //Dropped on USER                                          newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue();                  }                  }              }           }                     DataFlavor<RowKeySet> df = DataFlavor.getDataFlavor(RowKeySet.class);          RowKeySet droppedValue = dropEvent.getTransferable().getData(df);            Object[] keys = droppedValue.toArray();          Key componentKey = null;          Key subComponentKey = null;           // binding for createUserComponents method defined in AppModuleImpl class  to insert record in database.                      operationBinding = bindings.getOperationBinding("createUserComponents");            // get the Product, Component, Subcomponent details and insert to UserComponents table.          // loop through the keys if more than one comp/subcomponent is select.                   for (int i = 0; i < keys.length; i++) {                  System.out.println("in for :"+i);              List list = (List)keys[i];                  System.out.println("list "+i+" : "+list);              System.out.println("list size "+list.size());              if (list.size() == 1) {                                // we cannot drag and drop  the highest node !                                msgtxt="You cannot drop Products, please drop Component or SubComponent from the Tree.";                  System.out.println(msgtxt);                                this.showInfoMessage(msgtxt);              } else {                  if (list.size() == 2) {                    // were doing the first branch, in this case all components.                    
componentKey = (Key)list.get(1);                    Object[] droppedProdCompValues = componentKey.getAttributeValues();                    operationBinding.getParamsMap().put("p_bugdbId",newBugdbId);                    operationBinding.getParamsMap().put("p_productId",droppedProdCompValues[0]);                    operationBinding.getParamsMap().put("p_componentId",droppedProdCompValues[1]);                    operationBinding.getParamsMap().put("p_subComponentId","ALL");                    Object result = operationBinding.execute();              } else {                    subComponentKey = (Key)list.get(2);                    Object[] droppedProdCompSubCompValues = subComponentKey.getAttributeValues();                    operationBinding.getParamsMap().put("p_bugdbId",newBugdbId);                    operationBinding.getParamsMap().put("p_productId",droppedProdCompSubCompValues[0]);                    operationBinding.getParamsMap().put("p_componentId",droppedProdCompSubCompValues[1]);                    operationBinding.getParamsMap().put("p_subComponentId",droppedProdCompSubCompValues[2]);                    Object result = operationBinding.execute();                  }                   }            }                        /* this.getCil1().setDisabled(false);            this.getCil1().setPartialSubmit(true); */                      return DnDAction.MOVE;        } catch (Exception ex) {          System.out.println("drop failed with : " + ex.getMessage());          ex.printStackTrace();                  /* this.getCil1().setDisabled(true); */          return DnDAction.NONE;          }    } Run jspx page and drop a Component or Subcomponent from Products Tree to UserComponents Tree Table.

    Read the article

  • Getting the innermost .NET Exception

    - by Rick Strahl
    Here's a trivial but quite useful function that I frequently need in dynamic execution of code: Finding the innermost exception when an exception occurs, because for many operations (for example Reflection invocations or Web Service calls) the top level errors returned can be rather generic. A good example - common with errors in Reflection making a method invocation - is this generic error: Exception has been thrown by the target of an invocation In the debugger it looks like this: In this case this is an AJAX callback, which dynamically executes a method (ExecuteMethod code) which in turn calls into an Amazon Web Service using the old Amazon WSE101 Web service extensions for .NET. An error occurs in the Web Service call and the innermost exception holds the useful error information which in this case points at an invalid web.config key value related to the System.Net connection APIs. The "Exception has been thrown by the target of an invocation" error is the Reflection APIs generic error message that gets fired when you execute a method dynamically and that method fails internally. The messages basically says: "Your code blew up in my face when I tried to run it!". Which of course is not very useful to tell you what actually happened. If you drill down the InnerExceptions eventually you'll get a more detailed exception that points at the original error and code that caused the exception. In the code above the actually useful exception is two innerExceptions down. In most (but not all) cases when inner exceptions are returned, it's the innermost exception that has the information that is really useful. It's of course a fairly trivial task to do this in code, but I do it so frequently that I use a small helper method for this: /// <summary> /// Returns the innermost Exception for an object /// </summary> /// <param name="ex"></param> /// <returns></returns> public static Exception GetInnerMostException(Exception ex) { Exception currentEx = ex; while (currentEx.InnerException != null) { currentEx = currentEx.InnerException; } return currentEx; } This code just loops through all the inner exceptions (if any) and assigns them to a temporary variable until there are no more inner exceptions. The end result is that you get the innermost exception returned from the original exception. It's easy to use this code then in a try/catch handler like this (from the example above) to retrieve the more important innermost exception: object result = null; string stringResult = null; try { if (parameterList != null) // use the supplied parameter list result = helper.ExecuteMethod(methodToCall,target, parameterList.ToArray(), CallbackMethodParameterType.Json,ref attr); else // grab the info out of QueryString Values or POST buffer during parameter parsing // for optimization result = helper.ExecuteMethod(methodToCall, target, null, CallbackMethodParameterType.Json, ref attr); } catch (Exception ex) { Exception activeException = DebugUtils.GetInnerMostException(ex); WriteErrorResponse(activeException.Message, ( HttpContext.Current.IsDebuggingEnabled ? 
ex.StackTrace : null ) ); return; }

Another function that is useful to me from time to time is one that returns all inner exceptions and the original exception as an array:

/// <summary>
/// Returns an array of the entire exception list in reverse order
/// (innermost to outermost exception)
/// </summary>
/// <param name="ex">The original exception to work off</param>
/// <returns>Array of Exceptions from innermost to outermost</returns>
public static Exception[] GetInnerExceptions(Exception ex)
{
    List<Exception> exceptions = new List<Exception>();
    exceptions.Add(ex);

    Exception currentEx = ex;
    while (currentEx.InnerException != null)
    {
        // Step down to the next inner exception and collect it
        // (advancing currentEx here also prevents an infinite loop)
        currentEx = currentEx.InnerException;
        exceptions.Add(currentEx);
    }

    // Reverse the order so the innermost exception is first
    exceptions.Reverse();

    return exceptions.ToArray();
}

This function loops through all the InnerExceptions, collects them, and then reverses the order of the array so the innermost exception is returned first. This can be useful in certain error scenarios where exceptions stack and you need to display information from more than one of the exceptions in order to create a useful error message. This is rare, but certain database exceptions bury their exception info in multiple inner exceptions and it's easier to iterate over them in an array than to manually walk the exception stack. It's also useful if you need to log errors and want to see all of the error detail from all exceptions. None of this is rocket science, but it's useful to have some helpers that make retrieval of the critical exception info trivial. Resources DebugUtils.cs utility class in the West Wind Web Toolkit. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp  .NET

    Read the article

  • UserAppDataPath in WPF

    - by psheriff
    In Windows Forms applications you were able to get to your user's roaming profile directory very easily using the Application.UserAppDataPath property. This folder allows you to store information for your program in a custom folder specifically for your program. The format of this directory looks like this: C:\Users\YOUR NAME\AppData\Roaming\COMPANY NAME\APPLICATION NAME\APPLICATION VERSION For example, on my Windows 7 64-bit system, this folder would look like this for a Windows Forms Application: C:\Users\PSheriff\AppData\Roaming\PDSA, Inc.\WindowsFormsApplication1\1.0.0.0 For some reason Microsoft did not expose this property from the Application object of WPF applications. I guess they think that we don't need this property in WPF? Well, sometimes we still do need to get at this folder. You have two choices on how to retrieve this property. Add a reference to the System.Windows.Forms.dll to your WPF application and use this property directly. Or, you can write your own method to build the same path. If you add a reference to the System.Windows.Forms.dll you will need to use System.Windows.Forms.Application.UserAppDataPath to access this property. Create a GetUserAppDataPath Method in WPF If you want to build this path you can do so with just a few method calls in WPF using Reflection. The code below shows this fairly simple method to retrieve the same folder as shown above. C#using System.Reflection; public string GetUserAppDataPath(){  string path = string.Empty;  Assembly assm;  Type at;  object[] r;   // Get the .EXE assembly  assm = Assembly.GetEntryAssembly();  // Get a 'Type' of the AssemblyCompanyAttribute  at = typeof(AssemblyCompanyAttribute);  // Get a collection of custom attributes from the .EXE assembly  r = assm.GetCustomAttributes(at, false);  // Get the Company Attribute  AssemblyCompanyAttribute ct =                 ((AssemblyCompanyAttribute)(r[0]));  // Build the User App Data Path  path = Environment.GetFolderPath(              Environment.SpecialFolder.ApplicationData);  path += @"\" + ct.Company;  path += @"\" + assm.GetName().Version.ToString();   return path;} Visual BasicPublic Function GetUserAppDataPath() As String  Dim path As String = String.Empty  Dim assm As Assembly  Dim at As Type  Dim r As Object()   ' Get the .EXE assembly  assm = Assembly.GetEntryAssembly()  ' Get a 'Type' of the AssemblyCompanyAttribute  at = GetType(AssemblyCompanyAttribute)  ' Get a collection of custom attributes from the .EXE assembly  r = assm.GetCustomAttributes(at, False)  ' Get the Company Attribute  Dim ct As AssemblyCompanyAttribute = _                 DirectCast(r(0), AssemblyCompanyAttribute)  ' Build the User App Data Path  path = Environment.GetFolderPath( _                 Environment.SpecialFolder.ApplicationData)  path &= "\" & ct.Company  path &= "\" & assm.GetName().Version.ToString()   Return pathEnd Function Summary Getting the User Application Data Path folder in WPF is fairly simple with just a few method calls using Reflection. Of course, there is absolutely no reason you cannot just add a reference to the System.Windows.Forms.dll to your WPF application and use that Application object. After all, System.Windows.Forms.dll is a part of the .NET Framework and can be used from WPF with no issues at all. NOTE: Visit http://www.pdsa.com/downloads to get more tips and tricks like this one. Good Luck with your Coding,Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS **We frequently offer a FREE gift for readers of my blog. 
Visit http://www.pdsa.com/Event/Blog for your FREE gift!
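One variation worth noting: the helper above appends only the company name and version, while the Windows Forms format described at the top of the article also includes the application name. Here is a minimal sketch that adds that segment and creates the folder on first use; it assumes your AssemblyInfo.cs defines both AssemblyCompanyAttribute and AssemblyProductAttribute (the attribute lookups will fail if either is missing), and the method name is just for illustration.

C#
using System;
using System.IO;
using System.Reflection;

public static string GetUserAppDataPathWithProduct()
{
  // Get the .EXE assembly and its company/product attributes
  Assembly assm = Assembly.GetEntryAssembly();
  var company = (AssemblyCompanyAttribute)assm.GetCustomAttributes(
      typeof(AssemblyCompanyAttribute), false)[0];
  var product = (AssemblyProductAttribute)assm.GetCustomAttributes(
      typeof(AssemblyProductAttribute), false)[0];

  // Build Company\Product\Version under the roaming AppData folder
  string path = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
  path = Path.Combine(path, company.Company);
  path = Path.Combine(path, product.Product);
  path = Path.Combine(path, assm.GetName().Version.ToString());

  // Create the folder on first use so callers can write to it immediately
  Directory.CreateDirectory(path);
  return path;
}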

    Read the article

  • Getting Started with Prism (aka Composite Application Guidance for WPF and Silverlight)

    - by dotneteer
Overview Prism is a framework from the Microsoft Patterns and Practices team that allows you to create WPF and Silverlight applications in a modular way. It is especially valuable for larger projects in which a large number of developers can develop in parallel. Prism achieves its goal by supplying several services: · Dependency Injection (DI) and Inversion of Control (IoC): By using DI, Prism takes the responsibility of instantiating and managing the lifetime of dependency objects away from individual components and gives it to a container. Prism relies on containers to discover, manage and compose large numbers of objects. By varying the configuration, the container can also inject mock objects for unit testing. Out of the box, Prism supports Unity and MEF as containers, although it is possible to use other containers by subclassing the Bootstrapper class. · Modularity and Region: Prism supplies the framework to split an application into an application shell and a set of modules. Each module is a library project that contains both UI and code and is responsible for initializing itself when loaded by the shell. Each window can be further divided into regions. A region is a user control with an associated model. · Model, view and view-model (MVVM) pattern: Prism promotes the use of MVVM. The use of a DI container makes it much easier to inject the model into the view. WPF already has excellent data binding and commanding mechanisms. To be productive with Prism, it is important to understand WPF data binding and commanding well. · Event aggregation: Prism promotes loosely coupled components. Prism discourages components in different modules from communicating with each other directly, since that creates dependencies. Instead, Prism supplies an event-aggregation mechanism that allows components to publish and subscribe to events without knowing about each other. Architecture In the following, I will go into a little more detail on the services provided by Prism. Bootstrapper In a typical WPF application, application start-up is controlled by App.xaml and its code-behind. The main window of the application is typically specified in the App.xaml file. In a Prism application, we start a bootstrapper in the App class and delegate the duty of creating the main window to the bootstrapper. The bootstrapper will start a dependency-injection container so all future object instantiations are managed by the container. Out of the box, Prism provides the UnityBootstrapper and MefBootstrapper abstract classes. Every application needs either to provide a concrete implementation of one of these bootstrappers or, alternatively, to subclass the Bootstrapper class with another DI container. A concrete bootstrapper class must implement the CreateShell method. Its responsibility is to resolve and create the Shell object through the DI container to serve as the main window for the application. The other important method to override is ConfigureModuleCatalog, in which the bootstrapper can register modules for the application. In a more advanced scenario, an application does not have to know all its modules at compile time; modules can be discovered at run time. Readers can refer to one of the Open Modularity QuickStarts for more information. Modules Once modules are registered with or discovered by Prism, they are instantiated by the DI container and their Initialize method is called. The DI container can inject into a module a region registry that implements the IRegionViewRegistry interface. The module, in its Initialize method, can then call the RegisterViewWithRegion method of the registry to register its regions.
Regions Regions, once registered, are managed by the RegionManager. The shell can then load regions either through the RegionManager.RegionName attached property or dynamically through code. When a view is created by the region manager, the DI container can inject the view model and other services into the view. The view then has a reference to the view model through which it can interact with backend services. Service locator Although it is possible to inject services into dependent classes through a DI container, an alternative is to use the ServiceLocator to retrieve a service on demand. Prism supplies a service locator implementation, and it is possible to get an instance of a service by calling: ServiceLocator.Current.GetInstance<IServiceType>() Event aggregator Prism supplies an IEventAggregator interface and implementation that can be injected into any classes that need to communicate with each other in a loosely coupled fashion. The event aggregator uses a publisher/subscriber model. A class can publish an event by calling eventAggregator.GetEvent<EventType>().Publish(parameter). Other classes can subscribe to the event by calling eventAggregator.GetEvent<EventType>().Subscribe(EventHandler, other options). Getting started The easiest way to get started with Prism is to go through the Prism Hands-On Labs and look at the Hello World QuickStart. The Hello World QuickStart shows how the bootstrapper, modules and regions work. Next, I would recommend looking at the Stock Trader Reference Implementation. It is a more in-depth example that resembles how we would want to set up a real application. Several other QuickStarts cover individual Prism services. Some scenarios, such as dynamic module discovery, are more advanced. Apart from the official Prism documentation, you can get an overview by reading Glenn Block's MSDN Magazine article. I have found that the best free training material is from the Boise Code Camp. To be effective with Prism, it is important to understand the key concepts of WPF well first, such as the DependencyProperty system, data binding, resources, themes and ICommand. It is also important to know your DI container of choice well. I will try to explore these subjects in depth in the future. Testimony Recently, I worked on a desktop WPF application using Prism, and I had a wonderful experience with it. Prism is flexible enough even in the presence of third-party controls such as the Telerik WPF controls. We have never encountered any significant obstacle.
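To make the bootstrapper pieces above a little more concrete, here is a minimal sketch of a Unity-based bootstrapper. It assumes the Prism 4-style UnityBootstrapper from Microsoft.Practices.Prism.UnityExtensions; the Shell window and OrdersModule types are placeholders for your own shell and module classes, not types supplied by Prism.

using System.Windows;
using Microsoft.Practices.Prism.Modularity;
using Microsoft.Practices.Prism.UnityExtensions;
using Microsoft.Practices.Unity;

public class Bootstrapper : UnityBootstrapper
{
    // Resolve the shell through the container so its dependencies are injected
    protected override DependencyObject CreateShell()
    {
        return Container.Resolve<Shell>();
    }

    protected override void InitializeShell()
    {
        base.InitializeShell();
        Application.Current.MainWindow = (Window)Shell;
        Application.Current.MainWindow.Show();
    }

    // Register the modules that are known at compile time
    protected override void ConfigureModuleCatalog()
    {
        base.ConfigureModuleCatalog();
        var catalog = (ModuleCatalog)ModuleCatalog;
        catalog.AddModule(typeof(OrdersModule));
    }
}

The App class then simply runs it at startup, for example by calling new Bootstrapper().Run() from OnStartup instead of setting StartupUri in App.xaml.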

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature releated to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS. ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past. With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use, instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system. Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices. Why Migrate? Why is the migration of virtual servers important? Some of the most common reasons are: Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance. Moving a workload to a larger system because the workload has outgrown its original system. If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot. You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system. Concepts For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool: contains the zone's Solaris content, i.e. 
the root file system does not contain any content not related to the zone can only be mounted by one Solaris instance at a time Method Overview Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!") Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.) Install the zone on one system, onto shared storage. Boot the zone. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.) Shutdown the zone. Detach the zone from the original system. Attach the zone to its new "home" system. Boot the zone. The zone can be used normally, and even migrated back, or to a different system. Details The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone. The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk. Install Solaris 11.1 in each guest.Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest, and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, login and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this. 
Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1." Assumptions In the example shown below, I make these assumptions. The guest that will own the zone at the beginning is named sysA. The guest that will own the zone after the first migration is named sysB. On sysA, the shared disk is named /dev/dsk/c7t2d0 On sysB, the shared disk is named /dev/dsk/c7t3d0 (Finally!) The Steps Step 1) Determine the name of the disk that will move back and forth between the systems. root@sysA:~# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0 1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0 Specify disk (enter its number): ^D Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated. root@sysA:~# format -e c7t2d0 selecting c7t2d0 [disk formatted] FORMAT MENU: ... format fdisk No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f SELECT ONE... ... 6 format label ... Specify Label type[1]: 1 Ready to label disk, continue? y format quit root@sysA:~# ls /dev/dsk/c7t2d0 /dev/dsk/c7t2d0 Step 3) Configure zone1 on sysA. root@sysA:~# zonecfg -z zone1 Use 'create' to begin configuring a new zone. zonecfg:zone1 create create: Using system default template 'SYSdefault' zonecfg:zone1 set zonename=zone1 zonecfg:zone1 set zonepath=/zones/zone1 zonecfg:zone1 add rootzpool zonecfg:zone1:rootzpool add storage dev:dsk/c7t2d0 zonecfg:zone1:rootzpool end zonecfg:zone1 exit root@sysA:~# oot@sysA:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: ... rootzpool: storage: dev:dsk/c7t2d0 Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...) root@sysA:~# zoneadm -z zone1 install Created zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install Image: Preparing at /zones/zone1/root. AI Manifest: /tmp/manifest.xml.RXaycg SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: zone1 Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/support/ DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 183/183 33556/33556 222.2/222.2 2.8M/s PHASE ITEMS Installing new actions 46825/46825 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 1696.847 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install Step 5) Boot the Zone. root@sysA:~# zoneadm -z zone1 boot Step 6) Login to zone's console to complete the specification of system information. root@sysA:~# zlogin -C zone1 Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation. Step 7) Shutdown the zone so it can be "moved." root@sysA:~# zoneadm -z zone1 shutdown Step 8) Detach the zone so that the original global zone can't use it. root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 installed /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 484M 1.51G 23% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool Step 9) Review the result and shutdown sysA so that sysB can use the shared disk. root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# init 0 Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.) root@sysB:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysB:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: linkname: net0 ... rootzpool: storage: dev:dsk/c7t3d0 Step 11) Attaching the zone automatically imports the zpool. root@sysB:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach root@sysB:~# zoneadm -z zone1 boot root@sysB:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back. 
root@zone1:~# ls /opt root@zone1:~# touch /opt/fileA root@zone1:~# ls -l /opt/fileA -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA root@zone1:~# exit logout [Connection to zone 'zone1' pts/2 closed] root@sysB:~# zoneadm -z zone1 shutdown root@sysB:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool root@sysB:~# init 0 Step 13) Back on sysA, check the status. Oracle Corporation SunOS 5.11 11.1 September 2012 root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - Step 14) Re-attach the zone back to sysA. root@sysA:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 491M 1.51G 24% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 boot root@sysA:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 root@zone1:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 1.98G 538M 1.46G 26% 1.00x ONLINE - Step 15) Check for the file created on sysB, earlier. root@zone1:~# ls -l /opt total 1 -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA Next Steps Here is a brief list of some of the fun things you can try next. Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones! Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.) Conclusion Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.

    Read the article

  • A Semantic Model For Html: TagBuilder and HtmlTags

    - by Ryan Ohs
    In this post I look into the code smell that is HTML literals and show how we can refactor these pesky strings into a friendlier and more maintainable model.   The Problem When I started writing MVC applications, I quickly realized that I built a lot of my HTML inside HtmlHelpers. As I did this, I ended up moving quite a bit of HTML into string literals inside my helper classes. As I wanted to add more attributes (such as classes) to my tags, I needed to keep adding overloads to my helpers. A good example of this end result is the default html helpers that come with the MVC framework. Too many overloads make me crazy! The problem with all these overloads is that they quickly muck up the API and nobody can remember exactly what order the parameters go in. I've seen many presenters (including members of the ASP.NET MVC team!) stumble before realizing that their view wasn't compiling because they needed one more null parameter in the call to Html.ActionLink(). What if instead of writing Html.ActionLink("Edit", "Edit", null, new { @class = "navigation" }) we could do Html.LinkToAction("Edit").Text("Edit").AddClass("navigation") ? Wouldn't that be much easier to remember and understand?  We can do this if we introduce a semantic model for building our HTML.   What is a Semantic Model? According to Martin Folwer, "a semantic model is an in-memory representation, usually an object model, of the same subject that the domain specific language describes." In our case, the model would be a set of classes that know how to render HTML. By using a semantic model we can free ourselves from dealing with strings and instead output the HTML (typically via ToString()) once we've added all the elements and attributes we desire to the model. There are two primary semantic models available in ASP.NET MVC: MVC 2.0's TagBuilder and FubuMVC's HtmlTags.   TagBuilder TagBuilder is the html builder that is available in ASP.NET MVC 2.0. I'm not a huge fan but it gets the job done -- for simple jobs.  Here's an overview of how to use TagBuilder. See my Tips section below for a few comments on that example. The disadvantage of TagBuilder is that unless you wrap it up with our own classes, you still have to write the actual tag name over and over in your code. eg. new TagBuilder("div") instead of new DivTag(). I also think it's method names are a little too long. Why not have AddClass() instead of AddCssClass() or Text() instead of SetInnerText()? What those methods are doing should be pretty obvious even in the short form. I also don't like that it wants to generate an id attribute from your input instead of letting you set it yourself using external conventions. (See GenerateId() and IdAttributeDotReplacement)). Obviously these come from Microsoft's default approach to MVC but may not be optimal for all programmers.   HtmlTags HtmlTags is in my opinion the much better option for generating html in ASP.NET MVC. It was actually written as a part of FubuMVC but is available as a stand alone library. HtmlTags provides a much cleaner syntax for writing HTML. There are classes for most of the major tags and it's trivial to create additional ones by inheriting from HtmlTag. There are also methods on each tag for the common attributes. For instance, FormTag has an Action() method. The SelectTag class allows you to set the default option or first option independent from adding other options. With TagBuilder there isn't even an abstraction for building selects! The project is open source and always improving. 
I'll hopefully find time to submit some of my own enhancements soon.   Tips 1) It's best not to have insanely overloaded html helpers. Use fluent builders. 2) In html helpers, return the TagBuilder/tag itself (not a string!) so that you can continue to add attributes outside the helper; see my first sample above. 3) Create a static entry point into your builders. I created a static Tags class that gives me access all the HtmlTag classes I need. This way I don't clutter my code with "new" keywords. eg. Tags.Div returns a new DivTag instance. 4) If you find yourself doing something a lot, create an extension method for it. I created a Nest() extension method that reads much more fluently than the AddChildren() method. It also accepts a params array of tags so I can very easily nest many children.   I hope you have found this post helpful. Join me in my war against HTML literals! I’ll have some more samples of how I use HtmlTags in future posts.
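Here is a rough sketch of the kind of helper and extension method described in the tips above. It assumes the open-source HtmlTags library (the HtmlTag class with its Attr/Text/Append methods) plus the standard MVC UrlHelper; LinkToAction and Nest are illustrative names of my own, not members of either library.

using System.Web.Mvc;
using HtmlTags;

public static class TagHelpers
{
    // Return the tag itself (not a string) so the view can keep chaining:
    //   Html.LinkToAction("Edit").Text("Edit").AddClass("navigation")
    public static HtmlTag LinkToAction(this HtmlHelper html, string action)
    {
        var url = new UrlHelper(html.ViewContext.RequestContext);
        return new HtmlTag("a")
            .Attr("href", url.Action(action))
            .Text(action);
    }

    // A Nest-style extension: append several children in one call
    public static HtmlTag Nest(this HtmlTag parent, params HtmlTag[] children)
    {
        foreach (var child in children)
            parent.Append(child);
        return parent;
    }
}

Because each method returns the tag, callers can still add classes or attributes after the helper runs, which is exactly what keeps the helper API itself small.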

    Read the article

  • Asynchronous Streaming in ASP.NET WebApi

    - by andresv
     Hi everyone, if you use the cool MVC4 WebApi you might encounter yourself in a common situation where you need to return a rather large amount of data (most probably from a database) and you want to accomplish two things: Use streaming so the client fetch the data as needed, and that directly correlates to more fetching in the server side (from our database, for example) without consuming large amounts of memory. Leverage the new MVC4 WebApi and .NET 4.5 async/await asynchronous execution model to free ASP.NET Threadpool threads (if possible).  So, #1 and #2 are not directly related to each other and we could implement our code fulfilling one or the other, or both. The main point about #1 is that we want our method to immediately return to the caller a stream, and that client side stream be represented by a server side stream that gets written (and its related database fetch) only when needed. In this case we would need some form of "state machine" that keeps running in the server and "knows" what is the next thing to fetch into the output stream when the client ask for more content. This technique is generally called a "continuation" and is nothing new in .NET, in fact using an IEnumerable<> interface and the "yield return" keyword does exactly that, so our first impulse might be to write our WebApi method more or less like this:           public IEnumerable<Metadata> Get([FromUri] int accountId)         {             // Execute the command and get a reader             using (var reader = GetMetadataListReader(accountId))             {                 // Read rows asynchronously, put data into buffer and write asynchronously                 while (reader.Read())                 {                     yield return MapRecord(reader);                 }             }         }   While the above method works, unfortunately it doesn't accomplish our objective of returning immediately to the caller, and that's because the MVC WebApi infrastructure doesn't yet recognize our intentions and when it finds an IEnumerable return value, enumerates it before returning to the client its values. To prove my point, I can code a test method that calls this method, for example:        [TestMethod]         public void StreamedDownload()         {             var baseUrl = @"http://localhost:57771/api/metadata/1";             var client = new HttpClient();             var sw = Stopwatch.StartNew();             var stream = client.GetStreamAsync(baseUrl).Result;             sw.Stop();             Debug.WriteLine("Elapsed time Call: {0}ms", sw.ElapsedMilliseconds); } So, I would expect the line "var stream = client.GetStreamAsync(baseUrl).Result" returns immediately without server-side fetching of all data in the database reader, and this didn't happened. To make the behavior more evident, you could insert a wait time (like Thread.Sleep(1000);) inside the "while" loop, and you will see that the client call (GetStreamAsync) is not going to return control after n seconds (being n == number of reader records being fetched).Ok, we know this doesn't work, and the question would be: is there a way to do it?Fortunately, YES!  and is not very difficult although a little more convoluted than our simple IEnumerable return value. 
Maybe in the future this scenario will be automatically detected and supported in MVC/WebApi.The solution to our needs is to use a very handy class named PushStreamContent and then our method signature needs to change to accommodate this, returning an HttpResponseMessage instead of our previously used IEnumerable<>. The final code will be something like this: public HttpResponseMessage Get([FromUri] int accountId)         {             HttpResponseMessage response = Request.CreateResponse();             // Create push content with a delegate that will get called when it is time to write out              // the response.             response.Content = new PushStreamContent(                 async (outputStream, httpContent, transportContext) =>                 {                     try                     {                         // Execute the command and get a reader                         using (var reader = GetMetadataListReader(accountId))                         {                             // Read rows asynchronously, put data into buffer and write asynchronously                             while (await reader.ReadAsync())                             {                                 var rec = MapRecord(reader);                                 var str = await JsonConvert.SerializeObjectAsync(rec);                                 var buffer = UTF8Encoding.UTF8.GetBytes(str);                                 // Write out data to output stream                                 await outputStream.WriteAsync(buffer, 0, buffer.Length);                             }                         }                     }                     catch(HttpException ex)                     {                         if (ex.ErrorCode == -2147023667) // The remote host closed the connection.                          {                             return;                         }                     }                     finally                     {                         // Close output stream as we are done                         outputStream.Close();                     }                 });             return response;         } As an extra bonus, all involved classes used already support async/await asynchronous execution model, so taking advantage of that was very easy. Please note that the PushStreamContent class receives in its constructor a lambda (specifically an Action) and we decorated our anonymous method with the async keyword (not a very well known technique but quite handy) so we can await over the I/O intensive calls we execute like reading from the database reader, serializing our entity and finally writing to the output stream.  Well, if we execute the test again we will immediately notice that the a line returns immediately and then the rest of the server code is executed only when the client reads through the obtained stream, therefore we get low memory usage and far greater scalability for our beloved application serving big chunks of data.Enjoy!Andrés.        
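As a small client-side sketch (plain .NET 4.5 HttpClient APIs, using the same sample URL as the earlier test), you can confirm the streaming behaviour by reading the response in chunks as the server flushes them instead of waiting for the whole payload; the buffer size and logging here are arbitrary choices.

using System.Diagnostics;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static class StreamingClientSample
{
    public static async Task ConsumeMetadataStreamAsync()
    {
        var baseUrl = @"http://localhost:57771/api/metadata/1";
        using (var client = new HttpClient())
        using (var stream = await client.GetStreamAsync(baseUrl))
        {
            var buffer = new byte[8192];
            int bytesRead;
            // Each ReadAsync completes as soon as the server has flushed more data,
            // so rows are processed while the database is still being read.
            while ((bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                var chunk = Encoding.UTF8.GetString(buffer, 0, bytesRead);
                Debug.WriteLine("Received {0} bytes: {1}", bytesRead, chunk);
            }
        }
    }
}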

    Read the article

  • Extreme Optimization – Numerical Algorithm Support

    - by JoshReuben
    Function Delegates Many calculations involve the repeated evaluation of one or more user-supplied functions eg Numerical integration. The EO MathLib provides delegate types for common function signatures and the FunctionFactory class can generate new delegates from existing ones. RealFunction delegate - takes one Double parameter – can encapsulate most of the static methods of the System.Math class, as well as the classes in the Extreme.Mathematics.SpecialFunctions namespace: var sin = new RealFunction(Math.Sin); var result = sin(1); BivariateRealFunction delegate - takes two Double parameters: var atan2 = new BivariateRealFunction (Math.Atan2); var result = atan2(1, 2); TrivariateRealFunction delegate – represents a function takes three Double arguments ParameterizedRealFunction delegate - represents a function taking one Integer and one Double argument that returns a real number. The Pow method implements such a function, but the arguments need order re-arrangement: static double Power(int exponent, double x) { return ElementaryFunctions.Pow(x, exponent); } ... var power = new ParameterizedRealFunction(Power); var result = power(6, 3.2); A ComplexFunction delegate - represents a function that takes an Extreme.Mathematics.DoubleComplex argument and also returns a complex number. MultivariateRealFunction delegate - represents a function that takes an Extreme.Mathematics.LinearAlgebra.Vector argument and returns a real number. MultivariateVectorFunction delegate - represents a function that takes a Vector argument and returns a Vector. FastMultivariateVectorFunction delegate - represents a function that takes an input Vector argument and an output Matrix argument – avoiding object construction  The FunctionFactory class RealFromBivariateRealFunction and RealFromParameterizedRealFunction helper methods - transform BivariateRealFunction or a ParameterizedRealFunction into a RealFunction delegate by fixing one of the arguments, and treating this as a new function of a single argument. var tenthPower = FunctionFactory.RealFromParameterizedRealFunction(power, 10); var result = tenthPower(x); Note: There is no direct way to do this programmatically in C# - in F# you have partial value functions where you supply a subset of the arguments (as a travelling closure) that the function expects. When you omit arguments, F# generates a new function that holds onto/remembers the arguments you passed in and "waits" for the other parameters to be supplied. let sumVals x y = x + y     let sumX = sumVals 10     // Note: no 2nd param supplied.     // sumX is a new function generated from partially applied sumVals.     // ie "sumX is a partial application of sumVals." let sum = sumX 20     // Invokes sumX, passing in expected int (parameter y from original)  val sumVals : int -> int -> int val sumX : (int -> int) val sum : int = 30 RealFunctionsToVectorFunction and RealFunctionsToFastVectorFunction helper methods - combines an array of delegates returning a real number or a vector into vector or matrix functions. The resulting vector function returns a vector whose components are the function values of the delegates in the array. 
var funcVector = FunctionFactory.RealFunctionsToVectorFunction(     new MultivariateRealFunction(myFunc1),     new MultivariateRealFunction(myFunc2));  The IterativeAlgorithm<T> abstract base class Iterative algorithms are common in numerical computing - a method is executed repeatedly until a certain condition is reached, approximating the result of a calculation with increasing accuracy until a certain threshold is reached. If the desired accuracy is achieved, the algorithm is said to converge. This base class is derived by many classes in the Extreme.Mathematics.EquationSolvers and Extreme.Mathematics.Optimization namespaces, as well as the ManagedIterativeAlgorithm class which contains a driver method that manages the iteration process.  The ConvergenceTest abstract base class This class is used to specify algorithm Termination , convergence and results - calculates an estimate for the error, and signals termination of the algorithm when the error is below a specified tolerance. Termination Criteria - specify the success condition as the difference between some quantity and its actual value is within a certain tolerance – 2 ways: absolute error - difference between the result and the actual value. relative error is the difference between the result and the actual value relative to the size of the result. Tolerance property - specify trade-off between accuracy and execution time. The lower the tolerance, the longer it will take for the algorithm to obtain a result within that tolerance. Most algorithms in the EO NumLib have a default value of MachineConstants.SqrtEpsilon - gives slightly less than 8 digits of accuracy. ConvergenceCriterion property - specify under what condition the algorithm is assumed to converge. Using the ConvergenceCriterion enum: WithinAbsoluteTolerance / WithinRelativeTolerance / WithinAnyTolerance / NumberOfIterations Active property - selectively ignore certain convergence tests Error property - returns the estimated error after a run MaxIterations / MaxEvaluations properties - Other Termination Criteria - If the algorithm cannot achieve the desired accuracy, the algorithm still has to end – according to an absolute boundary. Status property - indicates how the algorithm terminated - the AlgorithmStatus enum values:NoResult / Busy / Converged (ended normally - The desired accuracy has been achieved) / IterationLimitExceeded / EvaluationLimitExceeded / RoundOffError / BadFunction / Divergent / ConvergedToFalseSolution. After the iteration terminates, the Status should be inspected to verify that the algorithm terminated normally. Alternatively, you can set the ThrowExceptionOnFailure to true. Result property - returns the result of the algorithm. This property contains the best available estimate, even if the desired accuracy was not obtained. IterationsNeeded / EvaluationsNeeded properties - returns the number of iterations required to obtain the result, number of function evaluations.  Concrete Types of Convergence Test classes SimpleConvergenceTest class - test if a value is close to zero or very small compared to another value. VectorConvergenceTest class - test convergence of vectors. This class has two additional properties. The Norm property specifies which norm is to be used when calculating the size of the vector - the VectorConvergenceNorm enum values: EuclidianNorm / Maximum / SumOfAbsoluteValues. 
The ErrorMeasure property specifies how the error is to be measured – VectorConvergenceErrorMeasure enum values: Norm / Componentwise ConvergenceTestCollection class - represent a combination of tests. The Quantifier property is a ConvergenceTestQuantifier enum that specifies how the tests in the collection are to be combined: Any / All  The AlgorithmHelper Class inherits from IterativeAlgorithm<T> and exposes two methods for convergence testing. IsValueWithinTolerance<T> method - determines whether a value is close to another value to within an algorithm's requested tolerance. IsIntervalWithinTolerance<T> method - determines whether an interval is within an algorithm's requested tolerance.
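Circling back to the partial-application note in the Function Delegates section: while C# has no built-in partial application syntax, a lambda closure gives the same effect as FunctionFactory.RealFromParameterizedRealFunction. A minimal sketch, reusing the Power method and the delegate types from the examples above:

// Fix the integer argument of a ParameterizedRealFunction by capturing it in a closure.
ParameterizedRealFunction power = Power;      // static double Power(int exponent, double x)
RealFunction tenthPower = x => power(10, x);  // the lambda "remembers" the fixed exponent

// Behaves like FunctionFactory.RealFromParameterizedRealFunction(power, 10)
double result = tenthPower(3.2);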

    Read the article

  • Goto for the Java Programming Language

    - by darcy
Work on JDK 8 is well-underway, but we thought this late-breaking JEP for another language change for the platform couldn't wait another day before being published. Title: Goto for the Java Programming Language Author: Joseph D. Darcy Organization: Oracle. Created: 2012/04/01 Type: Feature State: Funded Exposure: Open Component: core/lang Scope: SE JSR: 901 MR Discussion: compiler dash dev at openjdk dot java dot net Start: 2012/Q2 Effort: XS Duration: S Template: 1.0 Reviewed-by: Duke Endorsed-by: Edsger Dijkstra Funded-by: Blue Sun Corporation Summary Provide the benefits of the time-tested goto control structure to Java programs. The Java language has a history of adding new control structures over time: the assert statement in 1.4, the enhanced for-loop in 1.5, and try-with-resources in 7. Having support for goto is long-overdue and simple to implement since the JVM already has goto instructions. Success Metrics The goto statement will allow inefficient and verbose recursive algorithms and explicit loops to be replaced with more compact code. The effort will be a success if at least twenty five percent of the JDK's explicit loops are replaced with goto's. Coordination with IDE vendors is expected to help facilitate this goal. Motivation The goto construct offers numerous benefits to the Java platform, from increased expressiveness, to more compact code, to providing new programming paradigms to appeal to a broader demographic. In JDK 8, there is a renewed focus on using the Java platform on embedded devices with more modest resources than desktop or server environments. In such contexts, static and dynamic memory footprint is a concern. One significant component of footprint is the code attribute of class files, and certain classes of important algorithms can be expressed more compactly using goto than using other constructs, saving footprint. For example, to implement state machines recursively, some parties have asked for the JVM to support tail calls, that is, to perform a complex transformation with security implications to turn a method call into a goto. Such complicated machinery should not be assumed for an embedded context. A better solution is just to expose to the programmer the desired functionality, goto. The web has familiarized users with a model of traversing links among different HTML pages in a free-form fashion with some state being maintained on the side, such as login credentials, to effect behavior. This is exactly the programming model of goto and code. While in the past this has been derided as leading to "spaghetti code," spaghetti is a tasty and nutritious meal for programmers, unlike quiche. The invokedynamic instruction added by JSR 292 exposes the JVM's linkage operation to programmers. This is a low-level operation that can be leveraged by sophisticated programmers. Likewise, goto is also a low-level operation that should not be hidden from programmers who can use more efficient idioms. Some may object that goto was consciously excluded from the original design of Java as one of the removed features from C and C++. However, the designers of the Java programming language have revisited these removals before. The enum construct was also left out only to be added in JDK 5, and multiple inheritance was left out, only to be added back by the virtual extension methods of Project Lambda.
As a living language, the needs of the growing Java community today should be used to judge what features are needed in the platform tomorrow; the language should not be forever bound by the decisions of the past. Description From its initial version, the JVM has had two instructions for unconditional transfer of control within a method, goto (0xa7) and goto_w (0xc8). The goto_w instruction is used for larger jumps. All versions of the Java language have supported labeled statements; however, only the break and continue statements were able to specify a particular label as a target with the onerous restriction that the label must be lexically enclosing. The grammar addition for the goto statement is: GotoStatement: goto Identifier ; The new goto statement similar to break except that the target label can be anywhere inside the method and the identifier is mandatory. The compiler simply translates the goto statement into one of the JVM goto instructions targeting the right offset in the method. Therefore, adding the goto statement to the platform is only a small effort since existing compiler and JVM functionality is reused. Other language changes to support goto include obvious updates to definite assignment analysis, reachability analysis, and exception analysis. Possible future extensions include a computed goto as found in gcc, which would replace the identifier in the goto statement with an expression having the type of a label. Testing Since goto will be implemented using largely existing facilities, only light levels of testing are needed. Impact Compatibility: Since goto is already a keyword, there are no source compatibility implications. Performance/scalability: Performance will improve with more compact code. JVMs already need to handle irreducible flow graphs since goto is a VM instruction.
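For context on the labeled statements the JEP builds on, here is a small, runnable sketch (not part of the JEP itself; the class and label names are invented) using a labeled break, which javac already lowers to the JVM goto instruction mentioned above.

    public class LabeledJumpDemo {
        public static void main(String[] args) {
            int[][] grid = { {1, 2, 3}, {4, -1, 6}, {7, 8, 9} };
            // 'search' labels the outer loop; 'break search' jumps past both loops,
            // and the compiler emits a goto instruction for that jump.
            search:
            for (int row = 0; row < grid.length; row++) {
                for (int col = 0; col < grid[row].length; col++) {
                    if (grid[row][col] < 0) {
                        System.out.println("negative value at " + row + "," + col);
                        break search;
                    }
                }
            }
            System.out.println("done");
        }
    }

Disassembling the compiled class with javap -c shows the goto opcodes the article refers to; the proposed statement would reuse exactly this bytecode while relaxing the lexically-enclosing restriction at the language level.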

    Read the article

  • CodePlex Daily Summary for Wednesday, July 31, 2013

Popular Releases
- SharePoint 2010 Export User Information to Text file, SQL server: Full source code of the project. Please change the SharePoint server address; I have used my SharePoint server -> http://sneakpreview
- WINJS CTK (Control Toolkit): Initial Release: The initial release supports the following: Expander, NumericBox.
- XMLPreprocess: 2.0.18: What's new in this release: for the XML Spreadsheet 2003 format, the frozen row at the top of the worksheet is used to indicate the beginning of the values. This prevents you from having to start your values at row 7, and can be overridden with the /firstValueRow (or /vr) argument. Issues fixed: Issue 13006, corrected default treatment of the value "false". Beginning with this version, the FixFalse behavior that has caused confusion to so many has hopefully been addressed in a way that st...
- MVVM.EventToCommand: Tommy.MVVM.EventToCommand: Tommy.MVVM.EventToCommand is a free, open-source, developer-focused event-to-command library via WinRT for the MVVM pattern.
- MVC Generator: MVC Generator Visual Studio Addin: This is the latest build; it includes MVCGenerator.dll and the Visual Studio add-in file. See the home page of this project for installation instructions.
- Project Nonnon: 2013_07_30: "No news is good news." Change log 2013/07/30: BUGFIX game/sound/directsound.c n_directsound_loop() OLD: crash when DirectSound is not supported, NEW: fixed; game/sound/waveout.c when included OLD: compile error, NEW: fixed; game/chara.c when included OLD: compile error, NEW: fixed; game/direct2d.c when included ...
- Dynamics CRM 2011 EasyPlugins: EasyPlugins-1.2.3.1-managed: V1.2.3.1 - bug fix: condition expression is not saved on action creation; bug fix: request param with single attribute result; some style changes. v1.2.3.0 - bug fix: plugins executed twice; new Abort action; depth plugin execution management on each action; turn the EasyPlugins feature on/off (can be useful in some cases of imports). v1.2.0.0 - Associate / Disassociate actions are now available; Import / Export features; better management of Lookups; Trigger Naming.
- XmlObjectMapper: XOM Alpha 1.0: First alpha version of XOM: read and write XML nodes and attributes, mapping to IList or interfaces; simple property-attribute mapping; creates new XML nodes at run time. IList changes (add, remove) will be implemented in the next release.
- nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 3.10: Highlighted features & improvements: performance optimization; new, more user-friendly product/product-variant logic (now there are only products, simple and grouped); bundle products support added; allow a store owner to associate a product image with product variant attribute values. To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
- ExtJS based ASP.NET Controls: FineUI v3.3.1: FineUI is an ASP.NET control library based on ExtJS, aiming at No JavaScript, No CSS, No UpdatePanel, No ViewState, No WebServices. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License v2.0 (note that ExtJS itself is under the GPL v3, http://www.sencha.com/license). Forum: http://fineui.com/bbs/, demo: http://fineui.com/demo/, documentation: http://fineui.com/doc/, source: http://fineui.codeplex.com/. FineUI does not bundle the ExtJS resources; download FineUI and ExtJS from http://fineui.com/bbs/fo...
- AutoNLayered - Domain Oriented N-Layered .NET 4.5: AutoNLayered v1.0.5: Fix DTOs, abstract collections replaced by concrete ones (correct WCF serialization); OrderBy in navigation properties; unit tests with Fakes; map of entities/DTOs moved to application services; libraries updated. Warning when using Fakes: http://connect.microsoft.com/VisualStudio/feedback/details/782031/visual-studio-2012-add-fakes-assembly-does-not-add-all-needed-references
- Path Copy Copy: 11.1: Minor release with two new features: the submenu's contextual menu item now has an icon next to it, and a reference to the JavaScript regular expression format was added to the Settings application. Since this release does not have any glaring bug fixes, it is more of an optional update for existing users. It depends on whether you want to be able to spot the Path Copy Copy submenu more easily. I recommend you install it to see if the icon makes sense. As always, please don't hesitate to leave feedback via Discus...
- CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.0 RC3: This is the third release candidate of CMake Tools for Visual Studio 1.0, which contains the following bug fixes: opening a CMake file from Windows Explorer while Visual Studio is already open will not start a new instance of Visual Studio; typing a symbol while the IntelliSense list box is visible and the text typed so far does not match any item in the list will dismiss the list box and insert the symbol typed.
- R.NET: R.NET 1.5: The major changes in v1.5 are: the Initialize method must be called before using R, and settings should be passed to that method; the EagerEvaluate method was renamed to Evaluate (use the Defer method when you want the old behavior of Evaluate).
- Media Companion: Media Companion MC3.574b: Some good bug fixes have been going on with the new XBMC-Link function. Thanks to all who were able to do testing and gave feedback. New: * Added some ad-hoc extra General movie filters, one of which is Plot = Outline (see fixes above). To see the filters, add the following line to your config.xml: <ShowExtraMovieFilters>True</ShowExtraMovieFilters>. The others are: Imdb in folder name, Imdb in not folder name & Imdb not in folder name & year mismatch. * Movie - display <tag> list on browser tab ...
- OfflineBrowser: Preview Release with Search: I've added search to this release.
- VG-Ripper & PG-Ripper: VG-Ripper 2.9.46: Changes: FIXED Login.
- Math.NET Numerics: Math.NET Numerics v2.6.0: What's new in Math.NET Numerics 2.6 - announcement, explanations and sample code. New: linear curve fitting - linear least-squares fitting (regression) to lines, polynomials and linear combinations of arbitrary functions; multi-dimensional fitting; also works well in F# with the F# extensions. New: root finding - Brent's method (~Candy Chiu, Alexander Täschner), bisection method (~Scott Stephens, Alexander Täschner), Broyden's method for multi-dimensional functions (~Alexander Täschner) ...
- AJAX Control Toolkit: July 2013 Release: AJAX Control Toolkit Release Notes - July 2013 release, version 7.0725. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (recommended). Notes: instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...
- MJP's DirectX 11 Samples: Specular Antialiasing Sample: Sample code to complement my presentation that's part of the Physically Based Shading in Theory and Practice course at SIGGRAPH 2013, entitled "Crafting a Next-Gen Material Pipeline for The Order: 1886". Demonstrates various methods of preventing aliasing from specular BRDFs when using high-frequency normal maps. The zip file contains source code as well as a pre-compiled x64 binary.

New Projects
- A Simple Java Encryption program: This is my very first Java project. It's a file encryptor; its purpose is to encode a file so that it looks like it contains random information.
- AA??
- Browser Chooser 2: Browser Chooser 2 is an updated fork of the original. Its primary goal is to simplify using multiple browsers.
- campuscloud-mobile: This project contains the mobile apps for the Campuscloud app.
- campuscloud-owncloudplugins: This project contains the ownCloud plugins for the CampusCloud apps, which extend an ownCloud server with additional functionality.
- campuscloud-windowsphone8: This project contains the Windows Phone 8 app for the Campuscloud app.
- Chronos .Net Performance Profiler: Free .NET performance profiler.
- ConvertServer: A pet project I work on.
- Development Tools for Solid Edge: Development tools for Solid Edge.
- DocAssist: The project is to facilitate a user's day-to-day management of her files in a customisable way on a Windows system equipped with the .NET Framework.
- epe: epe
- ETP 3: An ASP.NET MVC trial project for joint development in a TFS repository.
- Hello World in MVC4: Application to test.
- Invoke-MsTest PowerShell Module: A PowerShell module that makes unit testing Visual Studio projects fast and easy. Uses MsTest.exe to launch all tests associated with your project or solution.
- MarathonTP: MarathonTP (TP stands for Transport Protocol) is a lightweight communication protocol specifically developed for machine-to-machine communication.
- Multicopter Simulator: A simulation environment for multicopters built with Microsoft Robotics Developer Studio 4.
- MVC Rags: A bunch of ASP.NET MVC4 helpers.
- MVVM.EventToCommand: Tommy.MVVM.EventToCommand is a free, open-source, developer-focused event-to-command library via WinRT for the MVVM pattern.
- RNT.Common: rnt.common
- StorageOrizer: Software to manage disk space, e.g. move programs without damaging their function.
- Subset Sum Problem Solver: Subset
- ThirdPartyLogin
- twitterBootstrapAspNetMVCControls: ASP.NET MVC controls based on Twitter Bootstrap (a beautiful HTML & CSS framework published by Twitter).
- Unify: Unify is an automatic IoC container configurator, based on a layers approach via annotations.
- V32 Assembler: The assembler for my assembly language for my 32-bit virtual machine with a home-made instruction set.
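The Math.NET Numerics entry above lists Brent's, bisection, and Broyden's methods among its new root finders. Purely to illustrate the simplest of these, here is a generic textbook bisection sketch (not Math.NET's implementation; the names and tolerance are invented for the example).

    import java.util.function.DoubleUnaryOperator;

    public class BisectionDemo {
        // Find a root of f in [lo, hi], assuming f(lo) and f(hi) bracket a root.
        static double bisect(DoubleUnaryOperator f, double lo, double hi, double tol) {
            double flo = f.applyAsDouble(lo);
            if (flo * f.applyAsDouble(hi) > 0) {
                throw new IllegalArgumentException("f(lo) and f(hi) must have opposite signs");
            }
            while (hi - lo > tol) {
                double mid = 0.5 * (lo + hi);
                double fmid = f.applyAsDouble(mid);
                if (flo * fmid <= 0) {
                    hi = mid;          // root lies in [lo, mid]
                } else {
                    lo = mid;          // root lies in [mid, hi]
                    flo = fmid;
                }
            }
            return 0.5 * (lo + hi);
        }

        public static void main(String[] args) {
            // Root of x^2 - 2 on [1, 2] is sqrt(2).
            double root = bisect(x -> x * x - 2.0, 1.0, 2.0, 1e-12);
            System.out.println(root);
        }
    }

Bisection halves the bracketing interval on every iteration, so it converges slowly but unconditionally, which is why libraries typically pair it with faster methods such as Brent's.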

    Read the article

< Previous Page | 208 209 210 211 212 213 214 215 216 217 218 219  | Next Page >