Search Results

Search found 6342 results on 254 pages for 'behavior'.


  • What's New in ASP.NET 4

    - by Navaneeth
    The .NET Framework version 4 includes enhancements for ASP.NET 4 in targeted areas. Visual Studio 2010 and Microsoft Visual Web Developer Express also include enhancements and new features for improved Web development. This document provides an overview of many of the new features that are included in the upcoming release. This topic contains the following sections: ASP.NET Core Services, ASP.NET Web Forms, ASP.NET MVC, Dynamic Data, ASP.NET Chart Control, Visual Web Developer Enhancements, Web Application Deployment with Visual Studio 2010, and Enhancements to ASP.NET Multi-Targeting.

    ASP.NET Core Services

    ASP.NET 4 introduces many features that improve core ASP.NET services such as output caching and session state storage.

    Extensible Output Caching

    Since ASP.NET 1.0 was released, output caching has enabled developers to store the generated output of pages, controls, and HTTP responses in memory. On subsequent Web requests, ASP.NET can serve content more quickly by retrieving the generated output from memory instead of regenerating the output from scratch. However, this approach has a limitation: generated content always has to be stored in memory. On servers that experience heavy traffic, the memory requirements for output caching can compete with the memory requirements of other parts of a Web application.

    ASP.NET 4 adds extensibility to output caching that enables you to configure one or more custom output-cache providers. Output-cache providers can use any storage mechanism to persist HTML content, including local or remote disks, cloud storage, and distributed cache engines. Output-cache provider extensibility in ASP.NET 4 lets you design more aggressive and more intelligent output-caching strategies for Web sites. For example, you can create an output-cache provider that caches the "Top 10" pages of a site in memory while caching pages that get lower traffic on disk. Alternatively, you can cache every vary-by combination for a rendered page, but use a distributed cache so that the memory consumption is offloaded from front-end Web servers.

    You create a custom output-cache provider as a class that derives from the OutputCacheProvider type. You can then configure the provider in the Web.config file by using the new providers subsection of the outputCache element. For more information and for examples that show how to configure the output cache, see outputCache Element for caching (ASP.NET Settings Schema). For more information about the classes that support caching, see the documentation for the OutputCache and OutputCacheProvider classes.

    By default, in ASP.NET 4, all HTTP responses, rendered pages, and controls use the in-memory output cache; the default value of the defaultProvider attribute is AspNetInternalProvider. You can change the default output-cache provider used for a Web application by specifying a different provider name for the defaultProvider attribute. In addition, you can select different output-cache providers for individual controls and for individual requests, and programmatically specify which provider to use. For more information, see the HttpApplication.GetOutputCacheProviderName(HttpContext) method.
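    To make the extensibility point concrete, here is a minimal sketch of such a provider. The class name and the in-memory store are illustrative assumptions, not code from the documentation; a real provider would serialize entries to disk or a distributed cache instead.

    using System;
    using System.Collections.Concurrent;
    using System.Web.Caching;

    // Illustrative provider: keeps entries in a process-wide dictionary.
    // A real disk or distributed-cache provider would persist them externally.
    public class SampleOutputCacheProvider : OutputCacheProvider
    {
        private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> store =
            new ConcurrentDictionary<string, Tuple<object, DateTime>>();

        public override object Get(string key)
        {
            Tuple<object, DateTime> entry;
            if (store.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
                return entry.Item1;
            store.TryRemove(key, out entry);   // drop expired or missing entries
            return null;
        }

        public override object Add(string key, object entry, DateTime utcExpiry)
        {
            // Add must not overwrite an existing, unexpired entry.
            object existing = Get(key);
            if (existing != null)
                return existing;
            Set(key, entry, utcExpiry);
            return entry;
        }

        public override void Set(string key, object entry, DateTime utcExpiry)
        {
            store[key] = Tuple.Create(entry, utcExpiry);
        }

        public override void Remove(string key)
        {
            Tuple<object, DateTime> ignored;
            store.TryRemove(key, out ignored);
        }
    }

    A provider like this would then be registered in Web.config along these lines (the name and type strings are placeholders):

    <caching>
      <outputCache defaultProvider="AspNetInternalProvider">
        <providers>
          <add name="SampleCache" type="MyNamespace.SampleOutputCacheProvider, MyLibrary" />
        </providers>
      </outputCache>
    </caching>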
    The easiest way to choose a different output-cache provider for different Web user controls is to do so declaratively by using the new providerName attribute in a page or control directive, as shown in the following example: <%@ OutputCache Duration="60" VaryByParam="None" providerName="DiskCache" %>

    Preloading Web Applications

    Some Web applications must load large amounts of data or must perform expensive initialization processing before serving the first request. In earlier versions of ASP.NET, for these situations you had to devise custom approaches to "wake up" an ASP.NET application and then run initialization code in the Application_Start method in the Global.asax file. To address this scenario, a new application preload manager (autostart feature) is available when ASP.NET 4 runs on IIS 7.5 on Windows Server 2008 R2. The preload feature provides a controlled approach for starting up an application pool, initializing an ASP.NET application, and then accepting HTTP requests. It lets you perform expensive application initialization before processing the first HTTP request. For example, you can use the application preload manager to initialize an application and then signal a load balancer that the application is initialized and ready to accept HTTP traffic.

    To use the application preload manager, an IIS administrator sets an application pool in IIS 7.5 to be automatically started by using the following configuration in the applicationHost.config file:

    <applicationPools>
      <add name="MyApplicationPool" startMode="AlwaysRunning" />
    </applicationPools>

    Because a single application pool can contain multiple applications, you specify individual applications to be automatically started by using the following configuration in the applicationHost.config file:

    <sites>
      <site name="MySite" id="1">
        <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PrewarmMyCache">
          <!-- Additional content -->
        </application>
      </site>
    </sites>
    <!-- Additional content -->
    <serviceAutoStartProviders>
      <add name="PrewarmMyCache" type="MyNamespace.CustomInitialization, MyLibrary" />
    </serviceAutoStartProviders>

    When an IIS 7.5 server is cold-started or when an individual application pool is recycled, IIS 7.5 uses the information in the applicationHost.config file to determine which Web applications have to be automatically started. For each application that is marked for preload, IIS 7.5 sends a request to ASP.NET 4 to start the application in a state during which the application temporarily does not accept HTTP requests. When it is in this state, ASP.NET instantiates the type defined by the serviceAutoStartProvider attribute (as shown in the previous example) and calls into its public entry point. You create a managed preload type that has the required entry point by implementing the IProcessHostPreloadClient interface, as shown in the following example:

    public class CustomInitialization : System.Web.Hosting.IProcessHostPreloadClient
    {
        public void Preload(string[] parameters)
        {
            // Perform initialization.
        }
    }

    After your initialization code runs in the Preload method and the method returns, the ASP.NET application is ready to process requests.

    Permanently Redirecting a Page

    Content in Web applications is often moved over the lifetime of the application. This can cause links to become out of date, such as the links that are returned by search engines. In ASP.NET, developers have traditionally handled requests to old URLs by using the Redirect method to forward a request to the new URL.
    However, the Redirect method issues an HTTP 302 (Found) response, which is used for a temporary redirect. This results in an extra HTTP round trip. ASP.NET 4 adds a RedirectPermanent helper method that makes it easy to issue HTTP 301 (Moved Permanently) responses, as in the following example: RedirectPermanent("/newpath/foroldcontent.aspx"); Search engines and other user agents that recognize permanent redirects will store the new URL that is associated with the content, which eliminates the unnecessary round trip made by the browser for temporary redirects.

    Session State Compression

    By default, ASP.NET provides two options for storing session state across a Web farm. The first option is a session state provider that uses an out-of-process session state server. The second option is a session state provider that stores data in a Microsoft SQL Server database. Because both options store state information outside a Web application's worker process, session state has to be serialized before it is sent to remote storage. If a large amount of data is saved in session state, the size of the serialized data can become very large.

    ASP.NET 4 introduces a new compression option for both kinds of out-of-process session state providers. By using this option, applications that have spare CPU cycles on Web servers can achieve substantial reductions in the size of serialized session state data. You can set this option using the new compressionEnabled attribute of the sessionState element in the configuration file. When the compressionEnabled option is set to true, ASP.NET compresses (and decompresses) serialized session state by using the .NET Framework GZipStream class. The following example shows how to set this attribute:

    <sessionState
      mode="SqlServer"
      sqlConnectionString="data source=dbserver;Initial Catalog=aspnetstate"
      allowCustomSqlDatabase="true"
      compressionEnabled="true" />

    ASP.NET Web Forms

    Web Forms has been a core feature of ASP.NET since the release of ASP.NET 1.0. Many enhancements have been made in this area for ASP.NET 4, such as the following: the ability to set meta tags; more control over view state; support for recently introduced browsers and devices; easier ways to work with browser capabilities; support for using ASP.NET routing with Web Forms; more control over generated IDs; the ability to persist selected rows in data controls; more control over rendered HTML in the FormView and ListView controls; filtering support for data source controls; and enhanced support for Web standards and accessibility.

    Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties

    Two properties have been added to the Page class: MetaKeywords and MetaDescription. These two properties represent corresponding meta tags in the HTML rendered for a page, as shown in the following example:

    <head id="Head1" runat="server">
      <title>Untitled Page</title>
      <meta name="keywords" content="keyword1, keyword2" />
      <meta name="description" content="Description of my page" />
    </head>

    These two properties work like the Title property does, and they can be set in the @ Page directive. For more information, see Page.MetaKeywords and Page.MetaDescription.

    Enabling View State for Individual Controls

    A new property has been added to the Control class: ViewStateMode. You can use this property to disable view state for all controls on a page except those for which you explicitly enable view state; a short example follows.
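    A minimal sketch of that pattern (the control names are illustrative): disable view state at the page level, then re-enable it only for the control that needs it. ViewStateMode accepts Enabled, Disabled, and Inherit, and child controls inherit the page's setting by default.

    <%@ Page Language="C#" ViewStateMode="Disabled" %>

    <%-- Inherits Disabled from the page: no view state is stored for this label. --%>
    <asp:Label ID="StaticLabel" runat="server" Text="Static text" />

    <%-- Explicitly opts back in to view state. --%>
    <asp:Label ID="DynamicLabel" runat="server" ViewStateMode="Enabled" />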
    View state data is included in a page's HTML and increases the amount of time it takes to send a page to the client and post it back. Storing more view state than is necessary can cause a significant decrease in performance. In earlier versions of ASP.NET, you could reduce the impact of view state on a page's performance by disabling view state for specific controls. But sometimes it is easier to enable view state for a few controls that need it than to disable it for many that do not. For more information, see Control.ViewStateMode.

    Support for Recently Introduced Browsers and Devices

    ASP.NET includes a feature named browser capabilities that lets you determine the capabilities of the browser that a user is using. Browser capabilities are represented by the HttpBrowserCapabilities object, which is stored in the HttpRequest.Browser property. Information about a particular browser's capabilities is defined by a browser definition file. In ASP.NET 4, these browser definition files have been updated to contain information about recently introduced browsers and devices such as Google Chrome, Research In Motion BlackBerry smartphones, and the Apple iPhone. Existing browser definition files have also been updated. For more information, see How to: Upgrade an ASP.NET Web Application to ASP.NET 4 and ASP.NET Web Server Controls and Browser Capabilities. The browser definition files that are included with ASP.NET 4 are shown in the following list:

    • blackberry.browser
    • chrome.browser
    • Default.browser
    • firefox.browser
    • gateway.browser
    • generic.browser
    • ie.browser
    • iemobile.browser
    • iphone.browser
    • opera.browser
    • safari.browser

    A New Way to Define Browser Capabilities

    ASP.NET 4 includes a new feature referred to as browser capabilities providers. As the name suggests, this lets you build a provider that in turn lets you write custom code to determine browser capabilities. In ASP.NET version 3.5 Service Pack 1, you define browser capabilities in an XML file that resides in a machine-level folder or an application-level folder. Most developers do not need to customize these files, but for those who do, the provider approach can be easier than dealing with complex XML syntax. The provider approach makes it possible to simplify the process by implementing a common browser definition syntax, a database that contains up-to-date browser definitions, or even a Web service for such a database. For more information about the new browser capabilities provider, see the What's New for ASP.NET 4 White Paper.

    Routing in ASP.NET 4

    ASP.NET 4 adds built-in support for routing with Web Forms. Routing is a feature that was introduced with ASP.NET 3.5 SP1 and lets you configure an application to use URLs that are meaningful to users and to search engines because they do not have to specify physical file names. This can make your site more user-friendly and your site content more discoverable by search engines. For example, the URL for a page that displays product categories in your application might look like the following example: http://website/products.aspx?categoryid=12 By using routing, you can use the following URL to render the same information: http://website/products/software The second URL lets the user know what to expect and can result in significantly improved rankings in search engine results. The new features include the following: The PageRouteHandler class is a simple HTTP handler that you use when you define routes. You no longer have to write a custom route handler.
    The HttpRequest.RequestContext and Page.RouteData properties make it easier to access information that is passed in URL parameters. The RouteUrl expression provides a simple way to create a routed URL in markup. The RouteValue expression provides a simple way to extract URL parameter values in markup. The RouteParameter class makes it easier to pass URL parameter values to a query for a data source control (similar to FormParameter). You no longer have to change the Web.config file to enable routing. For more information about routing, see the following topics: ASP.NET Routing; Walkthrough: Using ASP.NET Routing in a Web Forms Application; How to: Define Routes for Web Forms Applications; How to: Construct URLs from Routes; How to: Access URL Parameters in a Routed Page.

    Setting Client IDs

    The new ClientIDMode property makes it easier to write client script that references HTML elements rendered for server controls. The increasing use of Microsoft Ajax makes the need to do this more common. For example, you may have a data control that renders a long list of products with prices, and you want to use client script to make a Web service call and update individual prices in the list as they change, without refreshing the entire page. Typically you get a reference to an HTML element in client script by using the document.getElementById method. You pass to this method the value of the id attribute of the HTML element you want to reference. In the case of elements that are rendered for ASP.NET server controls, earlier versions of ASP.NET could make this difficult or impossible: you were not always able to predict what id values ASP.NET would generate, or ASP.NET could generate very long id values. The problem was especially difficult for data controls that would generate multiple rows for a single instance of the control in your markup. ASP.NET 4 adds two new algorithms for generating id attributes. These algorithms can generate id attributes that are easier to work with in client script because they are more predictable and simpler. For more information about how to use the new algorithms, see the following topics: ASP.NET Web Server Control Identification; Walkthrough: Making Data-Bound Controls Easier to Access from JavaScript; Walkthrough: Making Controls Located in Web User Controls Easier to Access from JavaScript; How to: Access Controls from JavaScript by ID.

    Persisting Row Selection in Data Controls

    The GridView and ListView controls enable users to select a row. In previous versions of ASP.NET, row selection was based on the row index on the page. For example, if you select the third item on page 1 and then move to page 2, the third item on page 2 is selected. In most cases, it is more desirable not to select any rows on page 2. ASP.NET 4 supports persisted selection, a feature that was initially supported only in Dynamic Data projects in the .NET Framework 3.5 SP1. When this feature is enabled, the selected item is based on the row data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected. This is a much more natural behavior than the behavior in earlier versions of ASP.NET. Persisted selection is now supported for the GridView and ListView controls in all projects.
    You can enable this feature in the GridView control, for example, by setting the EnablePersistedSelection property, as shown in the following example:

    <asp:GridView id="GridView2" runat="server" EnablePersistedSelection="true">
    </asp:GridView>

    FormView Control Enhancements

    The FormView control has been enhanced to make it easier to style the content of the control with CSS. In previous versions of ASP.NET, the FormView control rendered its contents using an item template. This made styling more difficult in the markup because unexpected table row and table cell tags were rendered by the control. In ASP.NET 4 the FormView control supports a new RenderOuterTable property. When this property is set to false, as shown in the following example, the table tags are not rendered. This makes it easier to apply CSS styles to the contents of the control.

    <asp:FormView ID="FormView1" runat="server" RenderOuterTable="false">

    For more information, see FormView Web Server Control Overview.

    ListView Control Enhancements

    The ListView control, which was introduced in ASP.NET 3.5, has all the functionality of the GridView control while giving you complete control over the output. This control has been made easier to use in ASP.NET 4. The earlier version of the control required that you specify a layout template that contained a server control with a known ID. The following markup shows a typical example of how to use the ListView control in ASP.NET 3.5:

    <asp:ListView ID="ListView1" runat="server">
      <LayoutTemplate>
        <asp:PlaceHolder ID="ItemPlaceHolder" runat="server"></asp:PlaceHolder>
      </LayoutTemplate>
      <ItemTemplate>
        <%# Eval("LastName") %>
      </ItemTemplate>
    </asp:ListView>

    In ASP.NET 4, the ListView control does not require a layout template. The markup shown in the previous example can be replaced with the following markup:

    <asp:ListView ID="ListView1" runat="server">
      <ItemTemplate>
        <%# Eval("LastName") %>
      </ItemTemplate>
    </asp:ListView>

    For more information, see ListView Web Server Control Overview.

    Filtering Data with the QueryExtender Control

    A very common task for developers who create data-driven Web pages is filtering data. This has traditionally been done by building Where clauses in data source controls. This approach can be complicated, and in some cases the Where syntax does not let you take advantage of the full functionality of the underlying database. To make filtering easier, a new QueryExtender control has been added in ASP.NET 4. This control can be added to EntityDataSource or LinqDataSource controls in order to filter the data returned by these controls. The QueryExtender control relies on LINQ, but you do not need to know how to write LINQ queries to use it. The QueryExtender control supports the following filter options:

    SearchExpression - Searches a field or fields for string values and compares them to a specified string value.
    RangeExpression - Searches a field or fields for values in a range specified by a pair of values.
    PropertyExpression - Compares a specified value to a property value in a field. If the expression evaluates to true, the data that is being examined is returned.
    OrderByExpression - Sorts data by a specified column and sort direction.
    CustomExpression - Calls a function that defines a custom filter in the page.

    For more information, see QueryExtender Web Server Control Overview.
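    As a hedged illustration of the markup (the data source, field, and control names here are assumptions, not from the documentation), a SearchExpression wired to a TextBox might look like this:

    <asp:TextBox ID="SearchTextBox" runat="server" />

    <asp:LinqDataSource ID="ProductsDataSource" runat="server"
        ContextTypeName="MyApp.ProductsDataContext" TableName="Products" />

    <asp:QueryExtender ID="QueryExtender1" runat="server"
        TargetControlID="ProductsDataSource">
      <asp:SearchExpression SearchType="StartsWith" DataFields="ProductName">
        <asp:ControlParameter ControlID="SearchTextBox" />
      </asp:SearchExpression>
    </asp:QueryExtender>

    The query extender rewrites the data source's LINQ query to apply the filter, so the bound grid or list simply receives the narrowed result set.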
    Enhanced Support for Web Standards and Accessibility

    Earlier versions of ASP.NET controls sometimes render markup that does not conform to HTML, XHTML, or accessibility standards. ASP.NET 4 eliminates most of these exceptions. For details about how the HTML that is rendered by each control meets accessibility standards, see ASP.NET Controls and Accessibility.

    CSS for Controls that Can Be Disabled

    In ASP.NET 3.5, when a control is disabled (see WebControl.Enabled), a disabled attribute is added to the rendered HTML element. For example, the following markup creates a Label control that is disabled:

    <asp:Label id="Label1" runat="server" Text="Test" Enabled="false" />

    In ASP.NET 3.5, the previous control settings generate the following HTML:

    <span id="Label1" disabled="disabled">Test</span>

    In HTML 4.01, the disabled attribute is not considered valid on span elements. It is valid only on input elements because it specifies that they cannot be accessed. On display-only elements such as span elements, browsers typically support rendering for a disabled appearance, but a Web page that relies on this non-standard behavior is not robust according to accessibility standards. For display-only elements, you should use CSS to indicate a disabled visual appearance. Therefore, by default ASP.NET 4 generates the following HTML for the control settings shown previously:

    <span id="Label1" class="aspNetDisabled">Test</span>

    You can change the value of the class attribute that is rendered by default when a control is disabled by setting the DisabledCssClass property.

    CSS for Validation Controls

    In ASP.NET 3.5, validation controls render a default color of red as an inline style. For example, the following markup creates a RequiredFieldValidator control:

    <asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server"
      ErrorMessage="Required Field" ControlToValidate="RadioButtonList1" />

    ASP.NET 3.5 renders the following HTML for the validator control:

    <span id="RequiredFieldValidator1" style="color:Red;visibility:hidden;">RequiredFieldValidator</span>

    By default, ASP.NET 4 does not render an inline style to set the color to red. An inline style is used only to hide or show the validator, as shown in the following example:

    <span id="RequiredFieldValidator1" style="visibility:hidden;">RequiredFieldValidator</span>

    Therefore, ASP.NET 4 does not automatically show error messages in red. For information about how to use CSS to specify a visual style for a validation control, see Validating User Input in ASP.NET Web Pages.

    CSS for the Hidden Fields Div Element

    ASP.NET uses hidden fields to store state information such as view state and control state. These hidden fields are contained by a div element. In ASP.NET 3.5, this div element does not have a class attribute or an id attribute. Therefore, CSS rules that affect all div elements could unintentionally cause this div to be visible. To avoid this problem, ASP.NET 4 renders the div element for hidden fields with a CSS class that you can use to differentiate the hidden fields div from others. The new class value is shown in the following example:

    <div class="aspNetHidden">

    CSS for the Table, Image, and ImageButton Controls

    By default, in ASP.NET 3.5, some controls set the border attribute of rendered HTML to zero (0). The following example shows HTML that is generated by the Table control in ASP.NET 3.5:

    <table id="Table2" border="0">

    The Image control and the ImageButton control also do this.
    Because this attribute is not necessary and provides visual formatting information that should be provided by using CSS, it is not generated in ASP.NET 4.

    CSS for the UpdatePanel and UpdateProgress Controls

    In ASP.NET 3.5, the UpdatePanel and UpdateProgress controls do not support expando attributes. This makes it impossible to set a CSS class on the HTML elements that they render. In ASP.NET 4 these controls have been changed to accept expando attributes, as shown in the following example:

    <asp:UpdatePanel runat="server" class="myStyle">
    </asp:UpdatePanel>

    The following HTML is rendered for this markup:

    <div id="ctl00_MainContent_UpdatePanel1" class="myStyle">
    </div>

    Eliminating Unnecessary Outer Tables

    In ASP.NET 3.5, the HTML that is rendered for the following controls is wrapped in a table element whose purpose is to apply inline styles to the entire control: FormView, Login, PasswordRecovery, and ChangePassword. If you use templates to customize the appearance of these controls, you can specify CSS styles in the markup that you provide in the templates. In that case, no extra outer table is required. In ASP.NET 4, you can prevent the table from being rendered by setting the new RenderOuterTable property to false.

    Layout Templates for Wizard Controls

    In ASP.NET 3.5, the Wizard and CreateUserWizard controls generate an HTML table element that is used for visual formatting. In ASP.NET 4 you can use a LayoutTemplate element to specify the layout. If you do this, the HTML table element is not generated. In the template, you create placeholder controls to indicate where items should be dynamically inserted into the control. (This is similar to how the template model for the ListView control works.) For more information, see the Wizard.LayoutTemplate property.

    New HTML Formatting Options for the CheckBoxList and RadioButtonList Controls

    ASP.NET 3.5 uses HTML table elements to format the output for the CheckBoxList and RadioButtonList controls. To provide an alternative that does not use tables for visual formatting, ASP.NET 4 adds two new options to the RepeatLayout enumeration: UnorderedList, which causes the HTML output to be formatted by using ul and li elements instead of a table, and OrderedList, which causes the HTML output to be formatted by using ol and li elements instead of a table. For examples of HTML that is rendered for the new options, see the RepeatLayout enumeration.

    Header and Footer Elements for the Table Control

    In ASP.NET 3.5, the Table control can be configured to render thead and tfoot elements by setting the TableSection property of the TableHeaderRow class and the TableFooterRow class. In ASP.NET 4 these properties are set to the appropriate values by default.

    CSS and ARIA Support for the Menu Control

    In ASP.NET 3.5, the Menu control uses HTML table elements for visual formatting, and in some configurations it is not keyboard-accessible. ASP.NET 4 addresses these problems and improves accessibility in the following ways: the generated HTML is structured as an unordered list (ul and li elements); CSS is used for visual formatting; the menu behaves in accordance with ARIA standards for keyboard access, so you can use arrow keys to navigate menu items (for information about ARIA, see Accessibility in Visual Studio and ASP.NET); and ARIA role and property attributes are added to the generated HTML. (The attributes are added by using JavaScript instead of being included in the HTML, to avoid generating HTML that would cause markup validation errors.)
    Styles for the Menu control are rendered in a style block at the top of the page, instead of inline with the rendered HTML elements. If you want to use a separate CSS file so that you can modify the menu styles, you can set the Menu control's new IncludeStyleBlock property to false, in which case the style block is not generated.

    Valid XHTML for the HtmlForm Control

    In ASP.NET 3.5, the HtmlForm control (which is created implicitly by the <form runat="server"> tag) renders an HTML form element that has both name and id attributes. The name attribute is deprecated in XHTML 1.1. Therefore, this control does not render the name attribute in ASP.NET 4.

    Maintaining Backward Compatibility in Control Rendering

    An existing ASP.NET Web site might have code in it that assumes that controls render HTML the way they do in ASP.NET 3.5. To avoid causing backward compatibility problems when you upgrade the site to ASP.NET 4, you can have ASP.NET continue to generate HTML the way it does in ASP.NET 3.5 after you upgrade the site. To do so, set the controlRenderingCompatibilityVersion attribute of the pages element to "3.5" in the Web.config file of the ASP.NET 4 Web site, as shown in the following example:

    <system.web>
      <pages controlRenderingCompatibilityVersion="3.5"/>
    </system.web>

    If this setting is omitted, the default value is the same as the version of ASP.NET that the Web site targets. (For information about multi-targeting in ASP.NET, see .NET Framework Multi-Targeting for ASP.NET Web Projects.)

    ASP.NET MVC

    ASP.NET MVC helps Web developers build compelling standards-based Web sites that are easy to maintain because it decreases the dependency among application layers by using the Model-View-Controller (MVC) pattern. MVC provides complete control over the page markup. It also improves testability by inherently supporting test-driven development (TDD). Web sites created using ASP.NET MVC have a modular architecture. This allows members of a team to work independently on the various modules and can improve collaboration. For example, developers can work on the model and controller layers (data and logic) while designers work on the view (presentation). For tutorials, walkthroughs, conceptual content, code samples, and a complete API reference, see ASP.NET MVC 2.

    Dynamic Data

    Dynamic Data was introduced in the .NET Framework 3.5 SP1 release in mid-2008. This feature provides many enhancements for creating data-driven applications, such as the following: a RAD experience for quickly building a data-driven Web site; automatic validation that is based on constraints defined in the data model; and the ability to easily change the markup that is generated for fields in the GridView and DetailsView controls by using field templates that are part of your Dynamic Data project. For ASP.NET 4, Dynamic Data has been enhanced to give developers even more power for quickly building data-driven Web sites. For more information, see ASP.NET Dynamic Data Content Map.

    Enabling Dynamic Data for Individual Data-Bound Controls in Existing Web Applications

    You can use Dynamic Data features in existing ASP.NET Web applications that do not use scaffolding by enabling Dynamic Data for individual data-bound controls. Dynamic Data provides the presentation and data-layer support for rendering these controls. When you enable Dynamic Data for data-bound controls, you get the following benefits:

    • Setting default values for data fields.
    Dynamic Data enables you to provide default values at run time for fields in a data control.
    • Interacting with the database without creating and registering a data model.
    • Automatically validating the data that is entered by the user without writing any code.

    For more information, see Walkthrough: Enabling Dynamic Data in ASP.NET Data-Bound Controls.

    New Field Templates for URLs and E-mail Addresses

    ASP.NET 4 introduces two new built-in field templates, EmailAddress.ascx and Url.ascx. These templates are used for fields that are marked as EmailAddress or Url by using the DataTypeAttribute attribute. For EmailAddress fields, the value is displayed as a hyperlink that is created by using the mailto: protocol. When users click the link, it opens the user's e-mail client and creates a skeleton message. Fields typed as Url are displayed as ordinary hyperlinks. The following example shows how to mark fields:

    [DataType(DataType.EmailAddress)]
    public object HomeEmail { get; set; }

    [DataType(DataType.Url)]
    public object Website { get; set; }

    Creating Links with the DynamicHyperLink Control

    Dynamic Data uses the routing feature that was added in the .NET Framework 3.5 SP1 to control the URLs that users see when they access the Web site. The new DynamicHyperLink control makes it easy to build links to pages in a Dynamic Data site. For more information, see How to: Create Table Action Links in Dynamic Data.

    Support for Inheritance in the Data Model

    Both the ADO.NET Entity Framework and LINQ to SQL support inheritance in their data models. An example of this might be a database that has an InsurancePolicy table and also CarPolicy and HousePolicy tables that have the same fields as InsurancePolicy plus additional fields. Dynamic Data has been modified to understand inherited objects in the data model and to support scaffolding for the inherited tables. For more information, see Walkthrough: Mapping Table-per-Hierarchy Inheritance in Dynamic Data.

    Support for Many-to-Many Relationships (Entity Framework Only)

    The Entity Framework has rich support for many-to-many relationships between tables, which it implements by exposing the relationship as a collection on an entity object. New field templates (ManyToMany.ascx and ManyToMany_Edit.ascx) have been added to provide support for displaying and editing data that is involved in many-to-many relationships. For more information, see Working with Many-to-Many Data Relationships in Dynamic Data.

    New Attributes to Control Display and Support Enumerations

    The DisplayAttribute class has been added to give you additional control over how fields are displayed. The DisplayNameAttribute attribute in earlier versions of Dynamic Data enabled you to change the name that is used as a caption for a field. The new DisplayAttribute class lets you specify more options for displaying a field, such as the order in which a field is displayed and whether a field will be used as a filter. The attribute also provides independent control of the name that is used for the labels in a GridView control, the name that is used in a DetailsView control, the help text for the field, and the watermark used for the field (if the field accepts text input). The EnumDataTypeAttribute class has been added to let you map fields to enumerations. When you apply this attribute to a field, you specify an enumeration type. Dynamic Data uses the new Enumeration.ascx field template to create the UI for displaying and editing enumeration values.
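    As a hedged illustration (the enumeration, class, and property names are my own example, not from the documentation), the two attributes are typically applied through a "buddy" metadata class:

    using System.ComponentModel.DataAnnotations;

    public enum PolicyStatus
    {
        Active = 0,
        Lapsed = 1,
        Cancelled = 2
    }

    // Assumes InsurancePolicy is a generated partial entity class;
    // Dynamic Data reads the attributes from the metadata class below.
    [MetadataType(typeof(InsurancePolicyMetadata))]
    public partial class InsurancePolicy { }

    public class InsurancePolicyMetadata
    {
        // Display controls the caption, column order, and filter behavior;
        // EnumDataType maps the underlying integer column to the enumeration.
        [Display(Name = "Policy Status", Order = 2, AutoGenerateFilter = true)]
        [EnumDataType(typeof(PolicyStatus))]
        public object Status { get; set; }
    }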
    The template maps the values from the database to the names in the enumeration.

    Enhanced Support for Filters

    Dynamic Data 1.0 had built-in filters for Boolean columns and foreign-key columns, but the filters did not let you specify the order in which they were displayed. The new DisplayAttribute attribute addresses this by giving you control over whether a column appears as a filter and in what order it is displayed. An additional enhancement is that filtering support has been rewritten to use the new QueryExtender feature of Web Forms. This lets you create filters without requiring knowledge of the data source control that the filters will be used with. Along with these extensions, filters have also been turned into template controls, which lets you add new ones. Finally, the DisplayAttribute class mentioned earlier allows the default filter to be overridden, in the same way that UIHint allows the default field template for a column to be overridden. For more information, see Walkthrough: Filtering Rows in Tables That Have a Parent-Child Relationship and QueryableFilterRepeater.

    ASP.NET Chart Control

    The ASP.NET chart server control enables you to create ASP.NET pages that have simple, intuitive charts for complex statistical or financial analysis. The chart control supports the following features: data series, chart areas, axes, legends, labels, titles, and more; data binding; data manipulation such as copying, splitting, merging, alignment, grouping, sorting, searching, and filtering; statistical formulas and financial formulas; advanced chart appearance such as 3-D, anti-aliasing, lighting, and perspective; events and customizations; interactivity and Microsoft Ajax; and support for the Ajax Content Delivery Network (CDN), which provides an optimized way for you to add the Microsoft Ajax Library and jQuery scripts to your Web applications. For more information, see Chart Web Server Control Overview.

    Visual Web Developer Enhancements

    The following sections provide information about enhancements and new features in Visual Studio 2010 and Visual Web Developer Express. The Web page designer in Visual Studio 2010 has been enhanced for better CSS compatibility, includes additional support for HTML and ASP.NET markup snippets, and features a redesigned version of IntelliSense for JScript.

    Improved CSS Compatibility

    The Visual Web Developer designer in Visual Studio 2010 has been updated to improve CSS 2.1 standards compliance. The designer better preserves HTML source code and is more robust than in previous versions of Visual Studio.

    HTML and JScript Snippets

    In the HTML editor, IntelliSense auto-completes tag names. The IntelliSense Snippets feature auto-completes whole tags and more. In Visual Studio 2010, IntelliSense snippets are supported for JScript, alongside C# and Visual Basic, which were supported in earlier versions of Visual Studio. Visual Studio 2010 includes over 200 snippets that help you auto-complete common ASP.NET and HTML tags, including required attributes (such as runat="server") and common attributes specific to a tag (such as ID, DataSourceID, ControlToValidate, and Text). You can download additional snippets, or you can write your own snippets that encapsulate the blocks of markup that you or your team use for common tasks. For more information on HTML snippets, see Walkthrough: Using HTML Snippets.

    JScript IntelliSense Enhancements

    In Visual Studio 2010, JScript IntelliSense has been redesigned to provide an even richer editing experience.
    IntelliSense now recognizes objects that have been dynamically generated by methods such as registerNamespace and by similar techniques used by other JavaScript frameworks. Performance has been improved so that large libraries of script can be analyzed and IntelliSense displayed with little or no processing delay. Compatibility has been significantly increased to support almost all third-party libraries and diverse coding styles. Documentation comments are now parsed as you type and are immediately leveraged by IntelliSense.

    Web Application Deployment with Visual Studio 2010

    For Web application projects, Visual Studio now provides tools that work with the IIS Web Deployment Tool (Web Deploy) to automate many processes that had to be done manually in earlier versions of ASP.NET. For example, the following tasks can now be automated: creating an IIS application on the destination computer and configuring IIS settings; copying files to the destination computer; changing Web.config settings that must be different in the destination environment; and propagating changes to data or data structures in SQL Server databases that are used by the Web application. For more information about Web application deployment, see ASP.NET Deployment Content Map.

    Enhancements to ASP.NET Multi-Targeting

    ASP.NET 4 adds new features to multi-targeting that make it easier to work with projects that target earlier versions of the .NET Framework. Multi-targeting was introduced in ASP.NET 3.5 to enable you to use the latest version of Visual Studio without having to upgrade existing Web sites or Web services to the latest version of the .NET Framework. In Visual Studio 2008, when you work with a project targeted for an earlier version of the .NET Framework, most features of the development environment adapt to the targeted version; however, IntelliSense displays language features that are available in the current version, and property windows display properties available in the current version. In Visual Studio 2010, only language features and properties available in the targeted version of the .NET Framework are shown. (A quick illustration of where the target version is declared follows the topic list below.) For more information about multi-targeting, see the following topics: .NET Framework Multi-Targeting for ASP.NET Web Projects; ASP.NET Side-by-Side Execution Overview; How to: Host Web Applications That Use Different Versions of the .NET Framework on the Same Server; How to: Deploy Web Site Projects Targeted for Earlier Versions of the .NET Framework.
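    For reference, the version a Web project compiles against is recorded in its Web.config; a project that targets ASP.NET 4 carries a setting like the following (earlier targets specify their own version or omit the attribute):

    <system.web>
      <compilation targetFramework="4.0" />
    </system.web>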

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" blog series just posted by Ari Mozes, ODM Dev. Manager. In it, Ari provides detailed guidance on how to leverage various strengths of Oracle Data Mining, including the ability to:

    • mine star schemas and join tables and views together to obtain a complete 360-degree view of a customer
    • combine transactional data, e.g. call detail record (CDR) data, etc.
    • define complex data transformation, model build, and model deploy analytical methodologies inside the database

    His blog is posted in a multi-part series. Below are some opening excerpts for the first 3 blog entries. This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database. Many thanks Ari!

    Mining a Star Schema: Telco Churn Case Study (1 of 3)

    One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort. Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns. While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns. Both of these aspects - demographics and usage patterns - can provide insight into behavior. Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base. One case study [1] describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema. That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts...

    Mining a Star Schema: Telco Churn Case Study (2 of 3)

    This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data. Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.

    1) Handling missing values for call data records

    The CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff). For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer. This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values. Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem. In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero. In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries. The referenced case study takes the latter approach.
    The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

    select cust_id, cdre.tariff, cdre.month, mins
    from cdr_t cdr partition by (cust_id)
         right outer join
         (select distinct tariff, month from cdr_t) cdre
         on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

    ...

    Mining a Star Schema: Telco Churn Case Study (3 of 3)

    Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

    create or replace view churn_data_high as
    select * from churn_prep where value_band = 'HIGH';

    It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multivariate effects as would occur with the full-blown data mining algorithms, it can give a quick feel for the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

    begin
      dbms_predictive_analytics.explain(
        'churn_data_high', 'churn_m6', 'expl_churn_tab');
    end;
    /

    select * from expl_churn_tab where rank <= 5 order by rank;

    ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
    -------------------- ----------------- ----------------- ----
    LOS_BAND                                      .069167052    1
    MINS_PER_TARIFF_MON  PEAK-5                   .034881648    2
    REV_PER_MON          REV-5                    .034527798    3
    DROPPED_CALLS                                 .028110322    4
    MINS_PER_TARIFF_MON  PEAK-4                   .024698149    5

    From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest univariate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object-relational approach, many related predictors are included within a single top-level column.

    NOTE: These are just EXCERPTS. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.

    Read the article

  • Hosting WCF service in Windows Service

    - by DigiMortal
    When building Windows services we often need a way to communicate with them. The natural way to communicate with a service is to send signals to it, but this is very limited communication. Usually we need more powerful communication mechanisms with services. In this posting I will show you how to use a service-hosted WCF web service to communicate with a Windows service.

    Create Windows service

    Suppose you have a Windows service created and the service class is named MyWindowsService. This is a new service and all we have is the default code that Visual Studio generates.

    Create WCF service

    Add a reference to the System.ServiceModel assembly to the Windows service project and add a new interface called IMyService. This interface defines our service contracts.

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        string SayHello(int value);
    }

    We keep this service simple so it is easy for you to follow the code. Now let's add the service implementation:

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
    public class MyService : IMyService
    {
        public string SayHello(int value)
        {
            return string.Format("Hello, : {0}", value);
        }
    }

    With the ServiceBehavior attribute we say that we need only one instance of the WCF service to serve all requests. Usually this is more than enough for us.

    Hosting WCF service in Windows Service

    Now it's time to host our WCF service and make it available in the Windows service. Here is the code in my Windows service:

    public partial class MyWindowsService : ServiceBase
    {
        private ServiceHost _host;
        private MyService _server;

        public MyWindowsService()
        {
            InitializeComponent();
        }

        protected override void OnStart(string[] args)
        {
            _server = new MyService();
            _host = new ServiceHost(_server);
            _host.Open();
        }

        protected override void OnStop()
        {
            _host.Close();
        }
    }

    Our Windows service now hosts our WCF service. The WCF service becomes available when the Windows service starts and is taken down when the Windows service stops.

    Configuring WCF service

    To make the WCF service usable we need to configure it. Add an app.config file to your Windows service project and paste the following XML there:

    <system.serviceModel>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
      <services>
        <service name="MyWindowsService.MyService" behaviorConfiguration="def">
          <host>
            <baseAddresses>
              <add baseAddress="http://localhost:8732/MyService/"/>
            </baseAddresses>
          </host>
          <endpoint address="" binding="wsHttpBinding" contract="MyWindowsService.IMyService">
          </endpoint>
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
        </service>
      </services>
      <behaviors>
        <serviceBehaviors>
          <behavior name="def">
            <serviceMetadata httpGetEnabled="True"/>
            <serviceDebug includeExceptionDetailInFaults="True"/>
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>

    Now you are ready to test your service. Install the Windows service and start it. Open your browser and go to the following address: http://localhost:8732/MyService/ You should see your WCF service page now.

    Conclusion

    WCF is not only for Web applications. You can also use WCF as a self-hosted service. Windows services that lack good communication possibilities can be saved by using a self-hosted WCF service, as it is the best way to talk to the service.
    We can also turn the statement around and say that a Windows service is a good host for our WCF service.
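    To round out the example, here is a minimal client sketch (my addition, not part of the original posting) that calls the hosted service through a ChannelFactory, assuming the client references the assembly containing the IMyService contract and uses the base address configured above:

    using System;
    using System.ServiceModel;

    class TestClient
    {
        static void Main()
        {
            // Binding and address must match the service's app.config.
            var factory = new ChannelFactory<IMyService>(
                new WSHttpBinding(),
                new EndpointAddress("http://localhost:8732/MyService/"));

            IMyService client = factory.CreateChannel();
            Console.WriteLine(client.SayHello(5));   // prints "Hello, : 5"

            ((IClientChannel)client).Close();
            factory.Close();
        }
    }

    Alternatively, because the service publishes metadata over the mex endpoint, you can generate a proxy with Add Service Reference or svcutil.exe instead of sharing the contract assembly.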

    Read the article

  • Process.Start() and ShellExecute() fails with URLs on Windows 8

    - by Rick Strahl
    Since I installed Windows 8 I've noticed that a number of my applications appear to have problems opening URLs. That is, when I click on a link inside of a Windows application, either nothing happens or an error occurs. It's happening both to my own applications and a host of Windows applications I'm running. At first I thought this was an issue with my default browser (Chrome) but after switching the default browser to a few others and experimenting a bit I noticed that the errors occur - oddly enough - only when I run an application as an Administrator. I also tried switching to FireFox and Opera as my default browser and saw exactly the same behavior.

    The scenario for this is a bit bizarre:

    • Running on Windows 8
    • Call Process.Start() (or ShellExecute() in the Win32 API) with a URL or an HTML file
    • Run 'As Administrator' (works fine under a non-elevated user account!) or with UAC off
    • A browser other than Internet Explorer is set as your default Web browser

    Talk about a weird scenario: something that doesn't work when you run as an Administrator, which is supposed to have rights to everything on the system! Instead, running under an Admin account - either elevated with a User Account Control prompt or even when running as a full Administrator - fails. It appears that this problem does not occur for everyone, but when I looked for a solution to this, I saw quite a few posts in relation to it with no clear resolutions. I have three Windows 8 machines running here in the office and all three of them showed this behavior. Lest you think this is just a programmer's problem - this can affect any software running on your system that needs to run under administrative rights.

    Try it out

    Now, in order for this next example to fail, any browser but Internet Explorer has to be your default browser, and even then it may not fail depending on how you installed your browser. To see if this is a problem, create a small Console application and call Process.Start() with a URL in it:

    using System;
    using System.Diagnostics;
    using System.IO;

    namespace Win8ShellBugConsole
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Launching Url...");
                Process.Start("http://microsoft.com");

                Console.Write("Press any key to continue...");
                Console.ReadKey();

                Console.WriteLine("\r\n\r\nLaunching image...");
                Process.Start(Path.GetFullPath(@"..\..\sailbig.jpg"));

                Console.Write("Press any key to continue...");
                Console.ReadKey();
            }
        }
    }

    Compile this code. Then execute the code from Explorer (not from Visual Studio, because that may change the permissions). If you simply run the EXE and you're not running as an administrator, you'll see the Web page pop up in the browser as well as the image loading. Now run the same thing with Run As Administrator: this time you get a nice error as soon as Process.Start() is fired. The same happens if you are running with User Account Control off altogether - ie. you are running as a full admin account.

    Now if you comment out the URL in the code above and just fire the image display - that works just fine in any user mode. As does opening any other local file type, or even starting a new EXE locally (ie. Process.Start(@"c:\windows\notepad.exe")). All of that works, EXCEPT for URLs. The code above uses Process.Start() in .NET but the same happens in Win32 applications that use the ShellExecute API. In some of my older Fox apps ShellExecute returns an error code of 31 - which is No Shell Association found.

    What's the Deal?

    It turns out the problem has to do with the way browsers register themselves on Windows.
    Internet Explorer - being a built-in application in Windows 8 - apparently does this correctly, but other browsers possibly don't, or at least didn't at the time I installed them. Even though Chrome continually updates itself and a recent version apparently has this registration issue fixed, I was unable to simply set IE as my default browser and then use Chrome's 'Set as Default Browser' option. It still didn't work. Neither did using the Set Program Associations dialog, which lets you assign what extensions are mapped to by a given application. Each application provides a set of extension/moniker mappings that it supports, and this dialog lets you associate them on a system-wide basis. This also did not work for Chrome or any of the other browsers at first. However, after repeated retries I eventually did manage to get FireFox to work, but not any of the others.

    What Works? Reinstall the Browser

    In the end I decided on the hard-core pull-the-plug solution: totally uninstall and re-install Chrome in this case. And lo and behold, after the reinstall everything was working fine. Now even removing the association for Chrome, switching to IE as the default browser and then back to Chrome works. The version of Chrome I was running before uninstalling and reinstalling is the same as the one I'm running now, but after the reinstall it works. Of course I had to find out the hard way, before Richard commented with a note regarding what the issue is with Chrome at least: http://code.google.com/p/chromium/issues/detail?id=156400 As expected the issue is a registration issue - with keys not being registered at the machine level. Reading this I'm still not sure why this should be a problem - an elevated account still runs under the same user account (ie. I'm still rickstrahl even if I Run As Administrator), so why shouldn't an app be able to read my Current User registry hive? It also doesn't quite explain why registering the extensions from Chrome (using Set as Default Browser while running as Administrator) didn't work either. But in the end it works…

    Not so fast

    It's now a couple of days later and still there are some oddball problems, although this time they appear to be purely Chrome issues. After the reinstall Chrome seems to pop up properly with ShellExecute() calls both in regular user and Admin mode. However, it now looks like Chrome is actually running two completely separate user profiles for each. For example, when I run Visual Studio in Admin mode and go to View in Browser, Chrome complains that it was installed in Admin mode and can't launch (WTF?). Then you retry a few times later and it ends up working. When launched that way, some of the installed plug-ins don't show up, with the effect that sometimes they're visible and sometimes they're not. Also Chrome seems to lose my configuration and Google sign-in between sessions now, presumably when switching user modes. Add-ins installed in admin mode don't show up in user mode and vice versa. Ah, this is lovely. Did I mention that I freaking hate UAC precisely because of this kind of bullshit? You can never tell exactly what account your app is running under, and apparently apps also have a hard time trying to put data into the right place that works for both scenarios. And as my recent post on using Windows Live accounts shows, it's yet another level of abstraction on top of the underlying system identity that can cause all sorts of small side-effect headaches like this.
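    For completeness, a code-level fallback is possible if reinstalling the browser isn't an option: resolve the user's default browser from the registry yourself and start the executable directly, bypassing the shell association. This is my sketch, not Rick's workaround, and it assumes the UserChoice registry layout used by Windows 7 and 8; treat it as a starting point rather than a robust solution:

    using System;
    using System.Diagnostics;
    using Microsoft.Win32;

    static class BrowserLauncher
    {
        public static void OpenUrl(string url)
        {
            // The user's chosen browser is recorded as a ProgId under UserChoice.
            string progId = null;
            using (RegistryKey key = Registry.CurrentUser.OpenSubKey(
                @"Software\Microsoft\Windows\Shell\Associations\UrlAssociations\http\UserChoice"))
            {
                if (key != null)
                    progId = key.GetValue("ProgId") as string;
            }

            if (string.IsNullOrEmpty(progId))
            {
                Process.Start(url);   // fall back to normal shell behavior
                return;
            }

            // The ProgId's open command looks like: "C:\...\chrome.exe" -- "%1"
            using (RegistryKey cmdKey = Registry.ClassesRoot.OpenSubKey(
                progId + @"\shell\open\command"))
            {
                string command = cmdKey != null ? cmdKey.GetValue(null) as string : null;
                if (string.IsNullOrEmpty(command))
                {
                    Process.Start(url);
                    return;
                }

                // Crude parse: take the quoted executable, pass the URL as its argument.
                string exe = command.StartsWith("\"")
                    ? command.Substring(1, command.IndexOf('"', 1) - 1)
                    : command.Split(' ')[0];
                Process.Start(exe, "\"" + url + "\"");
            }
        }
    }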
    Hopefully, most of you are skirting this issue altogether - having installed more recent versions of your favorite browsers. If not, hopefully this post will take you straight to reinstallation to fix this annoying issue.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Windows  .NET

    Read the article

  • SQL University: Database testing and refactoring tools and examples

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Database testing and refactoring tools and examples. This is the third and last part of the series, and in it we'll take a look at tools we can test and refactor with, plus an example of both.

Tools of the trade

First a few thoughts about how to go about testing a database. I'm firmly against any testing tools that go into the database itself or need an extra database. Unit tests for the database and applications using the database should all be in one place, using the same technology. By using database-specific frameworks we fragment our tests into many places and increase test-system complexity. Let's take a look at some testing tools.

1. NUnit, xUnit, MbUnit
All three are .NET testing frameworks meant to unit test .NET applications. But we can test databases with them just fine. I use NUnit because I've always used it for work and personal projects. One day this might change, so the thing to remember is to be flexible if something better comes along. All three are quite similar and you should be able to switch between them without much problem.

2. TSQLUnit
As much as this framework is helpful for the non-C#-savvy folks, I don't like it for the reason I stated above: it lives in the database and thus fragments the testing infrastructure. Also it appears that it's not being actively developed anymore.

3. DbFit
I haven't had the pleasure of trying this tool just yet, but it's on my to-do list. From what I've read and heard, Gojko Adzic (@gojkoadzic on Twitter) has done a remarkable job with it.

4. Redgate SQL Refactor and Apex SQL Refactor
Neither of these refactoring tools is free; however, if you have hardcore refactoring planned they are worthwhile looking into. I've only used Red Gate's Refactor and was quite impressed with it.

5. Reverting the database state
I've talked before about ways to revert a database to its pre-test state after unit testing. This still holds and I haven't changed my mind. Also make sure to read the comments, as they are quite informative. I especially like the idea of setting up and tearing down the schema for each test group with NHibernate (a minimal sketch of the rollback idea follows below).

Testing and refactoring example

We'll take a look at a simple schema and data test for a view, and at refactoring the SELECT * in that view. We'll use a single table PhoneNumbers with ID and Phone columns. Then we'll refactor the Phone column into 3 columns: Prefix, Number and Suffix. Lastly we'll remove the original Phone column. Then we'll check how the view behaves with tests in NUnit. The comments in code explain the problem, so be sure to read them. I'm assuming you know NUnit and C#.
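Before the example, here's one way point 5 can look in practice. This is a hedged sketch of my own, not code from the series: it wraps each NUnit test in a TransactionScope that is never completed, so whatever the test does to the database is rolled back in teardown (connections opened inside the scope enlist in the ambient transaction automatically):

using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class DatabaseTestBase
{
    private TransactionScope _scope;

    [SetUp]
    public void SetUp()
    {
        // every command the test runs enlists in this ambient transaction
        _scope = new TransactionScope();
    }

    [TearDown]
    public void TearDown()
    {
        // Complete() is never called, so disposing rolls everything back,
        // returning the database to its pre-test state
        _scope.Dispose();
    }
}

The rollback approach is usually faster than rebuilding the schema per test group, but it assumes the code under test doesn't commit on its own. Now, on to the example.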
T-SQL code:

USE tempdb
GO
CREATE TABLE PhoneNumbers
(
    ID INT IDENTITY(1,1),
    Phone VARCHAR(20)
)
GO
INSERT INTO PhoneNumbers(Phone)
SELECT '111 222333 444' UNION ALL
SELECT '555 666777 888'
GO
-- notice we don't have WITH SCHEMABINDING
CREATE VIEW vPhoneNumbers
AS
    SELECT * FROM PhoneNumbers
GO
-- Let's take a look at what the view returns.
-- If we add new columns and rows, both tests will fail
SELECT * FROM vPhoneNumbers
GO
-- DoesViewReturnCorrectColumns test will SUCCEED
-- DoesViewReturnCorrectData test will SUCCEED

-- refactor to split the Phone column into 3 parts
ALTER TABLE PhoneNumbers ADD Prefix VARCHAR(3)
ALTER TABLE PhoneNumbers ADD Number VARCHAR(6)
ALTER TABLE PhoneNumbers ADD Suffix VARCHAR(3)
GO
-- update the new columns
UPDATE PhoneNumbers SET
    Prefix = LEFT(Phone, 3),
    Number = SUBSTRING(Phone, 5, 6),
    Suffix = RIGHT(Phone, 3)
GO
-- remove the old column
ALTER TABLE PhoneNumbers DROP COLUMN Phone
GO
-- This returns unexpected results!
-- It returns 2 columns, ID and Phone, even though
-- we don't have a Phone column anymore.
-- Notice that the data is from the Prefix column.
-- This is a danger of SELECT *
SELECT * FROM vPhoneNumbers
-- DoesViewReturnCorrectColumns test will SUCCEED
-- DoesViewReturnCorrectData test will FAIL

-- for a fix we have to call sp_refreshview
-- to refresh the view definition
EXEC sp_refreshview 'vPhoneNumbers'
-- After the refresh the view returns 4 columns.
-- This breaks the input/output behavior of the database,
-- which refactoring MUST NOT do
SELECT * FROM vPhoneNumbers
-- DoesViewReturnCorrectColumns test will FAIL
-- DoesViewReturnCorrectData test will FAIL

-- to fix the input/output behavior change problem
-- we have to concat the 3 columns into one named Phone
ALTER VIEW vPhoneNumbers
AS
SELECT ID,
       Prefix + ' ' + Number + ' ' + Suffix AS Phone
FROM PhoneNumbers
GO
-- now it works as expected
SELECT * FROM vPhoneNumbers
-- DoesViewReturnCorrectColumns test will SUCCEED
-- DoesViewReturnCorrectData test will SUCCEED

-- clean up
DROP VIEW vPhoneNumbers
DROP TABLE PhoneNumbers

C# test code:

[Test]
public void DoesViewReturnCorrectColumns()
{
    // conn is a valid SqlConnection to the server's tempdb.
    // Note the SET FMTONLY ON, with which we return only schema and no data.
    using (SqlCommand cmd = new SqlCommand("SET FMTONLY ON; SELECT * FROM vPhoneNumbers", conn))
    {
        DataTable dt = new DataTable();
        dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));

        // test returned schema: number of columns, column names and data types
        Assert.AreEqual(dt.Columns.Count, 2);
        Assert.AreEqual(dt.Columns[0].Caption, "ID");
        Assert.AreEqual(dt.Columns[0].DataType, typeof(int));
        Assert.AreEqual(dt.Columns[1].Caption, "Phone");
        Assert.AreEqual(dt.Columns[1].DataType, typeof(string));
    }
}

[Test]
public void DoesViewReturnCorrectData()
{
    // conn is a valid SqlConnection to the server's tempdb
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM vPhoneNumbers", conn))
    {
        DataTable dt = new DataTable();
        dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));

        // test returned data: number of rows and their values
        Assert.AreEqual(dt.Rows.Count, 2);
        Assert.AreEqual(dt.Rows[0]["ID"], 1);
        Assert.AreEqual(dt.Rows[0]["Phone"], "111 222333 444");
        Assert.AreEqual(dt.Rows[1]["ID"], 2);
        Assert.AreEqual(dt.Rows[1]["Phone"], "555 666777 888");
    }
}

With this simple example we've seen how a very simple schema can cause a lot of problems in the whole application/database system if it doesn't have tests. Imagine what would happen if some outside process depended on that view.
It would get wrong data and propagate it silently throughout the system. And that is not good. So have tests at least for the crucial parts of your systems. And with that we conclude the Database Testing and Refactoring week at SQL University. Hope you learned something new and enjoy the learning weeks to come. Have fun!

    Read the article

  • Physical Directories vs. MVC View Paths

    - by Rick Strahl
    This post falls into the bucket of operator error on my part, but I want to share it anyway because it describes an issue that has bitten me a few times now, and writing it down might keep it a little stronger in my mind. I've been working on an MVC project the last few days, and at the end of a long day I accidentally moved one of my View folders from the MVC root folder to the project root. It must have been at the very end of the day before shutting down, because tests and manual site navigation worked fine just before I quit for the night. I checked in changes and called it a night.

Next day I came back, started running the app and had a lot of breaks with certain views. Oddly, custom routes to these controllers/views worked, but stock /{controller}/{action} routes would not. After a bit of spelunking I realized that "Hey, one of my View folders is missing", which made some sense given the error messages I got. I looked in the recycle bin - nothing there - so rather than try to figure out what the hell happened, I just restored from my last SVN checkin. At this point the folders are back… but… view access still ends up breaking for this set of views. Specifically I'm getting the Yellow Screen of Death with:

CS0103: The name 'model' does not exist in the current context

Here's the full error:

Server Error in '/ClassifiedsWeb' Application.
Compilation Error
Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately.
Compiler Error Message: CS0103: The name 'model' does not exist in the current context
Source Error:
Line 1: @model ClassifiedsWeb.EntryViewModel
Line 2: @{
Line 3:     ViewBag.Title = Model.Entry.Title + " - " + ClassifiedsBusiness.App.Configuration.ApplicationName;
Source File: c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Classifieds\Show.cshtml    Line: 1
Version Information: Microsoft .NET Framework Version: 4.0.30319; ASP.NET Version: 4.0.30319.272

Here's what's really odd about this error: the views now do exist in the /Views/Classifieds folder of the project, but it appears like MVC is trying to execute the views directly. This is getting pretty weird, man! So I hook up some breakpoints in my controllers to see if my controller actions are getting fired - and sure enough it turns out they are not - but only for those views that were previously 'lost' and then restored from SVN. WTF? At this point I'm thinking that I must have messed up one of the config files, but after some more spelunking and realizing that all the other controller views work, I give up that idea. Config's gotta be OK if other controllers and views are working.

Root Folders and MVC Views don't mix

As I mentioned, the problem was the fact that I inadvertently managed to drag my View folder to the root folder of the project. Here's what this looks like in my FUBAR'd project structure after I copied the /Views/Classifieds folder back from SVN: there's the actual folder in the /Views folder, and the accidental copy that sits at the root. I of course did not notice the /Classifieds folder at the root because it was excluded and didn't show up in the project. Now, before you call me a complete idiot, remember that this happened by accident - an accidental drag probably just before shutting down for the night. :-)

So why does this break?
MVC should be happy with views in the /Views/Classifieds folder, right? While MVC might be happy, IIS is not. The fact that there is a physical folder on disk takes precedence over MVC's routing. In other words, if a URL exists that matches a route, the physical path is accessed first. What happens here is that essentially IIS is trying to execute the .cshtml pages directly, without ever routing to the controller methods. In the error page I showed above my clue should have been that the view was served as:

c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Classifieds\Show.cshtml

rather than

c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Views\Classifieds\Show.cshtml

But of course I didn't notice that right away, just skimming to the end and looking at the file name. The reason that /classifieds/list actually fires that file is that the ASP.NET Web Pages engine looks for physical files on disk that match a path. IOW, when calling Web Pages you drop the .cshtml off the Razor page and IIS will serve that just fine. So /classifieds/list looks for /classifieds/list.cshtml and executes that script. And that is exactly what's happening. Web Pages is trying to execute the .cshtml file, and it fails because Web Pages knows nothing about the @model tag, which is an MVC-specific template extension. This is why my breakpoints in the controller methods didn't fire, and it also explains why the error mentions that the @model keyword is invalid (@model is an MVC-provided template enhancement to the Razor engine). The solution of course is super simple: delete the accidentally created root folder and the problem is solved.

Routing and Physical Paths

I've run into problems with this before, actually. In the past I've had a number of applications with a physical /Admin folder which would also conflict with an MVC Admin controller. More than once I ended up wondering why the index route (/Admin/) was not working properly. If a physical /Admin folder exists, /Admin will not route to the Index action (or whatever default action you have set up), but will instead try to list the directory or show the default document in the folder. The only way to force the index page through MVC is to explicitly use /Admin/Index. Makes perfect sense once you realize the physical folder is there, but that's easy to forget in an MVC application. As you might imagine, after a few times of running into this I gave up on the Admin folder and moved everything into MVC views to handle those operations. Still, it's one of those things that can easily bite you, because the behavior and error messages seem to point at completely different problems. Moral of the story is: if you see routing problems where routes are not reaching obvious controller methods, always check to make sure there isn't a physical path being mapped by IIS instead.
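One more option worth knowing about - a hedged aside of my own rather than something from the post: ASP.NET routing can be told to evaluate routes before physical files, which lets a physical folder and a same-named controller coexist. Roughly:

using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // When true, routes win over physical files/folders, so /Admin
        // hits the AdminController even if an /Admin folder exists on disk.
        routes.RouteExistingFiles = true;

        // Static content you still want IIS to serve must then be excluded
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        routes.IgnoreRoute("Content/{*pathInfo}");

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index",
                            id = UrlParameter.Optional });
    }
}

The trade-off is that every piece of static content needs an explicit ignore rule, so the cleaner fix here is still to hunt down and delete the stray folder first.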
That way you won't feel stupid like I did after trying a million things for about an hour before discovering my sloppy mousing behavior :-)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in MVC   IIS7

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay is doing the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating with ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers.

Part 1: Exposing the on-premise service

In theory, this is ultra-straightforward. In practice, on a dev laptop it is - but in a corporate network with firewalls and proxies it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal, which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1.

Configuring Azure Service Bus

Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you're in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service.

Authenticating and authorizing to ACS

When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions, so a dedicated service account is better (Neil Mackenzie has a good post, On Not Using owner with the Azure AppFabric Service Bus, with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description. Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly.

The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, in the output we'll have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group.
Edit the rule group and click Add to add this new rule. The values to use are:

Issuer: Access Control Service
Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
Input claim value: serviceProvider
Output claim type: net.windows.servicebus.action
Output claim value: Listen

When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.:

<azure namespace="sixeyed-ipasbr">
  <!-- ACS credentials for the listening service (Part1):-->
  <service identityName="serviceProvider"
           symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
</azure>

Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:

<behavior name="SharedSecret">
  <transportClientEndpointBehavior credentialType="SharedSecret">
    <clientCredentials>
      <sharedSecret issuerName="serviceProvider"
                    issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
    </clientCredentials>
  </transportClientEndpointBehavior>
</behavior>

and your service namespace in the Azure endpoint:

<!-- Azure Service Bus endpoints -->
<endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
          binding="netTcpRelayBinding"
          contract="Sixeyed.Ipasbr.Services.IFormatService"
          behaviorConfiguration="SharedSecret">
</endpoint>

The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure.

Testing the service locally

As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:

<!-- local REST endpoint for internal use -->
<endpoint address="rest"
          binding="webHttpBinding"
          behaviorConfiguration="RESTBehavior"
          contract="Sixeyed.Ipasbr.Services.IFormatService" />

Build the service, then navigate to:

http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123

and you should see the reversed string response. If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? Onto the next post for consuming the service with the netTcpRelayBinding.

Setting up network access to Azure

But if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, the relay will run through them. If not, the binding steps down to standard HTTP and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay.
If your network security guys are doing their job, the first option will be blocked by the firewall and the second option will be blocked by the proxy, so you'll get this error:

System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443)

and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark:

System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol

This means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can't make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints).

System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline.

By this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft's ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to:

www.public-trust.com
mscrl.microsoft.com

We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy:

netsh winhttp set proxy proxy-server="http://proxy.x.y.z"

and you might need to run this to clear out your credential cache:

certutil -urlcache * delete

If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.
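For completeness, the config-driven relay endpoint above can also be registered in code, which is handy when you're iterating on firewall and proxy rules from a console harness. This is a hedged sketch assuming the 1.x-era Microsoft.ServiceBus SDK; the key is a placeholder, and the type names come from the sample solution:

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

class RelayHost
{
    static void Main()
    {
        var host = new ServiceHost(typeof(Sixeyed.Ipasbr.Services.FormatService));

        // equivalent of the netTcpRelayBinding endpoint in Web.config
        var endpoint = host.AddServiceEndpoint(
            typeof(Sixeyed.Ipasbr.Services.IFormatService),
            new NetTcpRelayBinding(),
            ServiceBusEnvironment.CreateServiceUri("sb", "sixeyed-ipasbr", "net"));

        // same shared-secret identity as the transportClientEndpointBehavior
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
                "serviceProvider", "<your-symmetric-key>")
        });

        host.Open(); // this is where the TCP 9350-9354 / HTTP fallback dance happens
        Console.WriteLine("Listening on the relay; press Enter to quit.");
        Console.ReadLine();
        host.Close();
    }
}

Opening the host from a console box in the DMZ makes it much quicker to see which of the connection errors above you're hitting.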

    Read the article

  • CLR Version issues with CorBindToRuntimeEx

    - by Rick Strahl
    I'm working on an older FoxPro application that's using .NET interop, and this app loads its own copy of the .NET runtime through some of our own tools (wwDotNetBridge). This all works fine, and it's fairly straightforward to load and host the runtime and then make calls against it. I'm writing this up for myself mostly because I've been bitten by these issues repeatedly and spend 15 minutes each time retracing my steps. However, things get tricky when calling specific versions of the .NET runtime since .NET 4.0 has shipped. Basically we need to be able to support both .NET 2.0 and 4.0, and we're currently doing it with the same assembly - a .NET 2.0 assembly that is the AppDomain entry point. This works, as .NET 4.0 can easily host .NET 2.0 assemblies, and the functionality in the 2.0 assembly provides all the features we need to call .NET 4.0 assemblies via Reflection. In wwDotnetBridge we provide a load flag that allows specification of the runtime version to use. Something like this:

do wwDotNetBridge
LOCAL loBridge as wwDotNetBridge
loBridge = CreateObject("wwDotNetBridge","v4.0.30319")

and this works just fine in most cases. If I specify V4 internally, that gets fixed up to a whole version number like "v4.0.30319", which is then actually used to host the .NET runtime. Specifically, the ClrVersion setting is handled in this Win32 DLL code that handles loading the runtime for me:

/// Starts up the CLR and creates a Default AppDomain
DWORD WINAPI ClrLoad(char *ErrorMessage, DWORD *dwErrorSize)
{
    if (spDefAppDomain)
        return 1;

    // Retrieve a pointer to the ICorRuntimeHost interface
    HRESULT hr = CorBindToRuntimeEx(
                    ClrVersion,  // Retrieve latest version by default
                    L"wks",      // Request a WorkStation build of the CLR
                    STARTUP_LOADER_OPTIMIZATION_MULTI_DOMAIN | STARTUP_CONCURRENT_GC,
                    CLSID_CorRuntimeHost,
                    IID_ICorRuntimeHost,
                    (void**)&spRuntimeHost);

    if (FAILED(hr))
    {
        *dwErrorSize = SetError(hr, ErrorMessage);
        return hr;
    }

    // Start the CLR
    hr = spRuntimeHost->Start();
    if (FAILED(hr))
        return hr;

    CComPtr<IUnknown> pUnk;
    WCHAR domainId[50];
    swprintf(domainId, L"%s_%i", L"wwDotNetBridge", GetTickCount());
    hr = spRuntimeHost->CreateDomain(domainId, NULL, &pUnk);

    hr = pUnk->QueryInterface(&spDefAppDomain.p);
    if (FAILED(hr))
        return hr;

    return 1;
}

CorBindToRuntimeEx allows a specific .NET version string to be supplied, which is what I'm doing via an API call from the FoxPro code. The behavior of CorBindToRuntimeEx is a bit finicky, however. The documentation states that NULL should load the latest version of the .NET runtime available on the machine - but it actually doesn't. As far as I can see - regardless of runtime overrides, even in the .config file - NULL will always load .NET 2.0, even if 4.0 is installed.

<supportedRuntime> .config File Settings

Things get even more unpredictable once you start adding runtime overrides into the application's .config file. In my scenario, working inside of Visual FoxPro, this would be VFP9.exe.config in the FoxPro installation folder (not the current folder). If I have a specific runtime override in the .config file like this:

<?xml version="1.0"?>
<configuration>
  <startup>
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>

then, not surprisingly, I can load a .NET 2.0 runtime, but I will not be able to load version 4.0 of the .NET runtime even if I explicitly specify it in my call to ClrLoad. Worse, I don't get an error - it will just go ahead and hand me a V2 version of the runtime and assume that's what I wanted. Yuck!
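One way to catch that silent mismatch early - my own sketch, not part of wwDotNetBridge - is to have the hosted .NET code report which runtime it actually ended up on, and fail fast if it's the wrong one:

using System;
using System.Runtime.InteropServices;

public static class RuntimeCheck
{
    // Ask the CLR we are actually running on for its version string,
    // e.g. "v2.0.50727" or "v4.0.30319"
    public static string LoadedClrVersion()
    {
        return RuntimeEnvironment.GetSystemVersion();
    }

    // Call this early from the hosted assembly to fail fast when the
    // host handed us the wrong runtime
    public static void AssertRuntime(int expectedMajor)
    {
        if (Environment.Version.Major != expectedMajor)
            throw new InvalidOperationException(string.Format(
                "Expected CLR v{0} but the host loaded {1}",
                expectedMajor, RuntimeEnvironment.GetSystemVersion()));
    }
}

With that guard in place, back to the .config overrides.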
However, if I set the supported runtime to V4 in the .config file:

<?xml version="1.0"?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0.30319" />
  </startup>
</configuration>

then I can load both V4 and V2 of the runtime. Specifying NULL, however, will STILL only give me V2 of the runtime. Again, this seems pretty inconsistent. If you're hosting runtimes, make sure you check which version of the runtime is actually loading first, to ensure you get the one you're looking for. If the wrong version loads - say 2.0 when you want 4.0 - and you then proceed to load 4.0 assemblies, they will all fail to load due to version mismatches. This is how all of this started - I had a bunch of assemblies that weren't loading, and it took a while to figure out that the host was running the wrong version of the CLR, which caused the assembly loads to fail. Arrggh!

<supportedRuntime> and Debugger Version

<supportedRuntime> also affects the use of the .NET debugger when attached to the target application. Whichever runtime is specified in the key is the version of the debugger that fires up. This can have some interesting side effects. If you load a .NET 2.0 assembly but <supportedRuntime> points at V4.0 (or vice versa), the debugger will never fire, because it can only debug in the appropriate runtime version. This has bitten me on several occasions where code runs just fine but the debugger will just breeze by breakpoints without notice. The default version for the debugger is the latest version installed on the system if <supportedRuntime> is not set.

Summary

Besides all the hassles, I'm thankful I can build a .NET 2.0 assembly and have it host .NET 4.0 and call .NET 4.0 code. This way we're able to ship a single assembly that provides functionality supporting both .NET 2 and 4, without having to have separate DLLs for both, which would be a deployment and update nightmare. The MSDN documentation does point at newer hosting APIs specifically for .NET 4.0, which are way more complicated and even less documented, but that doesn't help here because the runtime needs to be able to host both .NET 4.0 and 2.0. Not pleased about that - the new APIs look way more complex, and of course they're not available with older versions of the runtime installed, which in our case makes them useless in this scenario where I have to support .NET 2.0 hosting (to provide greater 'built-in' platform support). Once you know the behavior above, it's manageable. However, it's quite easy to get tripped up here because there are multiple combinations that can really screw up behaviors.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  FoxPro

    Read the article

  • Modern/Metro Internet Explorer: What were they thinking???

    - by Rick Strahl
    As I installed Windows 8.1 last week I decided that I really should take a closer look at Internet Explorer in the Modern/Metro environment again. Right away I ran into two issues that are real head-scratchers to me.

Modern Split Windows don't resize the Viewport but Zoom Out

This one falls in the "WTF, really?" department: it looks like the Modern version of Internet Explorer doesn't resize the browser window as every other browser (including IE 11 on the desktop) does, but rather tries to adjust the zoom to the width of the browser. This means that if you use the Modern IE browser and you split the display between IE and another application, IE will be zoomed out, with text becoming much, much smaller, rather than resizing the browser viewport and adjusting the pixel width as you would when a browser window is typically resized.

Here's what I'm talking about in a couple of pictures. First, here's the full-screen Internet Explorer version (this shot is resized down since it's full screen at 1080p; click to see the full image). This brings up the first issue, which is: on the desktop, who wants to browse a site full screen? Most sites aren't fully optimized for a 1080p widescreen experience, and frankly most content that wide just looks weird. Even in typical 10" resolutions of 1280 width it's weird to look at things this way. At least this issue can be worked around with @media queries and either constraining the view or adding additional content to make use of the extra space. Still, running a desktop browser full screen is not optimal on a desktop machine - ever.

Regardless, this view, while oversized, is what I expect: everything is rendered in the right ratios, with font-size and the responsive design styling properly respected. But now look what happens when you split the desktop windows and show half desktop and half Modern IE (this screen shot is not resized but cropped - this is actual-size content, as you can see in the cropped Twitter window on the right half of the screen).

What's happening here is that IE is zooming out of the content to make it fit into the smaller width, shrinking the content rather than resizing the viewport's pixel width. In effect it looks like the pixel width stays at 1080px and the viewport expands out height-wise in response, resulting in some crazy long portrait view. There goes responsive design - out the window, literally. If you've built your site using @media queries and fixed viewport sizes, Internet Explorer completely screws you in this split view. On my 1080p monitor, the site shown at a little under half width becomes completely unreadable, as the fonts are too small and break up. As you go into split view and resize the window handle, the content of the browser gets smaller and smaller (and effectively longer and longer on the bottom), effectively throwing off any responsive layout to the point of unreadability, even on a big display, let alone a small tablet screen.

What could POSSIBLY be the benefit of this screwed-up behavior? I checked around a bit, trying different pages in this shrunk-down view. Other than the Microsoft home page, every page I went to was nearly unreadable at a quarter width. The only page I found that worked 'normally' was the Microsoft home page, which undoubtedly is optimized just for Internet Explorer specifically.

Bottom Address Bar opaquely overlays Content

Another problematic feature for me is the browser address bar on the bottom.
Modern IE shows the address bar opaquely on the bottom, overlaying the content area of the Web page - until you click on the page. Until you do, though, the address bar overlays the bottom content solidly. And not just a little bit, but by a good sizable chunk. In the application from the screen shot above I have an application toolbar on the bottom, and the IE address bar completely hides that bottom toolbar when the page is first loaded, until the user clicks into the content, at which point the address bar shrinks down to a fat border-style bar with a … on it. Toolbars on the bottom are pretty common these days, especially for mobile-optimized applications, so I'd say this is a common use case. But even if you don't have toolbars on the bottom, maybe there's other fixed content on the bottom of the page that is vital to display. While other browsers often also show address bars and then later hide them, those browsers tend to resize the viewport when the address bar status changes, so the content can respond to the size change. Not so with Modern IE. The address bar overlays content and stays visible until content is clicked. No resize notification or viewport height change is sent to the browser. So basically Internet Explorer is telling me: "Our toolbar is more important than your content!" - AND gives me no chance to react to that behavior. The result on this page/application is that the user sees no actionable operations until he or she clicks into the content area, which is terrible from a UI perspective, as the user has no idea what options are available on initial load. It's doubly confounding in that IE is running in full-screen mode and has the entire height of the screen at its disposal - there's plenty of real estate available to not require this sort of hiding of content in the first place. Heck, even Windows Phone with its more constrained size doesn't hide content - in fact, the address bar on Windows Phone 8 is always visible.

What were they thinking?

Every time I use anything in the Modern/Metro interface in Windows 8/8.1 I get angry. I can pretty much ignore Metro/Modern for my everyday usage, but Internet Explorer in the Modern shell I unfortunately have to live with, because there will be users using it to access my sites. I think it's inexcusable of Microsoft to build such a crappy shell around the browser that it impacts the actual usability of Web content. In both of the cases above I can only scratch my head at what could have possibly motivated anybody designing the UI for the browser to make these screwed-up choices that manipulate the content in a totally unmaintainable way.

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Windows  HTML5

    Read the article

  • How accurate is "Business logic should be in a service, not in a model"?

    - by Jeroen Vannevel
    Situation

Earlier this evening I gave an answer to a question on StackOverflow. The question:

Editing of an existing object should be done in the repository layer or in a service? For example, if I have a User that has debt and I want to change his debt, should I do it in UserRepository or in a service, for example BuyingService, by getting the object, editing it and saving it?

My answer: you should leave the responsibility of mutating an object to that same object and use the repository to retrieve this object. Example situation:

class User {
    private int debt; // debt in cents
    private string name;

    // getters

    public void makePayment(int cents){
        debt -= cents;
    }
}

class UserRepository {
    public User GetUserByName(string name){
        // Get appropriate user from database
    }
}

A comment I received: "Business logic should really be in a service. Not in a model."

What does the internet say?

So this got me searching, since I've never really (consciously) used a service layer. I started reading up on the Service Layer pattern and the Unit of Work pattern, but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model:

"There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it."

To me, this seemed exactly what the situation was about: I advocated the manipulation of an object's data by introducing methods inside that class that do just that. However, I realize this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a service layer is considered more a façade that delegates work to the underlying model than an actual work-intensive layer:

"Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program."

Which is reinforced here:

"Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers."

And here:

"The service layer should be devoid of any application or business logic and should focus primarily on a few concerns."
"It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client."

This is a serious contrast to other resources that talk about the Service Layer:

"The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction."

Or the second answer to a question I've already linked:

"At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer."

"Solution"?

Following the guidelines in this answer, I came up with the following approach that uses a service layer:

class UserController : Controller {
    private UserService _userService;

    public UserController(UserService userService){
        _userService = userService;
    }

    public ActionResult MakeHimPay(string username, int amount) {
        _userService.MakeHimPay(username, amount);
        return RedirectToAction("ShowUserOverview");
    }

    public ActionResult ShowUserOverview() {
        return View();
    }
}

class UserService {
    private IUserRepository _userRepository;

    public UserService(IUserRepository userRepository) {
        _userRepository = userRepository;
    }

    public void MakeHimPay(string username, int amount) {
        _userRepository.GetUserByName(username).makePayment(amount);
    }
}

class UserRepository {
    public User GetUserByName(string name){
        // Get appropriate user from database
    }
}

class User {
    private int debt; // debt in cents
    private string name;

    // getters

    public void makePayment(int cents){
        debt -= cents;
    }
}

Conclusion

Altogether, not much has changed here: code from the controller has moved to the service layer (which is a good thing, so there is an upside to this approach). However, this doesn't look like it has anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded. Is it a means to simply extract logic from the controller and put it inside a service instead? Is it supposed to form a contract between the controller and the domain? Should there be a layer between the domain and the service layer? And, last but not least, following the original comment - "Business logic should really be in a service. Not in a model." - is this correct? How would I introduce my business logic in a service instead of the model?
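For what it's worth, the "units of work with actions that belong in the same transaction" reading quoted above could be illustrated like this - a hedged sketch, not part of the question; TransactionScope and the repository Save() method are my own assumptions:

using System.Transactions;

public class UserService
{
    private readonly IUserRepository _userRepository;

    public UserService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    // One "unit of work": load, mutate via domain logic, persist - all or nothing
    public void MakeHimPay(string username, int amount)
    {
        using (var scope = new TransactionScope())
        {
            User user = _userRepository.GetUserByName(username);
            user.makePayment(amount);   // business rule stays on the model
            _userRepository.Save(user); // hypothetical persistence call
            scope.Complete();           // commit the unit of work
        }
    }
}

Under that reading the service contributes transactional coordination rather than business rules, while makePayment() stays on the domain object.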

    Read the article

  • How to ensure custom serverListener events fire before action events

    - by frank.nimphius
    Using JavaScript in ADF Faces you can queue custom events defined by an af:serverListener tag. If the custom event however is queued from an af:clientListener on a command component, then the command component's action and action listener methods fire before the queued custom event. If you have a use case, for example in combination with client-side integration of 3rd-party technologies like HTML, applets or similar, then you want to change the order of execution. The way to change the execution order is to invoke the command item action from the client event method that handles the custom event propagated by the af:serverListener tag. The following four steps ensure you do this successfully:

1. Call cancel() on the event object passed to the client JavaScript function invoked by the af:clientListener tag.

2. Queue the custom event as an immediate action by setting the last argument in the custom event call to true:

function invokeCustomEvent(evt){
    evt.cancel();

    var custEvent = new AdfCustomEvent(
        evt.getSource(),
        "mycustomevent",
        {message:"Hello World"},
        true);

    custEvent.queue();
}

3. When handling the custom event on the server, look up the command item, for example a button, to queue its action event. This way you simulate a user clicking the button. Use the following code:

ActionEvent event = new ActionEvent(component);
event.setPhaseId(PhaseId.INVOKE_APPLICATION);
event.queue();

The component reference needs to be changed to the handle of the command item whose action method you want to execute.

4. If the command component has behavior tags, like af:fileDownloadActionListener or af:setPropertyListener, defined, then these are also executed when the action event is queued. However, behavior tags like the file download action listener may require a full page refresh to work, in which case the custom event cannot be issued as a partial refresh. From the file download action tag documentation (http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_fileDownloadActionListener.html): "Since file downloads must be processed with an ordinary request - not XMLHttp AJAX requests - this tag forces partialSubmit to be false on the parent component, if it supports that attribute."
To issue a custom event as a non-partial submit, the previously shown sample code would need to be changed as shown below:

function invokeCustomEvent(evt){
    evt.cancel();

    var custEvent = new AdfCustomEvent(
        evt.getSource(),
        "mycustomevent",
        {message:"Hello World"},
        true);

    custEvent.queue(false);
}

To learn more about custom events and the af:serverListener, please refer to the tag documentation: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_serverListener.html

    Read the article

  • Probation is Over: PASS Board Year 1, Q2

    - by Denise McInerney
    Though it's not always official, every job begins with a probation period. You start out with lots of questions, and every day you find out how much more you have to learn. Usually after a few months you discover that you can actually answer some questions and have at least an idea of what you are supposed to be doing. Now, at the end of my second quarter on the "job" of serving on the PASS Board, I have reached that point. My probation period is over. The last three months were busy for the entire Board, with the budget process, an in-person meeting and moving forward with PASS Global Growth plans. I had also set a specific goal for myself for my 2nd quarter: to see the Board adopt a Code of Conduct for the PASS Summit.

Code of Conduct

When I ran for the Board I included my desire to see PASS establish a code of conduct in my campaign platform. I was motivated to do this for a few reasons. Other technical conferences have had incidents of harassment. Most of these did not have a policy in place prior to having a problem, though several conference organizers have since adopted anti-harassment policies or codes of conduct. I felt it would be in PASS' interest to establish a policy so we would be prepared should there be an incident.

"This is Community"

Adopting a code of conduct would reinforce our community orientation and send a message about the positive character of the Summit. PASS is a leader among technical organizations for its promotion and support of women. Adopting a code of conduct would further demonstrate our leadership in this area. After researching similar policies from other organizations, I published a first draft in April. I solicited feedback from the Board, HQ staff and some PASS members. Incorporating that feedback, I presented version 4 at the May Board meeting, where we had a good discussion. You can read the meeting minutes for details. I incorporated points from the Board discussion as well as feedback from a legal review to produce a final version which has been submitted to the Board. It will be discussed at the Board meeting July 12. You can read the full text at the end of this post.

Virtual Chapters

In the first quarter we started ramping up marketing support for the Virtual Chapters. Since then, each edition of the Connector has highlighted a different VC to help get out the message about the variety of educational opportunities that are offered. These VC profiles will continue in the coming months. I was very pleased to welcome the new DBA Fundamentals VC, which is geared toward new DBAs, people who are considering entering the field and those transitioning from a different IT role. Thanks to the contributions of Erin Stellato, Michelle Nalliah and Karla Landrum, we published a "Virtual Chapter Guidebook". This document includes great advice on how to build and promote a VC. It's also a reference for how things work, from budgets to webinar hosting. I think this document will be extremely valuable to all our VC leaders and am grateful to those who put it together.

Board Meeting/SQL Rally

The Board met in May in Dallas. Among the items discussed were Global Growth, the budget, future events and the upcoming elections. We covered a lot of ground in two days and I will again refer you to the meeting minutes for details. The meeting schedule allowed us to participate in the SQL Rally networking events and one full day of the conference. I enjoyed having the opportunity to meet and talk with many PASS members.
And my hat is off to the SQL Rally organizers, who put on an outstanding event.

Global Growth

PASS has undertaken a major initiative to reach and engage SQL Server professionals around the world. This Global Growth plan is ambitious and will have a significant impact on the strategic direction of the organization. We have been reaching out to the community for feedback, including hosting Twitter chats and live Town Hall meetings. I co-hosted two of these events and appreciated hearing the different perspectives of the people who participated. If you have not done so, I encourage you to read about the Global Growth vision and proposed governance changes and submit your feedback.

FY13 Budget

July 1 is the beginning of PASS' fiscal year, which makes the end of June the deadline for approving a budget. Each director submits a budget for his or her portfolio. For the Virtual Chapter portfolio I focused on how we can allocate resources to grow the VCs. Budgeting is a give-and-take process, and while I didn't get everything I asked for, I'm pleased the FY13 budget includes a significant increase in financial support for the Virtual Chapters. Many people put a lot of work into the budget, but no two people deserve credit more than VP of Finance Douglas McDowell and Accounting Manager Sandy Cherry. Thanks to both of them for getting us across the goal line on time.

SQL Saturday

I attended SQL Saturdays in Orange Co., CA and Phoenix. It's always inspiring to see the enthusiasm in the community for learning and networking. These events are successful due to the hard work of many volunteers. Thanks to the organizers in both cities for all your efforts.

Next Up

This quarter we'll be gearing up plans for the VCs at the Summit and exploring ways the VCs can best support PASS' Global Growth work. I'll also be wrapping up work on the Code of Conduct and attending a Board meeting in September. And I will be at SQL Saturday #144 in Sacramento later this month.

Here is the language of the Code of Conduct I have submitted to the Board for consideration:

PASS Code of Conduct

The PASS Summit provides database professionals from a variety of backgrounds with an opportunity to connect, share and learn. We value the strong sense of community that characterizes this event and we seek to foster an inclusive, professional atmosphere. We are dedicated to providing a harassment-free conference experience for everyone, regardless of gender, race, sexual orientation, disability, physical appearance, religion or any other protected classification. Everyone at the Summit is expected to follow the Code of Conduct. This includes but is not limited to: PASS staff, exhibitors, speakers, attendees and anyone affiliated with the event. Participants are expected to follow the Code of Conduct at all Summit events, including PASS-sponsored social events.

Participant behavior

Harassment includes, but is not limited to, offensive verbal comments related to gender, race, sexual orientation, disability, physical appearance, religion, or any other protected classification. Intimidation, threats, stalking, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact and unwelcome attention will also be considered harassment. Similarly, sexual, racist, derogatory, threatening or other inappropriate language and imagery are not appropriate for any conference venue, including sessions.
Recourse

If a participant engages in any conduct that is prohibited under this Code of Conduct, the conference organizers may take any action they deem appropriate, including warning the offender or expelling the offender from the conference. No refunds will be granted to attendees expelled from the Summit due to violations of the Code of Conduct. If you are being harassed, witness harassment, or have any other concerns, please contact a member of conference staff immediately. Conference staff can be identified by their "Headquarters/Staff" shirts and are trained to handle the situation appropriately. A Code of Conduct Committee (CCC), made up of the Executive Manager and three members of the Board of Directors designated by the President, will be authorized to take action in response to an incident or behavior that violates the Code of Conduct.

    Read the article

  • A star algorithm implementation problems

    - by bryan226
    I'm having some trouble implementing the A* algorithm in a 2D tile-based game. The problem is basically that the algorithm gets stuck when something gets in its direct way (e.g. walls). Note that it only allows horizontal and vertical movement. Here's a picture as it works fine across the map without something in its direct way: (Green tile = destination, Blue = in closed list, Green = in open list.) This is what happens if I try to walk 'around' a wall. I calculate costs with the F = G + H formula:

G = 1 cost per step
H = 10 cost per step // count how many tiles are between current tile & destination tile

The functions:

short c_astar::GuessH(short Startx, short Starty, short Destinationx, short Destinationy)
{
    hgeVector Start, Destination;
    Start.x = Startx;
    Start.y = Starty;
    Destination.x = Destinationx;
    Destination.y = Destinationy;

    short a = 0;
    short b = 0;

    if(Start.x > Destination.x) a = Start.x - Destination.x;
    else                        a = Destination.x - Start.x;

    if(Start.y > Destination.y) b = Start.y - Destination.y;
    else                        b = Destination.y - Start.y;

    return (a+b)*10;
}

short c_astar::GuessG(short Startx, short Starty, short Currentx, short Currenty)
{
    hgeVector Start, Destination;
    Start.x = Startx;
    Start.y = Starty;
    Destination.x = Currentx;
    Destination.y = Currenty;

    short a = 0;
    short b = 0;

    if(Start.x > Destination.x) a = Start.x - Destination.x;
    else                        a = Destination.x - Start.x;

    if(Start.y > Destination.y) b = Start.y - Destination.y;
    else                        b = Destination.y - Start.y;

    return (a+b);
}

At the end of the loop I check which tile is the cheapest to go to according to its F value. Then some quick checks are done for each tile (UP, DOWN, LEFT, RIGHT):

// ...CX are holding the F value of the TILE specified
// Info: C0 = Center (Current)
//       C1 = UP
//       C2 = DOWN
//       C3 = LEFT
//       C4 = RIGHT

// Quick checks
if(((C1 < C2) && (C1 < C3) && (C1 < C4)))
{
    Current.y -= 1;
    bSimilar = false;
    if(DEBUG) hge->System_Log("C1 < ALL");
}
// .. same for C2, C3 & C4

If there are multiple tiles with the same F value, it's actually a switch for DOWNLEFT, UPRIGHT.. etc. Here's one of them:

case UPRIGHT:
{
    // UP
    Temporary = Current;
    Temporary.y -= 1;
    bTileStatus[0] = IsTileWalkable(Temporary.x, Temporary.y);
    if(bTileStatus[0])
    {
        // Proceed normal, we are OK & walkable
        Tilex.Tile = map.at(Temporary.y).at(Temporary.x);

        // Search in lists
        if(SearchInClosedList(Tilex.Tile.ID, C0)) bFoundInClosedList[0] = true;
        if(SearchInOpenList(Tilex.Tile.ID, C0))   bFoundInOpenList[0] = true;

        // RIGHT
        Temporary = Current;
        Temporary.x += 1;
        bTileStatus[1] = IsTileWalkable(Temporary.x, Temporary.y);
        if(bTileStatus[1])
        {
            // Proceed normal, we are OK & walkable
            Tilex.Tile = map.at(Temporary.y).at(Temporary.x);

            // Search in lists
            if(SearchInClosedList(Tilex.Tile.ID, C0)) bFoundInClosedList[1] = true;
            if(SearchInOpenList(Tilex.Tile.ID, C0))   bFoundInOpenList[1] = true;

            //*************************************************
            // Purpose: ClosedList behavior
            //*************************************************
            if(bFoundInClosedList[0] && !bFoundInClosedList[1])
            {
                // UP found in ClosedList. Go RIGHT
                return RIGHT;
            }
            if(!bFoundInClosedList[0] && bFoundInClosedList[1])
            {
                // RIGHT found in ClosedList. Go UP
                return UP;
            }
            if(bFoundInClosedList[0] && bFoundInClosedList[1])
            {
                // Both found in ClosedList.
Random value switch(hge->Random_Int(8,9)) { case 8: return UP; break; case 9: return RIGHT; break; } } //************************************************* // Purpose: OpenList behavior //************************************************* if(bFoundInOpenList[0] && !bFoundInOpenList[1]) { //UP found in OpenList. Go RIGHT return RIGHT; } if(!bFoundInOpenList[0] && bFoundInOpenList[1]) { //RIGHT found in OpenList. Go UP return UP; } if(bFoundInOpenList[0] && bFoundInOpenList[1]) { //Both found in OpenList. Random value switch(hge->Random_Int(8,9)) { case 8: return UP; break; case 9: return RIGHT; break; } } } else if(!bTileStatus[1]) { //RIGHT is not walkable OR out of range //Choose UP return UP; } } else if(!bTileStatus[0]) { //UP is not walkable OR out of range //Fast check RIGHT Temporary = Current; Temporary.x += 1; bTileStatus[1] = IsTileWalkable(Temporary.x,Temporary.y); if(bTileStatus[1]) { return RIGHT; } else return FAILED; //Failed, no valid path found! } } break; A log for the second picture: (Cut down to ten passes, because it’s just repeating itself) ----------------------------------------------------- PASS: 1 | C1: 211 | C2: 191 | C3: 211 | C4: 191 DOWN + RIGHT SIMILAR Going DOWN ----------------------------------------------------- PASS: 2 | C1: 200 | C2: 182 | C3: 202 | C4: 182 DOWN + RIGHT SIMILAR Going DOWN ----------------------------------------------------- PASS: 3 | C1: 191 | C2: 193 | C3: 193 | C4: 173 C4 < ALL Tile(12.000000,6.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 4 | C1: 182 | C2: 184 | C3: 182 | C4: 999 UP + LEFT SIMILAR Going UP Tile(12.000000,5.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 5 | C1: 191 | C2: 173 | C3: 191 | C4: 999 C2 < ALL Tile(12.000000,6.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 6 | C1: 182 | C2: 184 | C3: 182 | C4: 999 UP + LEFT SIMILAR Going UP Tile(12.000000,5.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 7 | C1: 191 | C2: 173 | C3: 191 | C4: 999 C2 < ALL Tile(12.000000,6.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 8 | C1: 182 | C2: 184 | C3: 182 | C4: 999 UP + LEFT SIMILAR Going LEFT ----------------------------------------------------- PASS: 9 | C1: 191 | C2: 193 | C3: 193 | C4: 173 C4 < ALL Tile(12.000000,6.000000) not walkable. MAX_F_VALUE set. ----------------------------------------------------- PASS: 10 | C1: 182 | C2: 184 | C3: 182 | C4: 999 UP + LEFT SIMILAR Going LEFT ----------------------------------------------------- Its always going after the cheapest F value, which seems to be wrong. If someone could point me to the right direction I'd be thankful. Regards, bryan226
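
    The log suggests the core issue: this is a greedy walk rather than A*. Each pass commits to the cheapest of the four immediate neighbours and throws the alternatives away, so once a wall forces F uphill the walker oscillates, exactly as in passes 4-10. A* keeps every discovered tile in an open list and always expands the globally cheapest one, which lets it back out of dead ends. Below is a minimal C# sketch of that structure (illustrative only, not the poster's HGE/C++ code; the bool[,] map and tuple coordinates are assumptions), using the post's weights of G = 1 per step and H = 10 per tile:

        using System;
        using System.Collections.Generic;

        class AStar
        {
            // Returns the path from start to goal as grid coordinates, or null if none exists.
            public static List<(int x, int y)> FindPath(bool[,] walkable, (int x, int y) start, (int x, int y) goal)
            {
                int w = walkable.GetLength(0), h = walkable.GetLength(1);
                var open = new List<(int x, int y)> { start };
                var cameFrom = new Dictionary<(int, int), (int, int)>();
                var g = new Dictionary<(int, int), int> { [start] = 0 };

                // Manhattan distance, matching the post's H = 10 per tile.
                int H((int x, int y) p) => 10 * (Math.Abs(p.x - goal.x) + Math.Abs(p.y - goal.y));

                while (open.Count > 0)
                {
                    // Expand the globally cheapest open node by F = G + H,
                    // not merely the cheapest of the current node's four neighbours.
                    open.Sort((a, b) => (g[a] + H(a)).CompareTo(g[b] + H(b)));
                    var current = open[0];
                    open.RemoveAt(0);

                    if (current.Equals(goal))
                    {
                        // Walk the parent links back to the start.
                        var path = new List<(int x, int y)> { current };
                        while (cameFrom.TryGetValue(current, out var prev)) { current = prev; path.Add(current); }
                        path.Reverse();
                        return path;
                    }

                    // Horizontal and vertical neighbours only, as in the post.
                    foreach (var (dx, dy) in new[] { (0, -1), (0, 1), (-1, 0), (1, 0) })
                    {
                        var next = (x: current.x + dx, y: current.y + dy);
                        if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h || !walkable[next.x, next.y])
                            continue;

                        int tentative = g[current] + 1; // G = 1 per step, as in the post
                        if (!g.TryGetValue(next, out int known) || tentative < known)
                        {
                            g[next] = tentative;
                            cameFrom[next] = current;
                            if (!open.Contains(next)) open.Add(next);
                        }
                    }
                }
                return null; // no path
            }
        }

    The essential difference is that the open list holds candidates from the whole frontier, so when the wall blocks progress the sort simply surfaces an older, cheaper node elsewhere instead of ping-ponging between two neighbours.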

    Read the article

  • What's new in EJB 3.2? - Java EE 7 chugging along!

    - by arungupta
    EJB 3.1 added a whole ton of features for simplicity and ease-of-use such as @Singleton, @Asynchronous, @Schedule, portable JNDI names, EJBContainer.createEJBContainer, EJB 3.1 Lite, and many others. As part of Java EE 7, EJB 3.2 (JSR 345) is making progress, and this blog will provide highlights from the work done so far. This release has been deliberately kept small but includes several minor improvements and tweaks for usability.

    More features in EJB Lite:

        Asynchronous session beans
        Non-persistent EJB Timer service

    This also means these features can be used in the embeddable EJB container, thereby improving the testability of your application.

    Pruning - The following features were made Proposed Optional in Java EE 6 and are now made optional:

        EJB 2.1 and earlier Entity Bean Component Contract for CMP and BMP
        Client View of an EJB 2.1 and earlier Entity Bean
        EJB QL: Query Language for CMP Query Methods
        JAX-RPC-based Web Service Endpoints and Client View

    The optional features are moved to a separate document, and as a result the EJB specification is now split into Core and Optional documents. This allows the specification to be more readable and better organized.

    Updates and improvements:

    Transactional lifecycle callbacks in stateful session beans, only for CMT. In EJB 3.1, the transaction context for lifecycle callback methods (@PostConstruct, @PreDestroy, @PostActivate, @PrePassivate) is defined as follows:

        Stateless: @PostConstruct unspecified, @PreDestroy unspecified, @PrePassivate N/A, @PostActivate N/A
        Stateful:  @PostConstruct unspecified, @PreDestroy unspecified, @PrePassivate unspecified, @PostActivate unspecified
        Singleton: @PostConstruct and @PreDestroy use the bean's transaction management type; @PrePassivate and @PostActivate N/A

    In EJB 3.2, stateful session bean lifecycle callback methods can opt in to being transactional. These methods are then executed in a transaction context as follows:

        Stateless: unchanged (@PostConstruct and @PreDestroy unspecified; @PrePassivate and @PostActivate N/A)
        Stateful:  all four callbacks use the bean's transaction management type
        Singleton: unchanged (@PostConstruct and @PreDestroy use the bean's transaction management type; @PrePassivate and @PostActivate N/A)

    For example, the following stateful session bean requires a new transaction to be started for the @PostConstruct and @PreDestroy lifecycle callback methods:

        @Stateful
        public class HelloBean {
            @PersistenceContext(type=PersistenceContextType.EXTENDED)
            private EntityManager em;

            @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
            @PostConstruct
            public void init() {
                myEntity = em.find(...);
            }

            @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
            @PreDestroy
            public void destroy() {
                em.flush();
            }
        }

    Notice that by default the lifecycle callback methods are not transactional, for backwards compatibility; they need to explicitly opt in to be made transactional.

    Opt-out of passivation for stateful session beans - If your stateful session bean needs to stick around, or it has a non-serializable field, the bean can opt out of passivation as shown:

        @Stateful(passivationCapable=false)
        public class HelloBean {
            private NonSerializableType ref = ...;
            . . .
        }

    Simplified rules to define all local/remote views of the bean. For example, if the bean is defined as:

        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    where Foo and Bar have no annotations of their own, then Foo and Bar are exposed as local views of the bean. The bean may be explicitly marked @Local as:

        @Local
        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    and this is the same behavior as explained above, i.e. Foo and Bar are local views. If the bean is marked @Remote as:

        @Remote
        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    then Foo and Bar are remote views. If an interface is marked @Local or @Remote, then each interface needs to be explicitly marked to be exposed as a view. For example:

        @Remote
        public interface Foo { . . . }

        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    only exposes one remote interface, Foo. Section 4.9.7 of the specification provides more details about this feature.

    TimerService.getAllTimers is a newly added convenience API that returns all timers in the same bean. This is only for displaying the list of timers, as a timer can only be canceled by its owner.

    Removed the restriction on obtaining the current class loader, and allowed use of the java.io package. This is handy if you want to do file access within your beans.

    JMS 2.0 alignment - A standard list of activation-config properties is now defined:

        destinationLookup
        connectionFactoryLookup
        clientId
        subscriptionName
        shareSubscriptions

    Tons of other clarifications throughout the spec. Appendix A provides a comprehensive list of changes since EJB 3.1. ThreadContext in Singleton is guaranteed to be thread-safe. The embeddable container implements AutoCloseable.

    A complete replay of Enterprise JavaBeans Today and Tomorrow from JavaOne 2012 can be seen here (click on CON4654_mp4_4654_001 in Media). The specification is still evolving, so the actual property or method names, and their actual behavior, may differ from the currently proposed ones. Are there any improvements that you'd like to see in EJB 3.2? The EJB 3.2 Expert Group would love to hear your feedback. An Early Draft of the specification is available; the latest version can always be downloaded from here.

        Java EE 7 Specification Status
        EJB Specification Project
        JIRA of EJB Specification
        JSR Expert Group Discussion Archive

    These features will start showing up in GlassFish 4 Promoted Builds soon.

    Read the article

  • Cannot log in to Proxmox GUI

    - by greg
    I'm running Proxmox on Debian 6.0.8, kernel 2.6.32-18-pve. When I try to log into the GUI with the root password, it lets me in for about 2 seconds and then asks for the password again:

        <hostname>:8006/api2/json/cluster/resources 401 (permission denied - invalid ticket)

    This is not the same behavior as when I give an incorrect password. I suspect a hostname problem, but I can't sort it out. My /etc/hosts contains the following line:

        <ip> <shorthostname> <longhostname> pvelocalhost

    The folder /etc/pve/nodes contains a folder for the node name, and the HTTPS certificate matches the hostname. Any ideas? TIA, greg

    Read the article

  • How to suppress an unwanted external Autodiscover lookup?

    - by chris
    In a small network with Exchange 2007, when starting Outlook 2010 (and once in a while afterwards), users get a prompt to confirm that it's safe to get account configuration information from cpanelemaildiscovery.cpanel.net/autodiscover/autodiscover.xml. (I could read in a couple of forums that there is a bug in cPanel, but that's beside the point.) I'm puzzled because I can't find any Autodiscover DNS entries anywhere, neither internally nor externally. The only hint is that we use an external hosting company for our website and for one single email address, which runs on cPanel. So I guess that Outlook makes an external DNS query to test all entries? It creates a lot of confusion for the users, and frankly I'm not too happy that the external hosting company gets contacted by all our users. How can I suppress this behavior? Thanks
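
    A note on the "why" that may help: Outlook's Autodiscover tries a fixed sequence of candidates derived from the user's SMTP domain, including https://<domain>/autodiscover/autodiscover.xml on the web host's root domain, even when no autodiscover DNS record exists, which would explain how the cPanel box gets contacted here. Microsoft documents per-step registry overrides for Outlook 2010 under the 14.0 hive; the snippet below is an illustrative sketch, and the value names should be verified against Microsoft's Autodiscover documentation before rollout, since each one also disables that lookup step for legitimate domains:

        Windows Registry Editor Version 5.00

        ; Illustrative sketch: skip the root-domain HTTPS probe (the one reaching
        ; the cPanel host), plus the HTTP-redirect and DNS SRV fallbacks if those
        ; are not needed in this environment.
        [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Outlook\AutoDiscover]
        "ExcludeHttpsRootDomain"=dword:00000001
        "ExcludeHttpRedirect"=dword:00000001
        "ExcludeSrvRecord"=dword:00000001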

    Read the article

  • FIM 2010 GAL MA - It appears this forest is not exchange enabled.

    - by WooYek
    I am trying to configure a Management Agent (MA) for Global Address List (GAL) sync in FIM 2010. I cannot move past the "Configure GAL" step because of an error message saying "It appears this forest is not exchange enabled". Nothing I change on the "Configure GAL" step changes this behavior. I'm configuring a standalone test lab: a Windows 2008 R2 x64 server, promoted to a DC, with SQL 2008 SP1 installed and DNS also running locally. I have tried installing Exchange 2010 and 2007, but there is no difference. The AD MA works fine. Any ideas what I screwed up?

    Read the article

  • protobuf-net: incorrect wire-type exception deserializing Guid properties

    - by Paul Smith
    I'm having issues deserializing certain Guid properties of ORM-generated entities using protobuf-net. Here's a simplified example of the code (it reproduces most elements of the scenario, but doesn't reproduce the behavior; I can't expose our internal entities, so I'm looking for clues to account for the exception). Say I have a class, Account, with an AccountID read-only Guid and an AccountName read-write string. I serialize and immediately deserialize a clone. Deserializing throws an "Incorrect wire-type deserializing Guid" exception. Here's the example usage...

        Account acct = new Account() { AccountName = "Bob's Checking" };
        Debug.WriteLine(acct.AccountID.ToString());
        using (MemoryStream ms = new MemoryStream())
        {
            ProtoBuf.Serializer.Serialize<Account>(ms, acct);
            Debug.WriteLine(Encoding.UTF8.GetString(ms.GetBuffer()));
            ms.Position = 0;
            Account clone = ProtoBuf.Serializer.Deserialize<Account>(ms);
            Debug.WriteLine(clone.AccountID.ToString());
        }

    And here's an example ORM'd class (simplified, but it demonstrates the relevant semantics I can think of). It uses a shell game to deserialize read-only properties by exposing the backing field ("can't write" essentially becomes "shouldn't write", but we can scan code for instances of assigning to these fields, so the hack works for our purposes). Again, this does not reproduce the exception behavior; I'm looking for clues as to what could:

        [DataContract()]
        [Serializable()]
        public partial class Account
        {
            public Account()
            {
                _accountID = Guid.NewGuid();
            }

            [XmlAttribute("AccountID")]
            [DataMember(Name = "AccountID", Order = 1)]
            public Guid _accountID;

            /// <summary>
            /// A read-only property; XML, JSON and DataContract serializers all seem
            /// to correctly recognize the public backing field when deserializing:
            /// </summary>
            [IgnoreDataMember]
            [XmlIgnore]
            public Guid AccountID
            {
                get { return this._accountID; }
            }

            [IgnoreDataMember]
            protected string _accountName;

            [DataMember(Name = "AccountName", Order = 2)]
            [XmlAttribute]
            public string AccountName
            {
                get { return this._accountName; }
                set { this._accountName = value; }
            }
        }

    XML, JSON and DataContract serializers all seem to serialize/deserialize these object graphs just fine, so the attribute arrangement basically works. I've tried protobuf-net with lists vs. single instances, different prefix styles, etc., but I still always get the 'incorrect wire-type ... Guid' exception when deserializing. So the specific question is: is there any known explanation/workaround for this? I'm at a loss trying to trace what circumstances (in the real code but not the example) could be causing it. We hope not to have to create a protobuf dependency directly in the entity layer; if that's the case, we'll probably create proxy DTO entities with all public properties having protobuf attributes. (This is a subjective issue I have with all declarative serialization models; it's a ubiquitous pattern and I understand why it arose, but IMO, if we can put a man on the moon, then "normal" should be to have objects and serialization contracts decoupled. ;-) ) Thanks!
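
    One commonly suggested workaround, sketched below under the assumption that the mismatch is tied to the Guid member's wire representation (this is not a confirmed diagnosis of the poster's ORM layer), is to route the Guid through a string surrogate member: a length-prefixed string has an unambiguous wire type, so it sidesteps wire-type disagreements on that field. The type and member names here are illustrative:

        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using ProtoBuf;

        [DataContract]
        public class AccountDto
        {
            [IgnoreDataMember]
            private Guid _accountID = Guid.NewGuid();

            // Read-only view, exactly as in the original pattern.
            [IgnoreDataMember]
            public Guid AccountID
            {
                get { return _accountID; }
            }

            // Serialized in place of the raw Guid: protobuf-net honors the
            // DataMember Order as the field number, and the string form has a
            // single well-defined wire type.
            [DataMember(Name = "AccountID", Order = 1)]
            public string AccountIDText
            {
                get { return _accountID.ToString(); }
                set { _accountID = new Guid(value); }
            }

            [DataMember(Name = "AccountName", Order = 2)]
            public string AccountName { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                var acct = new AccountDto { AccountName = "Bob's Checking" };
                using (var ms = new MemoryStream())
                {
                    Serializer.Serialize(ms, acct);
                    ms.Position = 0;
                    var clone = Serializer.Deserialize<AccountDto>(ms);
                    Console.WriteLine(clone.AccountID); // same Guid as acct
                }
            }
        }

    The cost is a slightly larger payload than protobuf-net's native Guid encoding, but the round trip is deterministic and the entity layer keeps no direct protobuf dependency beyond the attributes it already carries.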

    Read the article

  • Weird FTP issue between Unity Express and Windows Server 2008 FTP

    - by user33975
    My VoIP specialist complained about not being able to run backups of the Unity Express onto our FTP server (Microsoft FTP on Server 2008). I did a packet trace and observed some weird behavior that I think is even kind of funny in a way. The Unity FTP client is able to initiate both control and data connections with no problem, and is even able to LIST directories and CWD into them. But as soon as the client sends a SYST command (not sure why it cares), the server replies with "Windows_NT" and, lo and behold, the client immediately sends a QUIT command! I've seen this happen consistently in my packet captures. I tried pointing the Unity FTP client at a FileZilla FTP server, and voila, it worked fine! Has anyone else observed this? I thought it was kind of funny, given that Cisco seems to not like Microsoft that much...

    Read the article

  • Stumped by "The remote server returned an error: (403) Forbidden" with a WCF service over HTTPS

    - by RJ
    I have a WCF service that I have boiled down to next to nothing because of this error. It is driving me up the wall. Here's what I have now:

        A very simple WCF service with one method that returns a string with the value "test".
        A very simple web app that uses the service and puts the value of the string into a label.
        A web server running IIS 6 on Windows 2003 with an SSL certificate.
        Other WCF services on the same server that work.

    I publish the WCF service to its https location, run the web app in debug mode in VS, and it works perfectly. I then publish the web app to its https location on the same server where the WCF service resides, under the same SSL certificate, and I get "The remote server returned an error: (403) Forbidden". I have changed almost every setting in IIS as well as in the WCF and web apps, to no avail. I have compared settings with the WCF services that work and everything is the same. It appears the problem has to do with the web app, but I am out of ideas. Below are the settings in the web.config for the WCF service and the web app. Any ideas?

    WCF service:

        <system.serviceModel>
          <bindings />
          <client />
          <services>
            <service behaviorConfiguration="Ucf.Smtp.Wcf.SmtpServiceBehavior" name="Ucf.Smtp.Wcf.SmtpService">
              <host>
                <baseAddresses>
                  <add baseAddress="https://test.net.ucf.edu/webservices/Smtp/" />
                </baseAddresses>
              </host>
              <endpoint address="" binding="wsHttpBinding" contract="Ucf.Smtp.Wcf.ISmtpService"
                        bindingConfiguration="SSLBinding">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="Ucf.Smtp.Wcf.SmtpServiceBehavior">
                <serviceMetadata httpsGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" httpsHelpPageEnabled="True" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    Web app:

        <system.serviceModel>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_ISmtpService" closeTimeout="00:01:00" openTimeout="00:01:00"
                       receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                       messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true"
                       allowCookies="false">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                              maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="Transport">
                  <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
                  <message clientCredentialType="Windows" negotiateServiceCredential="true"
                           establishSecurityContext="true" />
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
          <client>
            <endpoint address="https://net228.net.ucf.edu/webservices/smtp/SmtpService.svc"
                      binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_ISmtpService"
                      contract="SmtpService.ISmtpService" name="WSHttpBinding_ISmtpService">
              <identity>
                <dns value="localhost" />
              </identity>
            </endpoint>
          </client>
        </system.serviceModel>

    Read the article

  • WCF app deployed on Win7 machine gets connection refused error

    - by Belliez
    I have created a Sync Framework application based on the following sample from Microsoft and deployed it to a new Windows 7 machine for testing. The app runs OK, but when I attempt to communicate I get the following error:

        Could not connect to http://localhost:8000/RelationalSyncContract/SqlSyncService/. TCP error code 10061: No connection could be made because the target machine actively refused it 127.0.0.1:8000.

    I am wondering if there is something I am missing. This is my first experience using WCF, and I followed the Microsoft sample code. I have disabled the firewall and opened port 8000 for both TCP and UDP. Not sure what to look at next. Below is my App.config file, if this helps:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="true" />
            <httpRuntime maxRequestLength="32768" />
          </system.web>
          <!-- When deploying the service library project, the content of the config file
               must be added to the host's app.config file. System.Configuration does not
               support config files for libraries. -->
          <system.serviceModel>
            <services>
              <service behaviorConfiguration="WebSyncContract.SyncServiceBehavior"
                       name="WebSyncContract.SqlWebSyncService">
                <endpoint address="" binding="wsHttpBinding"
                          bindingConfiguration="largeMessageHttpBinding"
                          contract="WebSyncContract.ISqlSyncContract">
                  <identity>
                    <dns value="localhost" />
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
                <host>
                  <baseAddresses>
                    <add baseAddress="http://localhost:8000/RelationalSyncContract/SqlSyncService/" />
                  </baseAddresses>
                </host>
              </service>
            </services>
            <bindings>
              <wsHttpBinding>
                <!-- We are using Server cert only. -->
                <binding name="largeMessageHttpBinding" maxReceivedMessageSize="204857600">
                  <readerQuotas maxArrayLength="1000000" />
                </binding>
              </wsHttpBinding>
            </bindings>
            <behaviors>
              <serviceBehaviors>
                <behavior name="WebSyncContract.SyncServiceBehavior">
                  <!-- To avoid disclosing metadata information, set the value below to false
                       and remove the metadata endpoint above before deployment -->
                  <serviceMetadata httpGetEnabled="True" />
                  <!-- To receive exception details in faults for debugging purposes, set the
                       value below to true. Set to false before deployment to avoid disclosing
                       exception information -->
                  <serviceDebug includeExceptionDetailInFaults="True" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
          <startup>
            <supportedRuntime version="v2.0.50727" />
          </startup>
        </configuration>

    Thank you, your help is much appreciated.
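
    Worth noting for anyone hitting this: TCP error 10061 means nothing was listening on 127.0.0.1:8000 at all (a firewall drop typically times out instead of refusing), so the first thing to verify is that the service host actually opened before the client called. A minimal self-host probe, sketched under the assumption that the WebSyncContract types from the config above are referenced (illustrative, not part of the Sync Framework sample):

        using System;
        using System.ServiceModel;

        class HostProbe
        {
            static void Main()
            {
                // If Open() succeeds and this process stays running, the base
                // address has a live listener; a 10061 from the client then
                // points at the client-side endpoint configuration instead.
                var baseAddress = new Uri("http://localhost:8000/RelationalSyncContract/SqlSyncService/");
                using (var host = new ServiceHost(typeof(WebSyncContract.SqlWebSyncService), baseAddress))
                {
                    host.Open();
                    Console.WriteLine("Listening at {0} - press Enter to stop.", baseAddress);
                    Console.ReadLine();
                }
            }
        }

    If Open() itself throws, or the hosting process simply is not running when the client connects, the client will see exactly this refusal regardless of any firewall or port settings.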

    Read the article

  • WCF security via message headers

    - by exalted
    I'm trying to implement "some sort of" server-client and zero-config security for a WCF service. The best (as well as easiest, to me) solution that I found on the web is the one described at http://www.dotnetjack.com/post/Automate-passing-valuable-information-in-WCF-headers.aspx (client side) and http://www.dotnetjack.com/post/Processing-custom-WCF-header-values-at-server-side.aspx (corresponding server side). Below is my implementation of RequestAuth (described in the first link above):

        using System;
        using System.Diagnostics;
        using System.ServiceModel;
        using System.ServiceModel.Configuration;
        using System.ServiceModel.Dispatcher;
        using System.ServiceModel.Description;
        using System.ServiceModel.Channels;

        namespace AuthLibrary
        {
            /// <summary>
            /// Ref: http://www.dotnetjack.com/post/Automate-passing-valuable-information-in-WCF-headers.aspx
            /// </summary>
            public class RequestAuth : BehaviorExtensionElement, IClientMessageInspector, IEndpointBehavior
            {
                [DebuggerBrowsable(DebuggerBrowsableState.Never)]
                private string headerName = "AuthKey";

                [DebuggerBrowsable(DebuggerBrowsableState.Never)]
                private string headerNamespace = "http://some.url";

                public override Type BehaviorType
                {
                    get { return typeof(RequestAuth); }
                }

                protected override object CreateBehavior()
                {
                    return new RequestAuth();
                }

                #region IClientMessageInspector Members

                // Keeping in mind that I am SENDING something to the server,
                // I only need to implement the BeforeSendRequest method
                public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
                {
                    throw new NotImplementedException();
                }

                public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
                {
                    MessageHeader<string> header = new MessageHeader<string>();
                    header.Actor = "Anyone";
                    header.Content = "TopSecretKey";

                    // Creating an untyped header to add to the WCF context
                    MessageHeader unTypedHeader = header.GetUntypedHeader(headerName, headerNamespace);

                    // Add the header to the current request
                    request.Headers.Add(unTypedHeader);
                    return null;
                }

                #endregion

                #region IEndpointBehavior Members

                public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
                {
                    throw new NotImplementedException();
                }

                public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
                {
                    clientRuntime.MessageInspectors.Add(this);
                }

                public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
                {
                    throw new NotImplementedException();
                }

                public void Validate(ServiceEndpoint endpoint)
                {
                    throw new NotImplementedException();
                }

                #endregion
            }
        }

    So first I put this code in my client WinForms application, but then I had problems signing it, because I had to sign all third-party references as well, even though http://msdn.microsoft.com/en-us/library/h4fa028b(v=VS.80).aspx, in the section "What Should Not Be Strong-Named", states:

        In general, you should avoid strong-naming application EXE assemblies. A strongly named application or component cannot reference a weak-named component, so strong-naming an EXE prevents the EXE from referencing weak-named DLLs that are deployed with the application. For this reason, the Visual Studio project system does not strong-name application EXEs. Instead, it strong-names the Application manifest, which internally points to the weak-named application EXE.

    I expected VS to avoid this problem, but I had no luck there; it complained about all the unsigned references. So I created a separate "WCF Service Library" project inside my solution containing only the code above and signed that one. At this point the entire solution compiled just fine. And here's my problem: when I fired up the WCF Service Configuration Editor, I was able to add a new behavior element extension (say "AuthExtension"), but when I then tried to add that extension to my endpoint behavior it gave me: "Exception has been thrown by the target of an invocation." So I'm stuck here. Any ideas?
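
    One likely culprit in the code above (an educated guess, not a confirmed diagnosis): several of the interface members throw NotImplementedException, and both the WCF runtime and the configuration editor invoke these callbacks via reflection when the behavior is attached, so any throw surfaces as the generic "Exception has been thrown by the target of an invocation." A sketch of drop-in replacements for the four throwing members of RequestAuth, which should all be harmless no-ops for a client-side header inspector:

        // These callbacks are invoked by WCF (and by the configuration editor,
        // via reflection) whenever the behavior is attached; they must not throw.

        public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
        {
            // Called for every reply on the client side; nothing to inspect here.
        }

        public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
        {
            // No binding parameters to contribute.
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
        {
            // Client-only behavior; nothing to do on the dispatch side.
        }

        public void Validate(ServiceEndpoint endpoint)
        {
            // No validation requirements.
        }

    The Visual Studio template generates those members with throw statements, which is easy to leave in place by accident; only ApplyClientBehavior and BeforeSendRequest actually need bodies for this scenario.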

    Read the article

  • How to expose service contract interfaces with multiple inheritance in a WCF service on a single endpoint

    - by Vaibhav Gawali
    I have only simple data types in the method signatures of the service (such as int, string). My service class implements a single ServiceContract interface, say IMathService, and this interface in turn inherits from some other base interface, say IAdderService. I want to expose the MathService using the interface contract IAdderService as a service on a single endpoint. However, some of the clients which know about IMathService should be able to access the extra services provided by IMathService on that single endpoint, i.e. by just typecasting IAdderService to IMathService.

        // Interfaces and classes at server side
        [ServiceContract]
        public interface IAdderService
        {
            [OperationContract]
            int Add(int num1, int num2);
        }

        [ServiceContract]
        public interface IMathService : IAdderService
        {
            [OperationContract]
            int Substract(int num1, int num2);
        }

        public class MathService : IMathService
        {
            #region IMathService Members
            public int Substract(int num1, int num2)
            {
                return num1 - num2;
            }
            #endregion

            #region IAdderService Members
            public int Add(int num1, int num2)
            {
                return num1 + num2;
            }
            #endregion
        }

        // Run WCF service as a singleton instance
        MathService mathService = new MathService();
        ServiceHost host = new ServiceHost(mathService);
        host.Open();

    Server-side configuration:

        <configuration>
          <system.serviceModel>
            <services>
              <service name="IAdderService" behaviorConfiguration="AdderServiceServiceBehavior">
                <endpoint address="net.pipe://localhost/AdderService"
                          binding="netNamedPipeBinding" bindingConfiguration="Binding1"
                          contract="TestApp.IAdderService" />
                <endpoint address="mex" binding="mexNamedPipeBinding" contract="IMetadataExchange" />
                <host>
                  <baseAddresses>
                    <add baseAddress="net.pipe://localhost/AdderService" />
                  </baseAddresses>
                </host>
              </service>
            </services>
            <bindings>
              <netNamedPipeBinding>
                <binding name="Binding1">
                  <security mode="None" />
                </binding>
              </netNamedPipeBinding>
            </bindings>
            <behaviors>
              <serviceBehaviors>
                <behavior name="AdderServiceServiceBehavior">
                  <serviceMetadata />
                  <serviceDebug includeExceptionDetailInFaults="True" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>

    Client-side implementation:

        IAdderService adderService = new ChannelFactory<IAdderService>("AdderService").CreateChannel();
        int result = adderService.Add(10, 11);

        IMathService mathService = adderService as IMathService;
        result = mathService.Substract(100, 9);

    Client-side configuration:

        <configuration>
          <system.serviceModel>
            <client>
              <endpoint name="AdderService" address="net.pipe://localhost/AdderService"
                        binding="netNamedPipeBinding" bindingConfiguration="Binding1"
                        contract="TestApp.IAdderService" />
            </client>
            <bindings>
              <netNamedPipeBinding>
                <binding name="Binding1" maxBufferSize="65536" maxConnections="10">
                  <security mode="None" />
                </binding>
              </netNamedPipeBinding>
            </bindings>
          </system.serviceModel>
        </configuration>

    Using the above code and configuration I am not able to typecast the IAdderService instance to IMathService; it fails and I get a null IMathService instance at the client side. My observation is that if the server exposes IMathService to the client, then the client can safely typecast it to IAdderService, and vice versa should also be possible. However, if the server exposes IAdderService then the typecast fails. Is there any solution to this, or am I doing it the wrong way?
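
    For what it's worth, the observed behavior matches how WCF builds proxies: CreateChannel returns an object that implements only the contract interface the ChannelFactory was constructed with, so an IAdderService proxy can never be cast to IMathService, no matter what the service class implements on the server. The usual approach is to expose the single endpoint with the derived contract (contract="TestApp.IMathService" in the server config above) and let each client pick the interface it knows: since IMathService inherits IAdderService, the Add operation keeps its IAdderService-based SOAP action and remains callable by adder-only clients. A sketch against the types above (binding and address assumed from the posted config, endpoint contract assumed changed to IMathService):

        // Client that knows the full derived contract.
        var mathFactory = new ChannelFactory<IMathService>(
            new NetNamedPipeBinding(NetNamedPipeSecurityMode.None),
            new EndpointAddress("net.pipe://localhost/AdderService"));

        IMathService math = mathFactory.CreateChannel();
        int sum = math.Add(10, 11);          // inherited IAdderService operation
        int diff = math.Substract(100, 9);   // IMathService-specific operation

        // Client that only knows the base contract can still use the same endpoint,
        // because Add's action is declared on IAdderService in both views.
        var adderFactory = new ChannelFactory<IAdderService>(
            new NetNamedPipeBinding(NetNamedPipeSecurityMode.None),
            new EndpointAddress("net.pipe://localhost/AdderService"));

        IAdderService adder = adderFactory.CreateChannel();
        int sumOnly = adder.Add(1, 2);

    In other words, the casting happens at proxy-creation time (by choosing the factory's type parameter), not at call time on an already-created channel.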

    Read the article

  • High Tq values for HAProxy

    - by Will
    I just took over administration of a new environment. A known issue is that the environment suffers from high response times (20+ seconds), so I figured I'd turn on HAProxy logging and see what is going on. I expected to see slow load times in the app servers, but I'm actually seeing high Tq values in HAProxy. The HAProxy instance is on EC2 and is NOT behind ELB.

        Sep 5 14:22:00 haproxy-apps01 haproxy[24695]: 76.14.153.221:3371 [05/Sep/2012:14:21:49.780] http-in default_apps/fe04-c 10936/0/0/55/10991 200 488 - - ---- 111/111/0/1/0 0/0 "GET /event_times/next?callback=jQuery170189312373075111_1346854917562&_=1346854918453 HTTP/1.1"

    As you can see, this one has a Tq of about 10 seconds. Not all the Tq values are high (1+ seconds), but a good percentage of them are (approx. 35%). Normally when I see this behavior I'd expect network issues, but this is an incredibly high percentage of visitors to be having an issue like this, so I'm wondering if anybody has seen this, or has any hints on diagnosing whether the issue could possibly be on this box?
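
    One hypothesis worth ruling out before blaming the box or the network (this is a hedged guess, not a confirmed diagnosis for this environment): in HAProxy's timers, Tq measures the time to receive the complete client request, and on keep-alive connections the client's idle think-time between requests on the same connection can be counted into Tq for the subsequent requests, which makes perfectly healthy traffic look like 10-second requests. An illustrative way to test this, using standard HAProxy directives:

        # Hypothesis check (illustrative): force one request per client connection.
        # If Tq collapses to near zero, the large values were keep-alive idle time
        # being counted into Tq, not genuinely slow clients.
        defaults
            mode http
            option httpclose            # disable keep-alive so Tq measures only request read time
            timeout http-request 10s    # separately bound genuinely slow or stalled clients

    If Tq stays high even with keep-alive disabled, then clients really are slow to send their requests, and packet captures on the frontend would be the next step.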

    Read the article

  • Execute a SQLite command with Entity Framework

    - by Filimindji
    Hi everybody, I use a SQLite database and Entity Framework (with .NET Framework 3.5). I'm trying to execute a simple SQL non-query command to create a new table in this database. My Entity Framework model already contains the object model for this table: I just want to generate the corresponding table using a command. (By the way, there may be a better way to do this. Any ideas, someone? :) My problem is that I'm not able to execute any command, even simple ones. Here is my code:

        EntityConnection entityConnection = new EntityConnection(entitiesConnectionString);
        Entities db = new Entities(entityConnection);

        DbCommand command = db.Connection.CreateCommand();
        command.CommandText = "CREATE TABLE MyTable (Id int NOT NULL, OtherTable_Id nchar(40) REFERENCES OtherTable (Id) On Delete CASCADE On Update NO ACTION, SomeData nvarchar(1024) NOT NULL, Primary Key(Id));";
        command.ExecuteNonQuery();

    I got this error:

        System.Data.EntitySqlException: The query syntax is not valid., near identifier 'TABLE', line 1, column 8.
           at System.Data.Common.EntitySql.CqlParser.yyerror(String s)
           at System.Data.Common.EntitySql.CqlParser.yyparse()
           at System.Data.Common.EntitySql.CqlParser.Parse(String query)
           at System.Data.Common.EntitySql.CqlQuery.Parse(String query, ParserOptions parserOptions)
           at System.Data.Common.EntitySql.CqlQuery.Compile(String query, Perspective perspective, ParserOptions parserOptions, Dictionary`2 parameters, Dictionary`2 variables, Boolean validateTree)
           at System.Data.EntityClient.EntityCommand.MakeCommandTree()
           at System.Data.EntityClient.EntityCommand.CreateCommandDefinition()
           at System.Data.EntityClient.EntityCommand.TryGetEntityCommandDefinitionFromQueryCache(EntityCommandDefinition& entityCommandDefinition)
           at System.Data.EntityClient.EntityCommand.GetCommandDefinition()
           at System.Data.EntityClient.EntityCommand.InnerPrepare()
           at System.Data.EntityClient.EntityCommand.ExecuteReader(CommandBehavior behavior)
           at System.Data.EntityClient.EntityCommand.ExecuteScalar[T_Result](Func`2 resultSelector)

    It seems to be a syntax error, but I can't figure out where the problem is or how to resolve it. The entityConnection is OK, because I can use any entities generated with EF. I tried another simple command, but it threw a different exception:

        DbCommand command = db.Connection.CreateCommand();
        command.CommandText = "SELECT COUNT(Id) From OtherTable;";
        int result = (int)command.ExecuteScalar();

    And I got this error, which is not the same, but may help:

        System.Data.EntitySqlException: 'Groupe' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly., near simple identifier, line 1, column 23.
           at System.Data.Common.EntitySql.CqlErrorHelper.ReportIdentifierError(Expr expr, SemanticResolver sr)
           at System.Data.Common.EntitySql.SemanticAnalyzer.ConvertIdentifier(Expr expr, SemanticResolver sr)
           at System.Data.Common.EntitySql.SemanticAnalyzer.Convert(Expr astExpr, SemanticResolver sr)
           at System.Data.Common.EntitySql.SemanticAnalyzer.ProcessAliasedFromClauseItem(AliasExpr aliasedExpr, SemanticResolver sr)
           at System.Data.Common.EntitySql.SemanticAnalyzer.ProcessFromClauseItem(FromClauseItem fromClauseItem, SemanticResolver sr)
           at System.Data.Common.EntitySql.SemanticAnalyzer.ProcessFromClause(FromClause fromClause, SemanticResolver sr)
           at System.Data.Common.EntitySql.SemanticAnalyzer.ConvertQuery(Expr expr, SemanticResolver sr)
           at System.Data.Common.EntitySql.SemanticAnalyzer.Convert(Expr astExpr, SemanticResolver sr)
           at System.Data.Common.EntitySql.SemanticAnalyzer.ConvertRootExpression(Expr astExpr, SemanticResolver sr)
           at System.Data.Common.EntitySql.SemanticAnalyzer.ConvertGeneralExpression(Expr astExpr, SemanticResolver sr)
           at System.Data.Common.EntitySql.CqlQuery.AnalyzeSemantics(Expr astExpr, Perspective perspective, ParserOptions parserOptions, Dictionary`2 parameters, Dictionary`2 variables)
           at System.Data.Common.EntitySql.CqlQuery.Compile(String query, Perspective perspective, ParserOptions parserOptions, Dictionary`2 parameters, Dictionary`2 variables, Boolean validateTree)
           at System.Data.EntityClient.EntityCommand.MakeCommandTree()
           at System.Data.EntityClient.EntityCommand.CreateCommandDefinition()
           at System.Data.EntityClient.EntityCommand.TryGetEntityCommandDefinitionFromQueryCache(EntityCommandDefinition& entityCommandDefinition)
           at System.Data.EntityClient.EntityCommand.GetCommandDefinition()
           at System.Data.EntityClient.EntityCommand.InnerPrepare()
           at System.Data.EntityClient.EntityCommand.ExecuteReader(CommandBehavior behavior)
           at System.Data.EntityClient.EntityCommand.ExecuteScalar[T_Result](Func`2 resultSelector)
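
    The stack traces explain the failures: both commands were executed by an EntityCommand, whose CqlParser parses Entity SQL rather than native SQLite SQL, because db.Connection here is the EntityConnection. CREATE TABLE is not Entity SQL at all (hence "query syntax is not valid, near identifier 'TABLE'"), and raw table names are not valid Entity SQL identifiers either, which accounts for the second error. Native SQL has to go through the underlying store connection instead. A sketch of that workaround (assuming the same connection string and schema as above):

        using System.Data;
        using System.Data.Common;
        using System.Data.EntityClient;

        static void CreateMyTable(string entitiesConnectionString)
        {
            using (EntityConnection entityConnection = new EntityConnection(entitiesConnectionString))
            {
                // StoreConnection is the raw provider connection (SQLite here);
                // commands created from it take native SQL, not Entity SQL.
                DbConnection storeConnection = entityConnection.StoreConnection;
                if (storeConnection.State != ConnectionState.Open)
                    storeConnection.Open();

                using (DbCommand command = storeConnection.CreateCommand())
                {
                    command.CommandText =
                        "CREATE TABLE MyTable (" +
                        "Id int NOT NULL, " +
                        "OtherTable_Id nchar(40) REFERENCES OtherTable (Id) ON DELETE CASCADE ON UPDATE NO ACTION, " +
                        "SomeData nvarchar(1024) NOT NULL, " +
                        "PRIMARY KEY (Id));";
                    command.ExecuteNonQuery();
                }
            }
        }

    The SELECT COUNT example fails the same way against the EntityConnection (it is parsed as Entity SQL, where OtherTable is not a known entity set expression); run native SELECTs through StoreConnection likewise, or phrase them as Entity SQL against the model's entity sets.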

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >