Search Results

Search found 8288 results on 332 pages for 'proxy models'.


  • Java Cloud Service Integration using Web Service Data Control

    - by Jani Rautiainen
    Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments customers do not have the option to deploy custom applications developed with JDeveloper, to ensure the integrity and supportability of the hosted application service. Instead, custom applications can be deployed to JCS and integrated with the Fusion Applications Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples on how to develop and deploy applications on JCS, and show how to integrate them with the Fusion Applications instance. In this article a custom application integrating with Fusion Applications using a Web Service Data Control will be implemented. Pre-requisites Access to Cloud instance In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card, even though the card will not be charged. To register simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for an example of the registration. Once the request is processed you will be assigned two service instances: Java and Database. Applications deployed to JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud. JDeveloper JDeveloper contains Cloud-specific features related to, for example, connection and deployment. To use these features, download JDeveloper from the JDeveloper download site by clicking the "Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud" link; this version of JDeveloper has the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features, the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper refer to the installation guide. For details on the SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service. Create Application In this example the "JcsWsDemo" application created in the "Java Cloud Service Integration using Web Service Proxy" article is used as the base. Create Web Service Data Control In this example we will use a Web Service Data Control to integrate with the Credit Rule Service in Fusion Applications. The data control will be used to query data from Fusion Applications using a web service call and present the data in a table.
    To generate the data control choose the "Model" project and navigate to "New -> All Technologies -> Business Tier -> Data Controls -> Web Service Data Control" and enter the following: Name: CreditRuleServiceDC URL: https://ic-[POD].oracleoutsourcing.com/icCnSetupCreditRulesPublicService/CreditRuleService?WSDL Service: {http://xmlns.oracle.com/apps/incentiveCompensation/cn/creditSetup/creditRule/creditRuleService/}CreditRuleService On step 2 select the "findRule" operation. Skip step 3 and on step 4 define the credentials to access the service. Do note that in this example these credentials are only used when testing locally; for JCS deployment the credentials need to be manually updated in the EAR file. Click "Finish" and the proxy generation is done. Creating UI In order to use the data control we will need to populate the complex objects FindCriteria and FindControl. For simplicity, in this example we will create logic in a managed bean that populates the objects. Open "JcsWsDemoBean.java" and add the following logic: Map findCriteria; Map findControl; public void setFindCriteria(Map findCriteria) { this.findCriteria = findCriteria; } public Map getFindCriteria() { findCriteria = new HashMap(); findCriteria.put("fetchSize",10); findCriteria.put("fetchStart",0); return findCriteria; } public void setFindControl(Map findControl) { this.findControl = findControl; } public Map getFindControl() { findControl = new HashMap(); return findControl; } Open "JcsWsDemo.jspx", navigate to "Data Controls -> CreditRuleServiceDC -> findRule(Object, Object) -> result" and drag and drop the "result" node into the "af:form" element in the page. On the "Edit Table Columns" dialog remove all columns except "RuleId" and "Name". On the "Edit Action Binding" window displayed, enter a reference to the Java class created above by selecting "#{JcsWsDemoBean.findCriteria}". Also define the value for the "findControl" by selecting "#{JcsWsDemoBean.findControl}". Deploy to JCS For the Web Service Data Control the authentication details need to be updated in the connection details before deploying. Open "connections.xml" by navigating to "Application Resources -> Descriptors -> ADF META-INF -> connections.xml" and change the user name and password entry from: <soap username="transportUserName" password="transportPassword" to match the access details for the target environment. Follow the same steps as documented in the previous article "Java Cloud Service ADF Web Application". Once deployed, the application can be accessed with the URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsWsDemo-ViewController-context-root/faces/JcsWsDemo.jspx When accessed, the first 10 rules in the system are displayed. Summary In this article we learned how to integrate with Fusion Applications using a Web Service Data Control in JCS. In future articles various other integration techniques will be covered.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data. Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema.  If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.  Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business. Then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation. Oracle has deep knowledge of applications dating back to the early 1990s.  It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema. GSS Requirements Gathering GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.  Business Process Automation EBusiness is about maximizing operational efficiency. 
    At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes. Enterprise Business Processes Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place.
    • The same accurate and timely customer data will be provided to all your operational applications from the call center to the point of sale.
    • The same accurate and timely supplier data will be provided to all your operational applications from supply chain planning to procurement.
    • The same accurate and timely product information will be available to all your operational applications from demand chain planning to marketing.
    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise. Global Single Schema Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.
    • Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility. This enables model extensions that reflect business entities unique to your organization's operations.
    • The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries.
    • It uses variable byte Unicode to support over 31 languages.
    • The schema encodes flexible date and flexible address formats for easy localizations.
    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.
    Interconnected Business Entities Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • ASP.NET MVC 3 Hosting :: MVC 2 Strongly Typed HTML Helper and Enhanced Validation Sample

    - by mbridge
    With the official release of ASP.NET MVC 2 RTM, I decided I would put together a quick sample of the enhanced HTML helpers and validation controls. I am going to use my sample event site where I will have a form so a user can search for information about certain events. So when the Search page loads, the Search action is fired, returning my strongly typed model to the view.    1: [HttpGet]    2: public ViewResult Search()    3: {    4:     IList<EventsModel> result = _eventsService.GetEventList();    5:     var viewModel = new EventSearchModel    6:                         {    7:                             EventList = new SelectList(result, "EventCode","EventName","Select Event")    8:                         };    9:     return View(viewModel);  10: } Nothing special here, although I did want to show how to load up a strongly typed drop down list because that hung me up for a little bit. To do that, I am going to pass back a SelectList to the view and my HTML helper should know how to load this. So let's take a look at the markup for the view.    1: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"    2: Inherits="System.Web.Mvc.ViewPage<EventsSample.Models.EventSearchModel>" %>    3:     4: <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">    5:     Search    6: </asp:Content>    7:     8: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">    9:   10:     <h2>Search for Events</h2>  11:   12:     <% using (Html.BeginForm("Search","Events")) {%>  13:         <%= Html.ValidationSummary(true) %>  14:          15:         <fieldset>  16:             <legend>Fields</legend>  17:              18:             <div class="editor-label">  19:                 <%= Html.LabelFor(model => model.EventNumber) %>  20:             </div>  21:             <div class="editor-field">  22:                 <%= Html.TextBoxFor(model => model.EventNumber) %>  23:                 <%= Html.ValidationMessageFor(model => model.EventNumber) %>  24:             </div>  25:              26:             <div class="editor-label">  27:                 <%= Html.LabelFor(model => model.GuestLastName) %>  28:             </div>  29:             <div class="editor-field">  30:                 <%= Html.TextBoxFor(model => model.GuestLastName) %>  31:                 <%= Html.ValidationMessageFor(model => model.GuestLastName) %>  32:             </div>  33:              34:             <div class="editor-label">  35:                 <%= Html.LabelFor(model => model.EventName) %>  36:             </div>  37:             <div class="editor-field">  38:                 <%= Html.DropDownListFor(model => model.EventName, Model.EventList,"Select Event") %>  39:                 <%= Html.ValidationMessageFor(model => model.EventName) %>  40:             </div>  41:              42:             <p>  43:                 <input type="submit" value="Save" />  44:             </p>  45:         </fieldset>  46:   47:     <% } %>  48:   49:     <div>  50:         <%= Html.ActionLink("Back to List", "Index") %>  51:     </div>  52:   53: </asp:Content> A nice feature is the scaffolding that MVC has to generate code. I simply right clicked inside my Search() action in the EventsController, selected "Add View", then selected the strongly typed object that I wanted to pass to the view and also selected that I wanted the content type to be "Edit".
    With that the aspx page was completely generated, although I did have to go back in and change the textbox for the Event Names to a drop down list of the names to select from. One of the new features in MVC 2 is the strongly typed HTML helpers. So now my textboxes, drop down list, and validation helpers are all strongly typed to my model. This feature gives you the benefits of IntelliSense and also makes it easier to debug. "The Gu" has a great post about the feature in case you want more details. The DropDownListFor function to generate the drop down list was a little tricky for me. You first need to use a lambda expression to pass in the property you want the selected value assigned to in your model, and then you need to pass in the list directly from the model. Validations To validate the form, you can use the strongly typed validation HTML helpers which will inspect your model and return errors if the validation fails. The definitions of these rules are set directly on the model itself, so let's take a look.    1: using System.ComponentModel.DataAnnotations;    2: using System.Web.Mvc;    3:     4: namespace EventsSample.Models    5: {    6:     public class EventSearchModel    7:     {    8:         [Required(ErrorMessage = "Please enter the event number.")]    9:         [RegularExpression(@"\w{6}",  10:             ErrorMessage = "The Event Number must be 6 letters and/or numbers.")]  11:         public string EventNumber { get; set; }  12:   13:         [Required(ErrorMessage = "Please enter the guest's last name.")]  14:         [RegularExpression(@"^[A-Za-zÀ-ÖØ-öø-ÿ1-9 '\-\.]{1,22}$",  15:             ErrorMessage = "The guest's last name must be 1 to 20 characters.")]  16:         public string GuestLastName { get; set; }  17:   18:         public string EventName { get; set; }  19:         public SelectList EventList { get; set; }  20:     }  21: } Pretty cool! Okay, the only thing left to do is perform the validation in the POST action.    1: [HttpPost]    2: public ViewResult Search(EventSearchModel eventSearchModel)    3: {    4:     if (ModelState.IsValid) return View("SearchResults");    5:     else    6:     {    7:         IList<EventsModel> result = _eventsService.GetEventList();    8:         eventSearchModel.EventList = new SelectList(result, "EventCode","EventName");    9:   10:         return View(eventSearchModel);  11:     }  12: } If the form entries are valid, here I am simply displaying the SearchResults view, but in a real-world sample I would also go out and get the results first (see the sketch below). You get the idea though. In my case, when the form is not valid, I also had to reload my SelectList with the event names before I loaded the page again. Remember this is MVC, no ViewState here :) So that's it. Now my form is validating the data and when it fails it looks like this.
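    As an aside on the "go out and get the results first" comment above, a minimal sketch of what that POST action might look like is shown below. The GetEventSearchResults method on the service is my own assumption for illustration, not part of the original sample.
    [HttpPost]
    public ViewResult Search(EventSearchModel eventSearchModel)
    {
        if (ModelState.IsValid)
        {
            // Hypothetical service call: run the search and hand the matches to the results view.
            IList<EventsModel> matches = _eventsService.GetEventSearchResults(eventSearchModel);
            return View("SearchResults", matches);
        }

        // Validation failed: reload the drop down list before redisplaying the form.
        IList<EventsModel> result = _eventsService.GetEventList();
        eventSearchModel.EventList = new SelectList(result, "EventCode", "EventName");
        return View(eventSearchModel);
    }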

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me. Background While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I'd been working on a data access layer at work to call a market data web service.  This web service would take a list of symbols to quote and would return back the quote data.  To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data.  Not quite long enough to where customers will typically notice a quote go "stale", but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time. A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information.  The question is whether or not the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage. First Impressions The main thing I've always loved about the ANTS tools is their ease of use.  Pretty much everything is right there in front of you in a way that makes it easy for you to find what you need with little digging required.  I've worked with other, older profilers before (that shall remain nameless other than to hint it was created by a very large chip maker) where it was a mind-boggling experience to figure out how to do simple tasks. Not so with AMP.  The opening dialog is very straightforward.  You can choose from here whether to debug an executable, a web application (either in IIS or from VS's web development server), Windows services, etc. So I chose a .NET Executable and navigated to the build location of my test harness.  Then I began profiling. At this point, while the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections.  At any given point in time, you can take snapshots (to compare states), zoom in, or choose to stop at any time.  Snapshots Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes.  Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer.  I once made the mistake of caching data for 30 minutes and found it didn't get collected very quickly after I released my reference because it had been promoted to Gen 2 – doh! Analysis It looks like (from the second pie chart) that the majority of the allocations were in the string class.
    This is also expected for me because the majority of the memory allocated is in the web service responses, so it doesn't seem the entities I'm adapting to (to prevent being too tightly coupled to the web service proxy classes, which can change easily out from under me) are taking a significant portion of memory. I also appreciate that they have clear summary text in key places such as "No issues with large object heap fragmentation were detected".  For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom for "What to look for on the summary" which loads a web page of help on key points to look for. Clicking over to the session overview, it's easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same.  Looking at my snapshots, I'm pretty happy with the fact that memory allocation and heap size seem to be fairly stable and in control: Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends. Back on the analysis tab, we can go to the [Class List] button to get an idea what classes are making up the majority of our memory usage.  As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second.  I was curious about this, so I selected it and went into the [Instance Categorizer].  This view let me see where these instances of RuntimeMethodInfo were coming from. So I scrolled back through the graph, and discovered that they were being held by the System.ServiceModel.ChannelFactoryRefCache and I was satisfied this was just an artifact of my WCF proxy. I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find.  For example, if I suspected a memory leak, I might try to filter for survivors in growing classes.  This shows, for instances of a class that are growing in memory (more are being created than cleaned up), which ones are survivors (not collected) of garbage collection.  This might allow me to drill down and find places where I'm holding onto references by mistake and not freeing them! Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the "Instance Retention Graph" which creates a graph showing what references are being held in memory and who is holding onto them. Visual Studio Integration Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you'll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio.  Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in.  It doesn't integrate with the Visual Studio windows themselves (like the VS profiler does), but still the plethora of information it provides and the clear and concise manner in which it presents it make it well worth it. Summary If you like the ANTS series of tools, you shouldn't be disappointed with the ANTS Memory Profiler.
    It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I've used other profilers before that came with 3-inch thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all.  It's built for your everyday developer to get in and find their problems quickly, and I like that!
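    As a side note on the 5-second quote cache described in the Background section above, a minimal sketch of that idea might look like the following. The Quote type and the fetch delegate are assumptions for illustration; the actual service interface is not shown in the article.
    using System;
    using System.Collections.Concurrent;

    public class Quote
    {
        public string Symbol { get; set; }
        public decimal Last { get; set; }
    }

    // Hedged sketch of a 5-second quote cache: serve a cached quote while it is
    // younger than the TTL, otherwise call the underlying web service again.
    public class QuoteCache
    {
        private class Entry
        {
            public Quote Quote;
            public DateTime FetchedUtc;
        }

        private static readonly TimeSpan Ttl = TimeSpan.FromSeconds(5);
        private readonly ConcurrentDictionary<string, Entry> _entries = new ConcurrentDictionary<string, Entry>();
        private readonly Func<string, Quote> _fetchQuote; // e.g. wraps the market data web service call

        public QuoteCache(Func<string, Quote> fetchQuote)
        {
            _fetchQuote = fetchQuote;
        }

        public Quote GetQuote(string symbol)
        {
            Entry entry;
            if (_entries.TryGetValue(symbol, out entry) && DateTime.UtcNow - entry.FetchedUtc < Ttl)
            {
                return entry.Quote; // still fresh, no web service call
            }

            var quote = _fetchQuote(symbol);
            _entries[symbol] = new Entry { Quote = quote, FetchedUtc = DateTime.UtcNow };
            return quote;
        }
    }
    Note that this simple version does not collapse concurrent requests for the same symbol into a single web service call, which the production cache described above also needs to do.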

    Read the article

  • Is Financial Inclusion an Obligation or an Opportunity for Banks?

    - by tushar.chitra
    Why should banks care about financial inclusion? First, the statistics, I think this will set the tone for this blog post. There are close to 2.5 billion people who are excluded from the banking stream and out of this, 2.2 billion people are from the continents of Africa, Latin America and Asia (McKinsey on Society: Global Financial Inclusion). However, this is not just a third-world phenomenon. According to Federal Deposit Insurance Corp (FDIC), in the US, post 2008 financial crisis, one family out of five has either opted out of the banking system or has been moved out (American Banker). Moving this huge unbanked population into mainstream banking is both an opportunity and a challenge for banks. An obvious opportunity is the significant untapped customer base that banks can target, so is the positive brand equity a bank can build by fulfilling its social responsibilities. Also, as banks target the cost-conscious unbanked customer, they will be forced to look at ways to offer cost-effective products and services, necessitating technology upgrades and innovations. However, cost is not the only hurdle in increasing the adoption of banking services. The potential users need to be convinced of the benefits of banking and banks will also face stiff competition from unorganized players. Finally, the banks will have to believe in the viability of this business opportunity, and not treat financial inclusion as an obligation. In what ways can banks target the unbanked For financial inclusion to be a success, banks should adopt innovative business models to develop products that address the stated and unstated needs of the unbanked population and also design delivery channels that are cost effective and viable in the long run. Through business correspondents and facilitators In rural and remote areas, one of the major hurdles in increasing banking penetration is connectivity and accessibility to banking services, which makes last mile inclusion a daunting challenge. To address this, banks can avail the services of business correspondents or facilitators. This model allows banks to establish greater connectivity through a trusted and reliable intermediary. In India, for instance, banks can leverage the local Kirana stores (the mom & pop stores) to service rural and remote areas. With a supportive nudge from the central bank, the commercial banks can enlist these shop owners as business correspondents to increase their reach. Since these neighborhood stores are acquainted with the local population, they can help banks manage the KYC norms, besides serving as a conduit for remittance. Banks also have an opportunity over a period of time to cross-sell other financial products such as micro insurance, mutual funds and pension products through these correspondents. To exercise greater operational control over the business correspondents, banks can also adopt a combination of branch and business correspondent models to deliver financial inclusion. Through mobile devices According to a 2012 world bank report on financial inclusion, out of a world population of 7 billion, over 5 billion or 70% have mobile phones and only 2 billion or 30% have a bank account. What this means for banks is that there is scope for them to leverage this phenomenal growth in mobile usage to serve the unbanked population. Banks can use mobile technology to service the basic banking requirements of their customers with no frills accounts, effectively bringing down the cost per transaction. 
As I had discussed in my earlier post on mobile payments, though non-traditional players have taken the lead in P2P mobile payments, banks still hold an edge in terms of infrastructure and reliability. Through crowd-funding According to the Crowdfunding Industry Report by Massolution, the global crowdfunding industry raised $2.7 billion in 2012, and is projected to grow to $5.1 billion in 2013. With credit policies becoming tighter and banks becoming more circumspect in terms of loan disbursals, crowdfunding has emerged as an alternative channel for lending. Typically, these initiatives target the unbanked population by offering small loans that are unviable for larger banks. Though a significant proportion of crowdfunding initiatives globally are run by non-banking institutions, banks are also venturing into this space. The next step towards inclusive finance Banks by themselves cannot make financial inclusion a success. There is a need for a whole ecosystem that is supportive of this mission. The policy makers, that include the regulators and government bodies, must be in sync, the IT solution providers must put on their thinking caps to come out with innovative products and solutions, communication channels such as internet and mobile need to expand their reach, and the media and the public need to play an active part. The other challenge for financial inclusion is from the banks themselves. While it is true that financial inclusion will unleash a hitherto hugely untapped market, the normal banking model may be found wanting because of issues such as flexibility, convenience and reliability. The business will be viable only when there is a focus on increasing the usage of existing infrastructure and that is possible when the banks can offer the entire range of products and services to the large number of users of essential banking services. Apart from these challenges, banks will also have to quickly master and replicate the business model to extend their reach to the remotest regions in their respective geographies. They will need to ensure that the transactions deliver a viable business benefit to the bank. For tapping cross-sell opportunities, banks will have to quickly roll-out customized and segment-specific products. The bank staff should be brought in sync with the business plan by convincing them of the viability of the business model and the need for a business correspondent delivery model. Banks, in collaboration with the government and NGOs, will have to run an extensive financial literacy program to educate the unbanked about the benefits of banking. Finally, with the growing importance of retail banking and with many unconventional players eyeing the opportunity in payments and other lucrative areas of banking, banks need to understand the importance of micro and small branches. These micro and small branches can help banks increase their presence without a huge cost burden, provide bankers an opportunity to cross sell micro products and offer a window of opportunity for the large non-banked population to transact without any interference from intermediaries. These branches can also help diminish the role of the unorganized financial sector, such as local moneylenders and unregistered credit societies. This will also help banks build a brand awareness and loyalty among the users, which by itself has a cascading effect on the business operations, especially among the rural and un-banked centers. 
In conclusion, with the increasingly competitive banking sector facing frequent slowdowns and downturns, the unbanked population presents a huge opportunity for banks to enhance their customer base and fulfill their social responsibility.

    Read the article

  • How accurate is "Business logic should be in a service, not in a model"?

    - by Jeroen Vannevel
    Situation Earlier this evening I gave an answer to a question on StackOverflow. The question: Editing of an existing object should be done in repository layer or in service? For example if I have a User that has debt. I want to change his debt. Should I do it in UserRepository or in service for example BuyingService by getting an object, editing it and saving it ? My answer: You should leave the responsibility of mutating an object to that same object and use the repository to retrieve this object. Example situation: class User { private int debt; // debt in cents private string name; // getters public void makePayment(int cents){ debt -= cents; } } class UserRepository { public User GetUserByName(string name){ // Get appropriate user from database } } A comment I received: Business logic should really be in a service. Not in a model. What does the internet say? So, this got me searching since I've never really (consciously) used a service layer. I started reading up on the Service Layer pattern and the Unit Of Work pattern but so far I can't say I'm convinced a service layer has to be used. Take for example this article by Martin Fowler on the anti-pattern of an Anemic Domain Model: There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data. (...) The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it. To me, this seemed exactly what the situation was about: I advocated the manipulation of an object's data by introducing methods inside that class that do just that. However I realize that this should be a given either way, and it probably has more to do with how these methods are invoked (using a repository). I also had the feeling that in that article (see below), a Service Layer is more considered as a façade that delegates work to the underlying model, than an actual work-intensive layer. Application Layer [his name for Service Layer]: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems. This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program. Which is reinforced here: Service interfaces. Services expose a service interface to which all inbound messages are sent. You can think of a service interface as a façade that exposes the business logic implemented in the application (typically, logic in the business layer) to potential consumers. And here: The service layer should be devoid of any application or business logic and should focus primarily on a few concerns. 
    It should wrap Business Layer calls, translate your Domain in a common language that your clients can understand, and handle the communication medium between server and requesting client. This is a serious contrast to other resources that talk about the Service Layer: The service layer should consist of classes with methods that are units of work with actions that belong in the same transaction. Or the second answer to a question I've already linked: At some point, your application will want some business logic. Also, you might want to validate the input to make sure that there isn't something evil or nonperforming being requested. This logic belongs in your service layer. "Solution"? Following the guidelines in this answer, I came up with the following approach that uses a Service Layer: class UserController : Controller { private UserService _userService; public UserController(UserService userService){ _userService = userService; } public ActionResult MakeHimPay(string username, int amount) { _userService.MakeHimPay(username, amount); return RedirectToAction("ShowUserOverview"); } public ActionResult ShowUserOverview() { return View(); } } class UserService { private IUserRepository _userRepository; public UserService(IUserRepository userRepository) { _userRepository = userRepository; } public void MakeHimPay(string username, int amount) { _userRepository.GetUserByName(username).makePayment(amount); } } class UserRepository { public User GetUserByName(string name){ // Get appropriate user from database } } class User { private int debt; // debt in cents private string name; // getters public void makePayment(int cents){ debt -= cents; } } Conclusion Altogether, not much has changed here: code from the controller has moved to the service layer (which is a good thing, so there is an upside to this approach). However this doesn't look like it had anything to do with my original answer. I realize design patterns are guidelines, not rules set in stone to be implemented whenever possible. Yet I have not found a definitive explanation of the service layer and how it should be regarded. Is it a means to simply extract logic from the controller and put it inside a service instead? Is it supposed to form a contract between the controller and the domain? Should there be a layer between the domain and the service layer? And, last but not least: following the original comment Business logic should really be in a service. Not in a model. Is this correct? How would I introduce my business logic in a service instead of the model?

    Read the article

  • Set Context User Principal for Customized Authentication in SignalR

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/27/set-context-user-principal-for-customized-authentication-in-signalr.aspx Currently I'm working on a single page application project which is built on AngularJS and ASP.NET WebAPI. When I needed to implement some features that need real-time communication and push notifications from the server side, I decided to use SignalR. SignalR is a project currently developed by Microsoft to build web-based, real-time communication applications. You can find it here. With a lot of introductions and guides it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this even though they are based on SignalR 1. But when I tried to implement the authentication for my SignalR I struggled for 2 days and finally came up with a solution by myself. This might not be the best one but it actually solved all my problems.   In many articles it's said that you don't need to worry about authentication in SignalR since it uses the web application's authentication. For example, if your web application uses forms authentication, SignalR will use the user principal your web application's authentication module resolved, and check if the principal exists and is authenticated. But in my solution my ASP.NET WebAPI, which is hosting SignalR as well, uses OAuth Bearer authentication. So when the SignalR connection was established the context user principal was empty, and I needed to authenticate and pass the principal by myself.   Firstly I need to create a class derived from "AuthorizeAttribute" that takes responsibility for authentication when a SignalR connection is established and when any hub method is invoked. 1: public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute 2: { 3: public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request) 4: { 5: } 6:  7: public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod) 8: { 9: } 10: } The method "AuthorizeHubConnection" will be invoked when any SignalR connection is established. Here I'm going to retrieve the Bearer token from the query string, try to decrypt it and recover the login user's claims. 1: public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request) 2: { 3: var dataProtectionProvider = new DpapiDataProtectionProvider(); 4: var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create()); 5: // authenticate by using bearer token in query string 6: var token = request.QueryString.Get(WebApiConfig.AuthenticationType); 7: var ticket = secureDataFormat.Unprotect(token); 8: if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated) 9: { 10: // set the authenticated user principal into environment so that it can be used in the future 11: request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity); 12: return true; 13: } 14: else 15: { 16: return false; 17: } 18: } In the code above I created a "TicketDataFormat" instance, which must be the same as the one I used to generate the Bearer token when the user logged in. Then I retrieve the token from the request query string and unprotect it. If I get a valid ticket with an identity that is authenticated, this means it's a valid token, and I pass the user principal into the request's environment property, which can be used later.
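    For reference, a minimal sketch of how such a Bearer token might be issued at login with a matching TicketDataFormat is shown below. The method shape, the "Bearer" authentication type and the claim names are my assumptions for illustration, not the post's actual code.
    using System.Security.Claims;
    using Microsoft.Owin.Security;
    using Microsoft.Owin.Security.DataHandler;
    using Microsoft.Owin.Security.DataProtection;

    public static class BearerTokenIssuer
    {
        // Hedged sketch: protect an AuthenticationTicket with the same DPAPI-backed
        // TicketDataFormat that QueryStringBearerAuthorizeAttribute uses to unprotect it.
        public static string IssueToken(string userName, string domain)
        {
            var dataProtectionProvider = new DpapiDataProtectionProvider();
            var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());

            // "Bearer" is assumed to match the authentication type used elsewhere in the post.
            var identity = new ClaimsIdentity("Bearer");
            identity.AddClaim(new Claim(ClaimTypes.Name, userName));
            identity.AddClaim(new Claim("Domain", domain)); // read back in the hub via identity.FindFirst("Domain")

            var ticket = new AuthenticationTicket(identity, new AuthenticationProperties());
            return secureDataFormat.Protect(ticket);
        }
    }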
    Since my website was built with AngularJS, the SignalR client is pure JavaScript, and it is not possible to set customized HTTP headers in the SignalR JavaScript client, so I have to pass the Bearer token through the request query string. This is not a restriction of SignalR, but a restriction of WebSocket. For security reasons WebSocket doesn't allow the client to set customized HTTP headers from the browser. Next, I need to implement the authentication logic in the method "AuthorizeHubMethodInvocation", which will be invoked whenever a SignalR hub method is called. 1: public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod) 2: { 3: var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId; 4: // check the authenticated user principal from environment 5: var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment; 6: var principal = environment["server.User"] as ClaimsPrincipal; 7: if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated) 8: { 9: // create a new HubCallerContext instance with the principal generated from token 10: // and replace the current context so that in hubs we can retrieve current user identity 11: hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId); 12: return true; 13: } 14: else 15: { 16: return false; 17: } 18: } Since I passed the user principal into the request environment in the previous method, I can simply check whether it exists and is valid. If so, what I need to do is pass the principal into the context so that the SignalR hub can use it. Since the "User" property is read-only in "hubIncomingInvokerContext", I have to create a new "ServerRequest" instance with the principal assigned and set it on "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my hubs through "Context.User" as below. 1: public class DefaultHub : Hub 2: { 3: public object Initialize(string host, string service, JObject payload) 4: { 5: var connectionId = Context.ConnectionId; 6: ... ... 7: var domain = string.Empty; 8: var identity = Context.User.Identity as ClaimsIdentity; 9: if (identity != null) 10: { 11: var claim = identity.FindFirst("Domain"); 12: if (claim != null) 13: { 14: domain = claim.Value; 15: } 16: } 17: ... ... 18: } 19: } Finally I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline. 1: app.Map("/signalr", map => 2: { 3: // Setup the CORS middleware to run before SignalR. 4: // By default this will allow all origins. You can 5: // configure the set of origins and/or http verbs by 6: // providing a cors options with a different policy. 7: map.UseCors(CorsOptions.AllowAll); 8: var hubConfiguration = new HubConfiguration 9: { 10: // You can enable JSONP by uncommenting line below. 11: // JSONP requests are insecure but some older browsers (and some 12: // versions of IE) require JSONP to work cross domain 13: // EnableJSONP = true 14: EnableJavaScriptProxies = false 15: }; 16: // Require authentication for all hubs 17: var authorizer = new QueryStringBearerAuthorizeAttribute(); 18: var module = new AuthorizeModule(authorizer, authorizer); 19: GlobalHost.HubPipeline.AddModule(module); 20: // Run the SignalR pipeline. We're not using MapSignalR 21: // since this branch already runs under the "/signalr" path. 22: map.RunSignalR(hubConfiguration); 23: }); On the client side I pass the Bearer token through the query string before starting the connection, as below.
1: self.connection = $.hubConnection(signalrEndpoint); 2: self.proxy = self.connection.createHubProxy(hubName); 3: self.proxy.on(notifyEventName, function (event, payload) { 4: options.handler(event, payload); 5: }); 6: // add the authentication token to query string 7: // we cannot use http headers since web socket protocol doesn't support 8: self.connection.qs = { Bearer: AuthService.getToken() }; 9: // connection to hub 10: self.connection.start(); Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Use a Fake Http Channel to Unit Test with HttpClient

    - by Steve Michelotti
    Applications get data from lots of different sources. The most common is to get data from a database or a web service. Typically, we encapsulate calls to a database in a Repository object and we create some sort of IRepository interface as an abstraction to decouple between layers and enable easier unit testing by leveraging faking and mocking. This works great for database interaction. However, when consuming a RESTful web service, this is not always the best approach. The WCF Web APIs that are available on CodePlex (current drop is Preview 3) provide a variety of features to make building HTTP REST services more robust. When you download the latest bits, you'll also find a new HttpClient which has been updated for .NET 4.0 as compared to the one that shipped for 3.5 in the original REST Starter Kit. The HttpClient currently provides the best API for consuming REST services on the .NET platform and the WCF Web APIs provide a number of extension methods which extend HttpClient and make it even easier to use. Let's say you have a client application that is consuming an HTTP service – this could be Silverlight, WPF, or any UI technology but for my example I'll use an MVC application: 1: using System; 2: using System.Net.Http; 3: using System.Web.Mvc; 4: using FakeChannelExample.Models; 5: using Microsoft.Runtime.Serialization; 6:   7: namespace FakeChannelExample.Controllers 8: { 9: public class HomeController : Controller 10: { 11: private readonly HttpClient httpClient; 12:   13: public HomeController(HttpClient httpClient) 14: { 15: this.httpClient = httpClient; 16: } 17:   18: public ActionResult Index() 19: { 20: var response = httpClient.Get("Person(1)"); 21: var person = response.Content.ReadAsDataContract<Person>(); 22:   23: this.ViewBag.Message = person.FirstName + " " + person.LastName; 24: 25: return View(); 26: } 27: } 28: } On line #20 of the code above you can see I'm performing an HTTP GET request to a Person resource exposed by an HTTP service. On line #21, I use the ReadAsDataContract() extension method provided by the WCF Web APIs to deserialize the response to a Person object. In this example, the HttpClient is being passed into the constructor by MVC's dependency resolver – in this case, I'm using StructureMap as an IoC and my StructureMap initialization code looks like this: 1: using StructureMap; 2: using System.Net.Http; 3:   4: namespace FakeChannelExample 5: { 6: public static class IoC 7: { 8: public static IContainer Initialize() 9: { 10: ObjectFactory.Initialize(x => 11: { 12: x.For<HttpClient>().Use(() => new HttpClient("http://localhost:31614/")); 13: }); 14: return ObjectFactory.Container; 15: } 16: } 17: } My controller code currently depends on a concrete instance of the HttpClient. Now I *could* create some sort of interface and wrap the HttpClient in this interface and use that object inside my controller instead – however, there are a few reasons why that is not desirable: For one thing, the API provided by the HttpClient provides nice features for dealing with HTTP services. I don't really *want* these to look like C# RPC method calls – when HTTP services have REST features, I may want to inspect HTTP response headers and hypermedia contained within the message so that I can make intelligent decisions as to what to do next in my workflow (although I don't happen to be doing these things in my example above) – this type of workflow is common in hypermedia REST scenarios.
If I just encapsulate HttpClient behind some IRepository interface and make it look like a C# RPC method call, it will become difficult to take advantage of these types of things. Second, it could get pretty mind-numbing to have to create interfaces all over the place just to wrap the HttpClient. Then you’re probably going to have to hard-code HTTP knowledge into your code to formulate requests rather than just “following the links” that the hypermedia in a message might provide. Third, at first glance it might appear that we need to create an interface to facilitate unit testing, but actually it’s unnecessary. Even though the code above is dependent on a concrete type, it’s actually very easy to fake the data in a unit test. The HttpClient provides a Channel property (of type HttpMessageChannel) which allows you to create a fake message channel which can be leveraged in unit testing. In this case, what I want is to be able to write a unit test that just returns fake data. I also want this to be as re-usable as possible for my unit testing. I want to be able to write a unit test that looks like this: 1: [TestClass] 2: public class HomeControllerTest 3: { 4: [TestMethod] 5: public void Index() 6: { 7: // Arrange 8: var httpClient = new HttpClient("http://foo.com"); 9: httpClient.Channel = new FakeHttpChannel<Person>(new Person { FirstName = "Joe", LastName = "Blow" }); 10:   11: HomeController controller = new HomeController(httpClient); 12:   13: // Act 14: ViewResult result = controller.Index() as ViewResult; 15:   16: // Assert 17: Assert.AreEqual("Joe Blow", result.ViewBag.Message); 18: } 19: } Notice on line #9, I’m setting the Channel property of the HttpClient to be a fake channel. I’m also specifying the fake object that I want to be in the response on my “fake” Http request. I don’t need to rely on any mocking frameworks to do this. All I need is my FakeHttpChannel. The code to do this is not complex: 1: using System; 2: using System.IO; 3: using System.Net.Http; 4: using System.Runtime.Serialization; 5: using System.Threading; 6: using FakeChannelExample.Models; 7:   8: namespace FakeChannelExample.Tests 9: { 10: public class FakeHttpChannel<T> : HttpClientChannel 11: { 12: private T responseObject; 13:   14: public FakeHttpChannel(T responseObject) 15: { 16: this.responseObject = responseObject; 17: } 18:   19: protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken) 20: { 21: return new HttpResponseMessage() 22: { 23: RequestMessage = request, 24: Content = new StreamContent(this.GetContentStream()) 25: }; 26: } 27:   28: private Stream GetContentStream() 29: { 30: var serializer = new DataContractSerializer(typeof(T)); 31: Stream stream = new MemoryStream(); 32: serializer.WriteObject(stream, this.responseObject); 33: stream.Position = 0; 34: return stream; 35: } 36: } 37: } The HttpClientChannel provides a Send() method which you can override to return any HttpResponseMessage that you want. You can see I’m using the DataContractSerializer to serialize the object and write it to a stream. That’s all you need to do. In the example above, the only thing I’ve chosen to do is to provide a way to return different response objects. But there are many more features you could add to your own re-usable FakeHttpChannel. For example, you might want to provide the ability to add HTTP headers to the message. You might want to use a different serializer other than the DataContractSerializer. 
    You might want to provide custom hypermedia in the response as well as just an object, or set HTTP response codes. The list goes on. This is just one example of the really cool features being added to the next version of WCF to enable various HTTP scenarios. The code sample for this post can be downloaded here.
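    As one concrete example of the extensions suggested above, a hedged sketch of a fake channel that also lets a test set the response status code and custom response headers might look like this. The class and property names are my own, building on the FakeHttpChannel shown earlier; they are not part of the WCF Web APIs.
    using System.Collections.Generic;
    using System.Net;
    using System.Net.Http;
    using System.Threading;

    // Sketch: extends the FakeHttpChannel<T> from the post so tests can also control
    // the status code and add custom headers to the fake response.
    public class ConfigurableFakeHttpChannel<T> : FakeHttpChannel<T>
    {
        public HttpStatusCode StatusCode { get; set; }
        public IDictionary<string, string> ResponseHeaders { get; private set; }

        public ConfigurableFakeHttpChannel(T responseObject) : base(responseObject)
        {
            StatusCode = HttpStatusCode.OK;
            ResponseHeaders = new Dictionary<string, string>();
        }

        protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
        {
            // Reuse the base fake response (serialized content) and decorate it.
            var response = base.Send(request, cancellationToken);
            response.StatusCode = StatusCode;
            foreach (var header in ResponseHeaders)
            {
                response.Headers.Add(header.Key, header.Value);
            }
            return response;
        }
    }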

    Read the article

  • IIS 7.0 informational HTTP status codes

    - by Samir R. Bhogayta
    1xx - Informational These HTTP status codes indicate a provisional response. The client computer receives one or more 1xx responses before the client computer receives a regular response. IIS 7.0 uses the following informational HTTP status codes: 100 - Continue. 101 - Switching protocols. 2xx - Success These HTTP status codes indicate that the server successfully accepted the request. IIS 7.0 uses the following success HTTP status codes: 200 - OK. The client request has succeeded. 201 - Created. 202 - Accepted. 203 - Nonauthoritative information. 204 - No content. 205 - Reset content. 206 - Partial content. 3xx - Redirection These HTTP status codes indicate that the client browser must take more action to fulfill the request. For example, the client browser may have to request a different page on the server. Or, the client browser may have to repeat the request by using a proxy server. IIS 7.0 uses the following redirection HTTP status codes: 301 - Moved permanently. 302 - Object moved. 304 - Not modified. 307 - Temporary redirect. 4xx - Client error These HTTP status codes indicate that an error occurred and that the client browser appears to be at fault. For example, the client browser may have requested a page that does not exist. Or, the client browser may not have provided valid authentication information. IIS 7.0 uses the following client error HTTP status codes: 400 - Bad request. The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 400 error: 400.1 - Invalid Destination Header. 400.2 - Invalid Depth Header. 400.3 - Invalid If Header. 400.4 - Invalid Overwrite Header. 400.5 - Invalid Translate Header. 400.6 - Invalid Request Body. 400.7 - Invalid Content Length. 400.8 - Invalid Timeout. 400.9 - Invalid Lock Token. 401 - Access denied. IIS 7.0 defines several HTTP status codes that indicate a more specific cause of a 401 error. The following specific HTTP status codes are displayed in the client browser but are not displayed in the IIS log: 401.1 - Logon failed. 401.2 - Logon failed due to server configuration. 401.3 - Unauthorized due to ACL on resource. 401.4 - Authorization failed by filter. 401.5 - Authorization failed by ISAPI/CGI application. 403 - Forbidden. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 403 error: 403.1 - Execute access forbidden. 403.2 - Read access forbidden. 403.3 - Write access forbidden. 403.4 - SSL required. 403.5 - SSL 128 required. 403.6 - IP address rejected. 403.7 - Client certificate required. 403.8 - Site access denied. 403.9 - Forbidden: Too many clients are trying to connect to the Web server. 403.10 - Forbidden: Web server is configured to deny Execute access. 403.11 - Forbidden: Password has been changed. 403.12 - Mapper denied access. 403.13 - Client certificate revoked. 403.14 - Directory listing denied. 403.15 - Forbidden: Client access licenses have exceeded limits on the Web server. 403.16 - Client certificate is untrusted or invalid. 403.17 - Client certificate has expired or is not yet valid. 403.18 - Cannot execute requested URL in the current application pool. 403.19 - Cannot execute CGI applications for the client in this application pool. 403.20 - Forbidden: Passport logon failed. 403.21 - Forbidden: Source access denied. 403.22 - Forbidden: Infinite depth is denied. 404 - Not found. 
IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 404 error: 404.0 - Not found. 404.1 - Site Not Found. 404.2 - ISAPI or CGI restriction. 404.3 - MIME type restriction. 404.4 - No handler configured. 404.5 - Denied by request filtering configuration. 404.6 - Verb denied. 404.7 - File extension denied. 404.8 - Hidden namespace. 404.9 - File attribute hidden. 404.10 - Request header too long. 404.11 - Request contains double escape sequence. 404.12 - Request contains high-bit characters. 404.13 - Content length too large. 404.14 - Request URL too long. 404.15 - Query string too long. 404.16 - DAV request sent to the static file handler. 404.17 - Dynamic content mapped to the static file handler via a wildcard MIME mapping. 404.18 - Querystring sequence denied. 404.19 - Denied by filtering rule. 405 - Method Not Allowed. 406 - Client browser does not accept the MIME type of the requested page. 408 - Request timed out. 412 - Precondition failed. 5xx - Server error These HTTP status codes indicate that the server cannot complete the request because the server encounters an error. IIS 7.0 uses the following server error HTTP status codes: 500 - Internal server error. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 500 error: 500.0 - Module or ISAPI error occurred. 500.11 - Application is shutting down on the Web server. 500.12 - Application is busy restarting on the Web server. 500.13 - Web server is too busy. 500.15 - Direct requests for Global.asax are not allowed. 500.19 - Configuration data is invalid. 500.21 - Module not recognized. 500.22 - An ASP.NET httpModules configuration does not apply in Managed Pipeline mode. 500.23 - An ASP.NET httpHandlers configuration does not apply in Managed Pipeline mode. 500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode. 500.50 - A rewrite error occurred during RQ_BEGIN_REQUEST notification handling. A configuration or inbound rule execution error occurred. Note Here is where the distributed rules configuration is read for both inbound and outbound rules. 500.51 - A rewrite error occurred during GL_PRE_BEGIN_REQUEST notification handling. A global configuration or global rule execution error occurred. Note Here is where the global rules configuration is read. 500.52 - A rewrite error occurred during RQ_SEND_RESPONSE notification handling. An outbound rule execution occurred. 500.53 - A rewrite error occurred during RQ_RELEASE_REQUEST_STATE notification handling. An outbound rule execution error occurred. The rule is configured to be executed before the output user cache gets updated. 500.100 - Internal ASP error. 501 - Header values specify a configuration that is not implemented. 502 - Web server received an invalid response while acting as a gateway or proxy. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 502 error: 502.1 - CGI application timeout. 502.2 - Bad gateway. 503 - Service unavailable. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 503 error: 503.0 - Application pool unavailable. 503.2 - Concurrent request limit exceeded.
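Purely as a reading aid, the small sketch below shows how a .NET client might bucket a response into the classes listed above. Note that the sub-status values (404.13, 500.19 and so on) are not sent to remote clients by default; they are recorded in the IIS W3C log as the sc-substatus field, so a client normally only sees the top-level code.

using System.Net;

static class HttpStatusClassifier
{
    // Maps a status code to the classes described above (1xx - 5xx).
    public static string Describe(HttpStatusCode code)
    {
        int value = (int)code;
        if (value < 100 || value > 599) return "Unknown";
        if (value < 200) return "1xx - Informational";
        if (value < 300) return "2xx - Success";
        if (value < 400) return "3xx - Redirection";
        if (value < 500) return "4xx - Client error";
        return "5xx - Server error";
    }
}

For example, HttpStatusClassifier.Describe((HttpStatusCode)403) returns "4xx - Client error".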

    Read the article

  • Combining Shared Secret and Username Token – Azure Service Bus

    - by Michael Stephenson
    As discussed in the introduction article this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret but also flow through a username token so that in your listening WCF service you will be able to identify who sent the message. This could either be in the form of an application or a user depending on how you want to use your token. Prerequisites Before going into the walk through I want to explain a few assumptions about the scenario we are implementing but to keep the article shorter I am not going to walk through all of the steps in how to setup some of this. In the solution we have a simple console application which will represent the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF Service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article however if you want to understand more about how to configure WCF and IIS for such a scenario please refer to the following paper which goes into a lot of detail about how to configure this. The link is: http://tinyurl.com/8s5nwrz   The Service Component To begin with let's look at the service component and how it can be configured to listen to the service bus using a shared secret but to also accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.UN.Services. It has a single service which is the Visual Studio template for a WCF service when you add a new WCF Service Application so we have a service called Service1 with its Echo method. Nothing special so far!.... The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration you can see I have created my service and I have created a local endpoint which I simply used to do a little bit of diagnostics and to check it was working, but more importantly there is the Windows Azure endpoint which is using the ws2007HttpRelayBinding (note that this should also work just the same if your using netTcpRelayBinding). The key points to note on the above picture are the service behavior called MyServiceBehaviour and the service bus endpoints behavior called MyEndpointBehaviour. We will go into these in more detail later.   The Relay Binding The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit where the transport security really relates to the interaction between the service and listening to the Azure Service Bus and the message credential is where we will use our username token like we have specified in the message/clientCrentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus and messages will not be accepted from any sender who has not been authenticated by ACS.   
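The binding configuration itself appears only as a screenshot in the original post. As a rough code equivalent (a sketch that assumes the Microsoft.ServiceBus relay assembly; property names may vary slightly between SDK versions), the settings described above amount to something like this:

using System.ServiceModel;
using Microsoft.ServiceBus;

static class RelayBindingFactory
{
    // Sketch of the ws2007HttpRelayBinding described above, built in code instead of config.
    public static WS2007HttpRelayBinding CreateUserNameRelayBinding()
    {
        var binding = new WS2007HttpRelayBinding();

        // Transport security covers the hop to the Service Bus relay...
        binding.Security.Mode = EndToEndSecurityMode.TransportWithMessageCredential;

        // ...while the message credential carries the username token to the listening service.
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

        // Callers must still be authenticated by ACS before they can reach the relay endpoint.
        binding.Security.RelayClientAuthenticationType = RelayClientAuthenticationType.RelayAccessToken;

        return binding;
    }
}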
The Endpoint Behaviour In the below picture you can see the endpoint behavior which is configured to use the shared secret client credential for accessing the service bus and also for diagnostic purposes I have included the service registry element. Hopefully if you are familiar with using Windows Azure Service Bus relay feature the above is very familiar to you and this is a very common setup for this section. There is nothing specific to the username token implementation here. The Service Behaviour Now we come to the bit with most of the username token bits in it. When you configure the service behavior I have included the serviceCredentials element and then setup to use userNameAuthentication and you can see that I have created my own custom username token validator.   This setup means that WCF will hand off to my class for validating the username token details. I have also added the serviceSecurityAudit element to give me a simple auditing of access capability. My UsernamePassword Validator The below picture shows you the details of the username password validator class I have implemented. WCF will hand off to this class when validating the token and give me a nice way to check the token credentials against an on-premise store. You have all of the validation features with a non-service bus WCF implementation available such as validating the username password against active directory or ASP.net membership features or as in my case above something much simpler.   The Client Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a username token over to the service component so it can implement additional security checks on-premise. I have a console application and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. You can see in my WCF client configuration below I have setup my details for the azure service bus url and am using the ws2007HttpRelayBinding. Next is my configuration for the relay binding. You can see below I have configured security to use TransportWithMessageCredential so we will flow the username token with the message and also the RelayAccessToken relayClientAuthenticationType which means the component will validate against ACS before being allowed to access the relay endpoint to send a message.     After the binding we need to configure the endpoint behavior like in the below picture. This is the normal configuration to use a shared secret for accessing a Service Bus endpoint.   Finally below we have the code of the client in the console application which will call the service bus. You can see that we have created our proxy and then made a normal call to a WCF service but this time we have also set the ClientCredentials to use the appropriate username and password which will be flown through the service bus and to our service which will validate them.     Conclusion As you can see from the above walkthrough it is not too difficult to configure a service to use both a shared secret and username token at the same time. This gives you the power and protection offered by the access control service in the cloud but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented. Sample The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.UN.zip
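The validator and client code in the original post are shown as screenshots, so the sketch below only reproduces their usual shape. The class name and credential values are illustrative; the parts taken from the post are that WCF hands the token to a custom UserNamePasswordValidator on the service side and that the client sets ClientCredentials on the generated proxy.

using System;
using System.IdentityModel.Selectors;
using System.IdentityModel.Tokens;

// Service side: WCF calls this for every username token it receives,
// once the validator is registered under serviceCredentials/userNameAuthentication.
public class PocUserNameValidator : UserNamePasswordValidator
{
    public override void Validate(string userName, string password)
    {
        // Replace with a check against Active Directory, ASP.NET membership,
        // or another on-premise store; this hard-coded pair is purely illustrative.
        if (userName != "poc-user" || password != "poc-password")
        {
            throw new SecurityTokenException("Unknown username or incorrect password.");
        }
    }
}

On the client, the username token is flown with the call by setting the credentials on the proxy (the proxy type name below is the Add Service Reference default and may differ in the real sample):

var client = new Service1Client();
client.ClientCredentials.UserName.UserName = "poc-user";
client.ClientCredentials.UserName.Password = "poc-password";
Console.WriteLine(client.Echo("Hello via the relay"));
client.Close();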

    Read the article

  • Hosting and consuming WCF services without configuration files

    - by martinsj
    In this post, I'll demonstrate how to configure both the host and the client in code, without the need to configure services in the <system.serviceModel> section of the config file. In fact, you don't need a <system.serviceModel> section at all. What you will sometimes need (and want) is the Uri of the service in the configuration file. Configuring the Uri of the service is actually only needed for the client or when self-hosting, not when hosting in IIS. So what exactly do we need to configure? The binding type and the binding constraints, the metadata behavior, and the debug behavior. You can of course configure even more if you want to; WCF is, after all, the king of configuration… As an example I'll be hosting and consuming a service that removes most of the default constraints for WCF services, using a BasicHttpBinding. Of course, with regard to security it is probably better to have some constraints on the server, but this is only a demonstration. The ServiceConfig class in the code beneath is a static helper class that will be used in the examples. In this post, I'll be using this helper class for all configuration, for both the server and the client. In WCF, the client and the server each have their own WCF configuration. With this piece of code, they will be sharing the same configuration.

public static class ServiceConfig
{
    public static Binding DefaultBinding
    {
        get
        {
            var binding = new BasicHttpBinding();
            Configure(binding);
            return binding;
        }
    }

    public static void Configure(HttpBindingBase binding)
    {
        if (binding == null)
        {
            throw new ArgumentException("Argument 'binding' cannot be null. Cannot configure binding.");
        }

        binding.SendTimeout = new TimeSpan(0, 0, 30, 0); // 30 minute timeout
        binding.MaxBufferSize = Int32.MaxValue;
        binding.MaxBufferPoolSize = 2147483647;
        binding.MaxReceivedMessageSize = Int32.MaxValue;
        binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
        binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
        binding.ReaderQuotas.MaxDepth = Int32.MaxValue;
        binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
        binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
    }

    public static ServiceMetadataBehavior ServiceMetadataBehavior
    {
        get
        {
            return new ServiceMetadataBehavior
            {
                HttpGetEnabled = true,
                MetadataExporter = { PolicyVersion = PolicyVersion.Policy15 }
            };
        }
    }

    public static ServiceDebugBehavior ServiceDebugBehavior
    {
        get
        {
            var smb = new ServiceDebugBehavior();
            Configure(smb);
            return smb;
        }
    }

    public static void Configure(ServiceDebugBehavior behavior)
    {
        if (behavior == null)
        {
            throw new ArgumentException("Argument 'behavior' cannot be null. Cannot configure debug behavior.");
        }

        behavior.IncludeExceptionDetailInFaults = true;
    }
}

Configuring the server
There are basically two ways to host a WCF service: in IIS and self-hosting. When hosting a WCF service in a production environment using an SOA architecture, you'll most likely be hosting it in IIS. When testing the service in integration tests, it's very handy to be able to self-host services in the unit tests. In fact, you can share the WCF configuration between self-hosted services and services hosted in IIS. And that is exactly what you want to do: test the same configuration used in the test and production environments.
Configuring when Self-hosting
When self-hosting, in order to start the service you'll have to instantiate the ServiceHost class, configure the service and open it.

// Create the service host.
var host = new ServiceHost(typeof(MyService), endpoint);

// Configure the binding
host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint);

// Configure metadata behavior
host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);

// Configure debug behavior
ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);

// Start listening to the service
host.Open();

Configuring when hosting in IIS
When you create a WCF service application with the wizard in Visual Studio, you'll end up with bits and pieces of code in order to get the service running: an .svc file with code-behind, an interface for the service, and a Web.config. In order to get rid of the configuration in the <system.serviceModel> section, which the wizard has generated for us, we must tell the service that we have a factory that will create the service for us. We do this by changing the markup for the svc-file:

<%@ ServiceHost Language="C#" Debug="true" Service="Namespace.MyService" Factory="Namespace.ServiceHostFactory" %>

The markup tells IIS that we have a factory called ServiceHostFactory for this service. The service factory has a method we can override which will be called when someone asks IIS for the service. There are two overloads we can override:

System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)

In this example, we'll be using the last one, so our implementation looks like this:

public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
{
    protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        var host = base.CreateServiceHost(serviceType, baseAddresses);
        host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);
        ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);
        return host;
    }
}

As you can see, we are using the same configuration helper we used when self-hosting. Now, when you have a factory, the <system.serviceModel> section of the configuration can be removed, because the section will be ignored when the service has a custom factory. If you want to configure something else in the config file, you can still do so in other sections.

Configuring the client
Microsoft has helpfully created a ChannelFactory class in order to create a proxy client. When using this approach, you don't have to generate those awful proxy classes for the client. If you share the contracts with the server in their own assembly, as in the layer diagram below, you can share the same piece of code.
The contracts in WCF are the interface to the service and, if any, the data contracts (custom types) the service depends on. Using the ChannelFactory with our configuration helper class is very simple:

var identity = EndpointIdentity.CreateDnsIdentity("localhost");
var endpointAddress = new EndpointAddress(endPoint, identity);
var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress);

IMyService myService = factory.CreateChannel();
myService.Hello();
((IClientChannel)myService).Close();

factory.Close();

Happy configuration!
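One caveat worth adding to the client snippet: if the service call faults, calling Close on the channel can throw as well, so the usual pattern is to Close on success and Abort on failure. A sketch against the same factory and IMyService contract as above (IClientChannel comes from System.ServiceModel):

var myService = factory.CreateChannel();
try
{
    myService.Hello();
    ((IClientChannel)myService).Close();
}
catch (CommunicationException)
{
    // The channel is faulted; Abort releases it without trying another handshake.
    ((IClientChannel)myService).Abort();
    throw;
}
catch (TimeoutException)
{
    ((IClientChannel)myService).Abort();
    throw;
}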

    Read the article

  • Combining Shared Secret and Certificates

    - by Michael Stephenson
    As discussed in the introduction article this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret but also combine this with certificates so that you can identify the sender of the message.   Prerequisites As in the previous article before going into the walk through I want to explain a few assumptions about the scenario we are implementing but to keep the article shorter I am not going to walk through all of the steps in how to setup some of this. In the solution we have a simple console application which will represent the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF Service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article however if you want to understand more about how to configure WCF and IIS for such a scenario please refer to the following paper which goes into a lot of detail about how to configure this. The link is: http://tinyurl.com/8s5nwrz   Setting up the Certificates To keep the post and sample simple I am going to use the local computer store for all certificates but this bit is really just the same as setting up certificates for an example where you are using WCF without using Windows Azure Service Bus. In the sample I have included two batch files which you can use to create the sample certificates or remove them. Basically you will end up with: A certificate called PocServerCert in the personal store for the local computer which will be used by the WCF Service component A certificate called PocClientCert in the personal store for the local computer which will be used by the client application A root certificate in the Root store called PocRootCA with its associated revocation list which is the root from which the client and server certificates were created   For the sample Im just using development certificates like you would normally, and you can see exactly how these are configured and placed in the stores from the batch files in the solution using makecert and certmgr.   The Service Component To begin with let's look at the service component and how it can be configured to listen to the service bus using a shared secret but to also accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.Cert.Services. It has a single service which is the Visual Studio template for a WCF service when you add a new WCF Service Application so we have a service called Service1 with its Echo method. Nothing special so far!.... The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration you can see I have created my service and I have created a local endpoint which I simply used to do a little bit of diagnostics and to check it was working, but more importantly there is the Windows Azure endpoint which is using the ws2007HttpRelayBinding (note that this should also work just the same if your using netTcpRelayBinding). The key points to note on the above picture are the service behavior called MyServiceBehaviour and the service bus endpoints behavior called MyEndpointBehaviour. 
We will go into these in more detail later.   The Relay Binding The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit where the transport security really relates to the interaction between the service and listening to the Azure Service Bus and the message credential is where we will use our certificate like we have specified in the message/clientCrentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus and messages will not be accepted from any sender who has not been authenticated by ACS.   The Endpoint Behaviour In the below picture you can see the endpoint behavior which is configured to use the shared secret client credential for accessing the service bus and also for diagnostic purposes I have included the service registry element.     Hopefully if you are familiar with using Windows Azure Service Bus relay feature the above is very familiar to you and this is a very common setup for this section. There is nothing specific to the username token implementation here. The Service Behaviour Now we come to the bit with most of the certificate stuff in it. When you configure the service behavior I have included the serviceCredentials element and then setup to use the clientCertificate check and also specifying the serviceCertificate with information on how to find the servers certificate in the store.     I have also added a serviceAuthorization section where I will implement my own authorization component to perform additional security checks after the service has validated that the message was signed with a good certificate. I also have the same serviceSecurityAudit configuration to log access to my service. My Authorization Manager The below picture shows you implementation of my authorization manager. WCF will eventually hand off the message to my authorization component before it calls the service code. This is where I can perform some logic to check if the identity is allowed to access resources. In this case I am simple rejecting messages from anyone except the PocClientCertificate.     The Client Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a certificate over to the service component so it can implement additional security checks on-premise. I have a console application and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. You can see in my WCF client configuration below I have setup my details for the azure service bus url and am using the ws2007HttpRelayBinding.   Next is my configuration for the relay binding. You can see below I have configured security to use TransportWithMessageCredential so we will flow the token from a certificate with the message and also the RelayAccessToken relayClientAuthenticationType which means the component will validate against ACS before being allowed to access the relay endpoint to send a message.     After the binding we need to configure the endpoint behavior like in the below picture. This contains the normal transportClientEndpointBehaviour to setup the ACS shared secret configuration but we have also configured the clientCertificate to look for the PocClientCert.     
Finally, below we have the code of the client in the console application which will call the service bus. You can see that we have created our proxy and then made a call to the WCF service in exactly the normal way, but the configuration will jump in and ensure that a token representing the client certificate is passed along with the message.     Conclusion As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a certificate-based token at the same time. This gives you the power and protection offered by the access control service in the cloud but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented. Sample The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.Cert.zip
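Since the client and service configuration in this post exist only as screenshots, the sketch below shows roughly what the certificate wiring looks like in code. The certificate names (PocClientCert) come from the sample; the proxy type name and the authorization check are illustrative.

using System.Security.Cryptography.X509Certificates;

// Client side: attach PocClientCert from the local machine store so it is flown with the message.
var client = new Service1Client();
client.ClientCredentials.ClientCertificate.SetCertificate(
    StoreLocation.LocalMachine, StoreName.My,
    X509FindType.FindBySubjectName, "PocClientCert");

On the service side, the custom authorization manager described above would have roughly this shape:

using System.IdentityModel.Claims;
using System.ServiceModel;

// Rejects anything not signed with the expected client certificate (illustrative check).
public class PocCertificateAuthorizationManager : ServiceAuthorizationManager
{
    protected override bool CheckAccessCore(OperationContext operationContext)
    {
        foreach (var claimSet in operationContext.ServiceSecurityContext.AuthorizationContext.ClaimSets)
        {
            var certificateClaims = claimSet as X509CertificateClaimSet;
            if (certificateClaims != null &&
                certificateClaims.X509Certificate.Subject.Contains("PocClientCert"))
            {
                return true;
            }
        }
        return false;
    }
}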

    Read the article

  • ASP.NET MVC - dropdown list post handling problem

    - by ile
    I've had troubles for a few days already with handling form that contains dropdown list. I tried all that I've learned so far but nothing helps. This is my code: using System; using System.Collections.Generic; using System.Linq; using System.Web; using CMS; using CMS.Model; using System.ComponentModel.DataAnnotations; namespace Portal.Models { public class ArticleDisplay { public ArticleDisplay() { } public int CategoryID { set; get; } public string CategoryTitle { set; get; } public int ArticleID { set; get; } public string ArticleTitle { set; get; } public DateTime ArticleDate; public string ArticleContent { set; get; } } public class HomePageViewModel { public HomePageViewModel(IEnumerable<ArticleDisplay> summaries, Article article) { this.ArticleSummaries = summaries; this.NewArticle = article; } public IEnumerable<ArticleDisplay> ArticleSummaries { get; private set; } public Article NewArticle { get; private set; } } public class ArticleRepository { private DB db = new DB(); // // Query Methods public IQueryable<ArticleDisplay> FindAllArticles() { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID orderby article.Date descending select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } public IQueryable<ArticleDisplay> FindTodayArticles() { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID where article.Date == DateTime.Today select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } public Article GetArticle(int id) { return db.Articles.SingleOrDefault(d => d.ArticleID == id); } public IQueryable<ArticleDisplay> DetailsArticle(int id) { var result = from category in db.ArticleCategories join article in db.Articles on category.CategoryID equals article.CategoryID where id == article.ArticleID select new ArticleDisplay { CategoryID = category.CategoryID, CategoryTitle = category.Title, ArticleID = article.ArticleID, ArticleTitle = article.Title, ArticleDate = article.Date, ArticleContent = article.Content }; return result; } // // Insert/Delete Methods public void Add(Article article) { db.Articles.InsertOnSubmit(article); } public void Delete(Article article) { db.Articles.DeleteOnSubmit(article); } // // Persistence public void Save() { db.SubmitChanges(); } } } using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using Portal.Models; using CMS.Model; namespace Portal.Areas.CMS.Controllers { public class ArticleController : Controller { ArticleRepository articleRepository = new ArticleRepository(); ArticleCategoryRepository articleCategoryRepository = new ArticleCategoryRepository(); // // GET: /Article/ public ActionResult Index() { ViewData["categories"] = new SelectList ( articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" ); Article article = new Article() { Date = DateTime.Now, CategoryID = 1 }; HomePageViewModel homeData = new HomePageViewModel(articleRepository.FindAllArticles().ToList(), article); return View(homeData); } // // GET: /Article/Details/5 public ActionResult Details(int id) { var 
article = articleRepository.DetailsArticle(id).Single(); if (article == null) return View("NotFound"); return View(article); } // // GET: /Article/Create //public ActionResult Create() //{ // ViewData["categories"] = new SelectList // ( // articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" // ); // Article article = new Article() // { // Date = DateTime.Now, // CategoryID = 1 // }; // return View(article); //} // // POST: /Article/Create [ValidateInput(false)] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create(Article article) { if (ModelState.IsValid) { try { // TODO: Add insert logic here articleRepository.Add(article); articleRepository.Save(); return RedirectToAction("Index"); } catch { return View(article); } } else { return View(article); } } // // GET: /Article/Edit/5 public ActionResult Edit(int id) { ViewData["categories"] = new SelectList ( articleCategoryRepository.FindAllCategories().ToList(), "CategoryId", "Title" ); var article = articleRepository.GetArticle(id); return View(article); } // // POST: /Article/Edit/5 [ValidateInput(false)] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(int id, FormCollection collection) { Article article = articleRepository.GetArticle(id); try { // TODO: Add update logic here UpdateModel(article, collection.ToValueProvider()); articleRepository.Save(); return RedirectToAction("Details", new { id = article.ArticleID }); } catch { return View(article); } } // // HTTP GET: /Article/Delete/1 public ActionResult Delete(int id) { Article article = articleRepository.GetArticle(id); if (article == null) return View("NotFound"); else return View(article); } // // HTTP POST: /Article/Delete/1 [AcceptVerbs(HttpVerbs.Post)] public ActionResult Delete(int id, string confirmButton) { Article article = articleRepository.GetArticle(id); if (article == null) return View("NotFound"); articleRepository.Delete(article); articleRepository.Save(); return View("Deleted"); } [ValidateInput(false)] public ActionResult UpdateSettings(int id, string value, string field) { // This highly-specific example is from the original coder's blog system, // but you can substitute your own code here. I assume you can pick out // which text field it is from the id. Article article = articleRepository.GetArticle(id); if (article == null) return Content("Error"); if (field == "Title") { article.Title = value; UpdateModel(article, new[] { "Title" }); articleRepository.Save(); } if (field == "Content") { article.Content = value; UpdateModel(article, new[] { "Content" }); articleRepository.Save(); } if (field == "Date") { article.Date = Convert.ToDateTime(value); UpdateModel(article, new[] { "Date" }); articleRepository.Save(); } return Content(value); } } } and view: <%@ Page Title="" Language="C#" MasterPageFile="~/Areas/CMS/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Portal.Models.HomePageViewModel>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Index </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <div class="naslov_poglavlja_main">Articles Administration</div> <%= Html.ValidationSummary("Create was unsuccessful. 
Please correct the errors and try again.") %> <% using (Html.BeginForm("Create","Article")) {%> <div class="news_forma"> <label for="Title" class="news">Title:</label> <%= Html.TextBox("Title", "", new { @class = "news" })%> <%= Html.ValidationMessage("Title", "*") %> <label for="Content" class="news">Content:</label> <div class="textarea_okvir"> <%= Html.TextArea("Content", "", new { @class = "news" })%> <%= Html.ValidationMessage("Content", "*")%> </div> <label for="CategoryID" class="news">Category:</label> <%= Html.DropDownList("CategoryId", (IEnumerable<SelectListItem>)ViewData["categories"], new { @class = "news" })%> <p> <input type="submit" value="Publish" class="form_submit" /> </p> </div> <% } %> <div class="naslov_poglavlja_main"><%= Html.ActionLink("Write new article...", "Create") %></div> <div id="articles"> <% foreach (var item in Model.ArticleSummaries) { %> <div> <div class="naslov_vijesti" id="<%= item.ArticleID %>"><%= Html.Encode(item.ArticleTitle) %></div> <div class="okvir_vijesti"> <div class="sadrzaj_vijesti" id="<%= item.ArticleID %>"><%= item.ArticleContent %></div> <div class="datum_vijesti" id="<%= item.ArticleID %>"><%= Html.Encode(String.Format("{0:g}", item.ArticleDate)) %></div> <a class="news_delete" href="#" id="<%= item.ArticleID %>">Delete</a> </div> <div class="dno"></div> </div> <% } %> </div> </asp:Content> When trying to post new article I get following error: System.InvalidOperationException: The ViewData item that has the key 'CategoryId' is of type 'System.Int32' but must be of type 'IEnumerable'. I really don't know what to do cause I'm pretty new to .net and mvc Any help appreciated! Ile EDIT: I found where I made mistake. I didn't include date. If in view form I add this line I'm able to add article: <%=Html.Hidden("Date", String.Format("{0:g}", Model.NewArticle.Date)) %> But, if I enter wrong datetype or leave title and content empty then I get the same error. In this eample there is no need for date edit, but I will need it for some other forms and validation will be necessary. EDIT 2: Error happens when posting! Call stack: App_Web_of9beco9.dll!ASP.areas_cms_views_article_create_aspx.__RenderContent2(System.Web.UI.HtmlTextWriter __w = {System.Web.UI.HtmlTextWriter}, System.Web.UI.Control parameterContainer = {System.Web.UI.WebControls.ContentPlaceHolder}) Line 31 + 0x9f bytes C#
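One common cause of this exact error message is the drop-down helper falling back to ViewData: when the POST action returns View(article) without repopulating ViewData["categories"], Html.DropDownList("CategoryId", ...) receives a null list and instead finds the posted CategoryID (an Int32) under that key. A sketch of the usual fix, using the types from the question (not verified against the full project):

[ValidateInput(false)]
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Create(Article article)
{
    if (ModelState.IsValid)
    {
        articleRepository.Add(article);
        articleRepository.Save();
        return RedirectToAction("Index");
    }

    // Rebuild the SelectList on every code path that re-renders the form; otherwise the
    // helper falls back to the posted CategoryID value and throws the error quoted above.
    ViewData["categories"] = new SelectList(
        articleCategoryRepository.FindAllCategories().ToList(),
        "CategoryId", "Title", article.CategoryID);

    return View(article);
}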

    Read the article

  • Crash: iPhone Threading with Blocks

    - by jtbandes
    I have some convenience methods set up for threading with blocks (using PLBlocks). Then in the main portion of my code, I call the method -fetchArrivalsForLocationIDs:callback:errback:, which runs some web API calls in the background. Here's the problem: when I comment out the 2 NSAutoreleasePool-related lines in JTBlockThreading.m, of course I get lots of Object 0x6b31280 of class __NSArrayM autoreleased with no pool in place - just leaking errors. However, if I uncomment them, the app frequently crashes on the [pool release]; line, sometimes saying malloc: *** error for object 0x6e3ae10: pointer being freed was not allocated" *** set a breakpoint in malloc_error_break to debug. I assume I've made a horrible mistake/assumption in threading somewhere, but can anyone figure out what exactly the problem is? // JTBlockThreading.h #import <Foundation/Foundation.h> #import <PLBlocks/Block.h> #define JT_BLOCKTHREAD_BACKGROUND [self invokeBlockInBackground:^{ #define JT_BLOCKTHREAD_MAIN [self invokeBlockOnMainThread:^{ #define JT_BLOCKTHREAD_END }]; #define JT_BLOCKTHREAD_BACKGROUND_END_WAIT } waitUntilDone:YES]; @interface NSObject (JTBlockThreading) - (void)invokeBlockInBackground:(void (^)())block; - (void)invokeBlockOnMainThread:(void (^)())block; - (void)invokeBlockOnMainThread:(void (^)())block waitUntilDone:(BOOL)wait; - (void)invokeBlock:(void (^)())block; @end // JTBlockThreading.m #import "JTBlockThreading.h" @implementation NSObject (JTBlockThreading) - (void)invokeBlockInBackground:(void (^)())block { [self performSelectorInBackground:@selector(invokeBlock:) withObject:[block copy]]; } - (void)invokeBlockOnMainThread:(void (^)())block { [self invokeBlockOnMainThread:block waitUntilDone:NO]; } - (void)invokeBlockOnMainThread:(void (^)())block waitUntilDone:(BOOL)wait { [self performSelectorOnMainThread:@selector(invokeBlock:) withObject:[block copy] waitUntilDone:wait]; } - (void)invokeBlock:(void (^)())block { //NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; block(); [block release]; //[pool release]; } @end - (void)fetchArrivalsForLocationIDs:(NSString *)locIDs callback:(JTWSCallback)callback errback:(JTWSErrback)errback { JT_PUSH_NETWORK(); JT_BLOCKTHREAD_BACKGROUND NSError *error = nil; // Create API call URL NSURL *url = [NSURL URLWithString: [NSString stringWithFormat:@"%@/arrivals/appID/%@/locIDs/%@", TRIMET_BASE_URL, appID, locIDs]]; if (!url) { JT_BLOCKTHREAD_MAIN errback(@"That’s not a valid Stop ID!"); JT_POP_NETWORK(); JT_BLOCKTHREAD_END return; } // Call API NSData *data = [NSData dataWithContentsOfURL:url options:0 error:&error]; if (!data) { JT_BLOCKTHREAD_MAIN errback([NSString stringWithFormat: @"Had trouble downloading the arrival data! %@", [error localizedDescription]]); JT_POP_NETWORK(); JT_BLOCKTHREAD_END return; } CXMLDocument *doc = [[CXMLDocument alloc] initWithData:data options:0 error:&error]; if (!doc) { JT_BLOCKTHREAD_MAIN // TODO: further error description // (TouchXML doesn't provide information with the error) errback(@"Had trouble reading the arrival data!"); JT_POP_NETWORK(); JT_BLOCKTHREAD_END return; } NSArray *nodes = nil; CXMLElement *resultSet = [doc rootElement]; // Begin building the response model JTWSResponseArrivalData *response = [[[JTWSResponseArrivalData alloc] init] autorelease]; response.queryTime = [NSDate JT_dateWithTriMetWSTimestamp: [[resultSet attributeValueForName:@"queryTime"] longLongValue]]; if (!response.queryTime) { // TODO: further error check? NSLog(@"Hm, query time is nil in %s... 
response %@, resultSet %@", __PRETTY_FUNCTION__, response, resultSet); } nodes = [resultSet nodesForXPath:@"//arrivals:errorMessage" namespaceMappings:namespaceMappings error:&error]; if ([nodes count] > 0) { NSString *message = [[nodes objectAtIndex:0] stringValue]; response.errorMessage = message; // TODO: this probably won't be used... JT_BLOCKTHREAD_MAIN errback([NSString stringWithFormat: @"TriMet error: “%@”", message]); JT_POP_NETWORK(); JT_BLOCKTHREAD_END return; } // Build location models nodes = [resultSet nodesForXPath:@"/arrivals:location" namespaceMappings:namespaceMappings error:&error]; if ([nodes count] <= 0) { NSLog(@"Hm, no locations returned in %s... xpath error %@, response %@, resultSet %@", __PRETTY_FUNCTION__, error, response, resultSet); } NSMutableArray *locations = [NSMutableArray arrayWithCapacity:[nodes count]]; for (CXMLElement *loc in nodes) { JTWSLocation *location = [[[JTWSLocation alloc] init] autorelease]; location.desc = [loc attributeValueForName:@"desc"]; location.dir = [loc attributeValueForName:@"dir"]; location.position = [[[CLLocation alloc] initWithLatitude:[[loc attributeValueForName:@"lat"] doubleValue] longitude:[[loc attributeValueForName:@"lng"] doubleValue]] autorelease]; location.locID = [[loc attributeValueForName:@"locid"] integerValue]; } // Build arrival models nodes = [resultSet nodesForXPath:@"/arrivals:arrival" namespaceMappings:namespaceMappings error:&error]; if ([nodes count] <= 0) { NSLog(@"Hm, no arrivals returned in %s... xpath error %@, response %@, resultSet %@", __PRETTY_FUNCTION__, error, response, resultSet); } NSMutableArray *arrivals = [[NSMutableArray alloc] initWithCapacity:[nodes count]]; for (CXMLElement *arv in nodes) { JTWSArrival *arrival = [[JTWSArrival alloc] init]; arrival.block = [[arv attributeValueForName:@"block"] integerValue]; arrival.piece = [[arv attributeValueForName:@"piece"] integerValue]; arrival.locID = [[arv attributeValueForName:@"locid"] integerValue]; arrival.departed = [[arv attributeValueForName:@"departed"] boolValue]; // TODO: verify arrival.detour = [[arv attributeValueForName:@"detour"] boolValue]; // TODO: verify arrival.direction = (JTWSRouteDirection)[[arv attributeValueForName:@"dir"] integerValue]; arrival.estimated = [NSDate JT_dateWithTriMetWSTimestamp: [[arv attributeValueForName:@"estimated"] longLongValue]]; arrival.scheduled = [NSDate JT_dateWithTriMetWSTimestamp: [[arv attributeValueForName:@"scheduled"] longLongValue]]; arrival.fullSign = [arv attributeValueForName:@"fullSign"]; arrival.shortSign = [arv attributeValueForName:@"shortSign"]; NSString *status = [arv attributeValueForName:@"status"]; if ([status isEqualToString:@"estimated"]) { arrival.status = JTWSArrivalStatusEstimated; } else if ([status isEqualToString:@"scheduled"]) { arrival.status = JTWSArrivalStatusScheduled; } else if ([status isEqualToString:@"delayed"]) { arrival.status = JTWSArrivalStatusDelayed; } else if ([status isEqualToString:@"canceled"]) { arrival.status = JTWSArrivalStatusCanceled; } else { NSLog(@"Unknown arrival status %s in %@... response %@, arrival %@", status, __PRETTY_FUNCTION__, response, arv); } NSArray *blockPositions = [arv nodesForXPath:@"/arrivals:blockPosition" namespaceMappings:namespaceMappings error:&error]; if ([blockPositions count] > 1) { // The schema allows for any number of blockPosition elements, // but I'm really not sure why... NSLog(@"Hm, more than one blockPosition in %s... 
response %@, arrival %@", __PRETTY_FUNCTION__, response, arv); } if ([blockPositions count] > 0) { CXMLElement *bpos = [blockPositions objectAtIndex:0]; JTWSBlockPosition *blockPosition = [[JTWSBlockPosition alloc] init]; blockPosition.reported = [NSDate JT_dateWithTriMetWSTimestamp: [[bpos attributeValueForName:@"at"] longLongValue]]; blockPosition.feet = [[bpos attributeValueForName:@"feet"] integerValue]; blockPosition.position = [[[CLLocation alloc] initWithLatitude:[[bpos attributeValueForName:@"lat"] doubleValue] longitude:[[bpos attributeValueForName:@"lng"] doubleValue]] autorelease]; NSString *headingStr = [bpos attributeValueForName:@"heading"]; if (headingStr) { // Valid CLLocationDirections are > 0 CLLocationDirection heading = [headingStr integerValue]; while (heading < 0) heading += 360.0; blockPosition.heading = heading; } else { blockPosition.heading = -1; // indicates invalid heading } NSArray *tripData = [bpos nodesForXPath:@"/arrivals:trip" namespaceMappings:namespaceMappings error:&error]; NSMutableArray *trips = [[NSMutableArray alloc] initWithCapacity:[tripData count]]; for (CXMLElement *tripDatum in tripData) { JTWSTrip *trip = [[JTWSTrip alloc] init]; trip.desc = [tripDatum attributeValueForName:@"desc"]; trip.destDist = [[tripDatum attributeValueForName:@"destDist"] integerValue]; trip.direction = (JTWSRouteDirection)[[tripDatum attributeValueForName:@"dir"] integerValue]; trip.pattern = [[tripDatum attributeValueForName:@"pattern"] integerValue]; trip.progress = [[tripDatum attributeValueForName:@"progress"] integerValue]; trip.route = [[tripDatum attributeValueForName:@"route"] integerValue]; [trips addObject:trip]; [trip release]; } blockPosition.trips = trips; [trips release]; NSArray *layoverData = [bpos nodesForXPath:@"/arrivals:layover" namespaceMappings:namespaceMappings error:&error]; NSMutableArray *layovers = [[NSMutableArray alloc] initWithCapacity:[layoverData count]]; for (CXMLElement *layoverDatum in layoverData) { JTWSLayover *layover = [[JTWSLayover alloc] init]; layover.start = [NSDate JT_dateWithTriMetWSTimestamp: [[layoverDatum attributeValueForName:@"start"] longLongValue]]; layover.end = [NSDate JT_dateWithTriMetWSTimestamp: [[layoverDatum attributeValueForName:@"end"] longLongValue]]; // TODO: it seems the API can send a <location> inside a layover (undocumented)... support? 
[layovers addObject:layover]; [layover release]; } blockPosition.layovers = layovers; [layovers release]; arrival.blockPosition = blockPosition; [blockPosition release]; } [arrivals addObject:arrival]; [arrival release]; } // Add arrivals to corresponding locations for (JTWSLocation *loc in locations) { loc.arrivals = [arrivals selectWithBlock:^BOOL (id arv) { return loc.locID == ((JTWSArrival *)arv).locID; }]; } [arrivals release]; response.locations = locations; [locations release]; // Build route status models (used in inclement weather) nodes = [resultSet nodesForXPath:@"/arrivals:routeStatus" namespaceMappings:namespaceMappings error:&error]; NSMutableArray *routeStatuses = [NSMutableArray arrayWithCapacity:[nodes count]]; for (CXMLElement *stat in nodes) { JTWSRouteStatus *status = [[JTWSRouteStatus alloc] init]; status.route = [[stat attributeValueForName:@"route"] integerValue]; NSString *statusStr = [stat attributeValueForName:@"status"]; if ([statusStr isEqualToString:@"estimatedOnly"]) { status.status = JTWSRouteStatusTypeEstimatedOnly; } else if ([statusStr isEqualToString:@"off"]) { status.status = JTWSRouteStatusTypeOff; } else { NSLog(@"Unknown route status type %s in %@... response %@, routeStatus %@", status, __PRETTY_FUNCTION__, response, stat); } [routeStatuses addObject:status]; [status release]; } response.routeStatuses = routeStatuses; [routeStatuses release]; JT_BLOCKTHREAD_MAIN callback(response); JT_POP_NETWORK(); JT_BLOCKTHREAD_END JT_BLOCKTHREAD_END }

    Read the article

  • Cannot Create New Team Project TFS2010 TF249063 TF218017

    - by Kodicus
    Server: Windows 2008 R2 Standard Team Foundation Server 2010 WSS 3.0 TFS Configuration: Single Server instalation (including SharePoint) The following error occurs when trying to create a new team project from my local machine. The ://sourcecontrol site and ://sourcecontrol/sites/DefaultCollection/ site appears to be functioning fine and my user is a Site collection administrator on both. I can navigate both sites through a browser on my local machine. Thanks for your help! 2010-04-23T10:01:42 | Module: Internal | Team Foundation Server proxy retrieved | Completion time: 0 seconds 2010-04-23T10:01:42 | Module: Wizard | Retrieved IAuthorizationService proxy | Completion time: 0 seconds 2010-04-23T10:01:42 | Module: Wizard | TF30227: Project creation permissions retrieved | Completion time: 0.109382 seconds 2010-04-23T10:01:42 | Module: Internal | The template information for Team Foundation Server "sourcecontrol\DefaultCollection" was retrieved from the Team Foundation Server. | Completion time: 0.15626 seconds ---begin Exception entry--- Time: 2010-04-23T10:03:24 Module: Wizard Exception Message: TF218017: A SharePoint site could not be created for use as the team project portal. The following error occurred: TF249063: The following Web service is not available: ://sourcecontrol/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.. Verify that the following URL points to a valid SharePoint Web application and that the application is available: ://sourcecontrol. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerException) Exception Stack Trace: at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.CheckCreateSite(TfsTeamProjectCollection tfsServer, Uri adminUri, Uri siteUri) at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.ValidateSettings(ProjectCreationContext context) at Microsoft.VisualStudio.TeamFoundation.PortfolioProjectForm.OnFinish() Inner Exception Details: Exception Message: TF249063: The following Web service is not available: ://sourcecontrol/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.. Verify that the following URL points to a valid SharePoint Web application and that the application is available: ://sourcecontrol. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. 
(type TeamFoundationServiceUnavailableException) Exception Stack Trace: at Microsoft.TeamFoundation.Client.SharePoint.SharePointTeamFoundationIntegrationService.HandleException(Exception e) at Microsoft.TeamFoundation.Client.SharePoint.SharePointTeamFoundationIntegrationService.CheckUrl(String absolutePath, CheckUrlOptions options, Guid configurationServerId, Guid projectCollectionId) at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.CheckUrl(ICredentials credentials, Uri adminUrl, Uri siteUrl, CheckUrlOptions options, Guid configurationServerId, Guid projectCollectionId) at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.CheckCreateSite(TfsConnection tfs, Uri adminUrl, Uri siteUrl) at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.CheckCreateSite(TfsTeamProjectCollection tfsServer, Uri adminUri, Uri siteUri) Inner Exception Details: Exception Message: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. (type WebException) Exception Stack Trace: at System.Net.WebRequest.GetResponse() at Microsoft.TeamFoundation.Client.TeamFoundationClientProxyBase.AsyncWebRequest.ExecRequest(Object obj) Inner Exception Details: Exception Message: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. (type IOException) Exception Stack Trace: at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(WebRequest request, Boolean userRetrievedStream, Boolean probeRead) Inner Exception Details: Exception Message: An existing connection was forcibly closed by the remote host (type SocketException) Exception Stack Trace: at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) --- end Exception entry ---

    Read the article

  • Unable to import Maven project into IntelliJ IDEA

    - by del
    I'm having problems importing any Maven projects into IntelliJ IDEA. I create an empty Maven project like this: $ mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false Then I try to open the project in IDEA (File Open Project, then choose the pom.xml). A progress box saying "Reading pom.xml" displays for a few minutes, and then just disappears without opening the project. Looking in the IDEA log, I see some connection timeout exceptions like this: 2012-10-03 11:55:55,483 [ 16981] INFO - ution.rmi.RemoteProcessSupport - Port/ID: 18011/Maven2ServerImpl9407569f 2012-10-03 11:56:58,898 [ 80396] WARN - ution.rmi.RemoteProcessSupport - The cook failed to start due to java.net.ConnectException: Connection timed out 2012-10-03 11:57:55,483 [ 136981] WARN - ution.rmi.RemoteProcessSupport - java.rmi.NotBoundException: _DEAD_HAND_ 2012-10-03 11:57:55,484 [ 136982] WARN - ution.rmi.RemoteProcessSupport - at sun.rmi.registry.RegistryImpl.lookup(RegistryImpl.java:106) 2012-10-03 11:57:55,484 [ 136982] WARN - ution.rmi.RemoteProcessSupport - at com.intellij.execution.rmi.RemoteServer.start(RemoteServer.java:73) 2012-10-03 11:57:55,484 [ 136982] WARN - ution.rmi.RemoteProcessSupport - at org.jetbrains.idea.maven.server.RemoteMavenServer.main(RemoteMavenServer.java:22) 2012-10-03 11:58:01,749 [ 143247] ERROR - com.intellij.ide.IdeEventQueue - Error during dispatching of java.awt.event.MouseEvent[MOUSE_RELEASED,(65,116),absolute(64,140),button=1,modifiers=Button1,clickCount=1] on frame0 java.lang.RuntimeException: Cannot reconnect. at org.jetbrains.idea.maven.server.RemoteObjectWrapper.perform(RemoteObjectWrapper.java:82) at org.jetbrains.idea.maven.server.MavenServerManager.applyProfiles(MavenServerManager.java:311) at org.jetbrains.idea.maven.project.MavenProjectReader.applyProfiles(MavenProjectReader.java:369) at org.jetbrains.idea.maven.project.MavenProjectReader.doReadProjectModel(MavenProjectReader.java:98) at org.jetbrains.idea.maven.project.MavenProjectReader.readProject(MavenProjectReader.java:52) at org.jetbrains.idea.maven.project.MavenProject.read(MavenProject.java:405) at org.jetbrains.idea.maven.project.MavenProjectsTree.doUpdate(MavenProjectsTree.java:534) at org.jetbrains.idea.maven.project.MavenProjectsTree.doAdd(MavenProjectsTree.java:481) at org.jetbrains.idea.maven.project.MavenProjectsTree.update(MavenProjectsTree.java:442) at org.jetbrains.idea.maven.project.MavenProjectsTree.updateAll(MavenProjectsTree.java:413) at org.jetbrains.idea.maven.wizards.MavenProjectBuilder.readMavenProjectTree(MavenProjectBuilder.java:198) at org.jetbrains.idea.maven.wizards.MavenProjectBuilder.access$800(MavenProjectBuilder.java:44) at org.jetbrains.idea.maven.wizards.MavenProjectBuilder$3.run(MavenProjectBuilder.java:179) at org.jetbrains.idea.maven.utils.MavenUtil$8.run(MavenUtil.java:388) at com.intellij.openapi.progress.impl.ProgressManagerImpl$TaskRunnable.run(ProgressManagerImpl.java:469) at com.intellij.openapi.progress.impl.ProgressManagerImpl$6.run(ProgressManagerImpl.java:288) at com.intellij.openapi.progress.impl.ProgressManagerImpl$2.run(ProgressManagerImpl.java:178) at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:218) at com.intellij.openapi.progress.impl.ProgressManagerImpl.runProcess(ProgressManagerImpl.java:169) at com.intellij.openapi.application.impl.ApplicationImpl$8$1.run(ApplicationImpl.java:641) at 
com.intellij.openapi.application.impl.ApplicationImpl$6.run(ApplicationImpl.java:434) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) at com.intellij.openapi.application.impl.ApplicationImpl$1$1.run(ApplicationImpl.java:145) Caused by: java.rmi.RemoteException: Cannot start maven service; nested exception is: java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection timed out at org.jetbrains.idea.maven.server.MavenServerManager.create(MavenServerManager.java:120) at org.jetbrains.idea.maven.server.MavenServerManager.create(MavenServerManager.java:71) at org.jetbrains.idea.maven.server.RemoteObjectWrapper.getOrCreateWrappee(RemoteObjectWrapper.java:41) at org.jetbrains.idea.maven.server.MavenServerManager$8.execute(MavenServerManager.java:314) at org.jetbrains.idea.maven.server.MavenServerManager$8.execute(MavenServerManager.java:311) at org.jetbrains.idea.maven.server.RemoteObjectWrapper.perform(RemoteObjectWrapper.java:76) ... 27 more Caused by: java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection timed out at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601) at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198) at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184) at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322) at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source) at com.intellij.execution.rmi.RemoteProcessSupport$2.compute(RemoteProcessSupport.java:215) at com.intellij.execution.rmi.RemoteUtil.executeWithClassLoader(RemoteUtil.java:122) at com.intellij.execution.rmi.RemoteProcessSupport.acquire(RemoteProcessSupport.java:212) at com.intellij.execution.rmi.RemoteProcessSupport.acquire(RemoteProcessSupport.java:133) at org.jetbrains.idea.maven.server.MavenServerManager.create(MavenServerManager.java:117) ... 32 more Caused by: java.net.ConnectException: Connection timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at java.net.Socket.connect(Socket.java:478) at java.net.Socket.(Socket.java:375) at java.net.Socket.(Socket.java:189) at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22) at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128) at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595) ... 41 more I'm using the latest versions of IDEA (11.1.3) and Maven (3.0.4). Any ideas what I am doing wrong?

    Read the article

  • WCF GZip Compression Request/Response Processing

    - by IanT8
    How do I get a WCF client to process server responses which have been GZipped or Deflated by IIS? On IIS, I've followed the instructions here on how to make IIS 6 gzip all responses (where the request contained "Accept-Encoding: gzip, deflate") emitted by .svc WCF services. On the client, I've followed the instructions here and here on how to inject this header into the web request: "Accept-Encoding: gzip, deflate". Fiddler2 shows the response is binary and not plain old XML. The client crashes with an exception which basically says there's no XML header, which of course is true. In my IClientMessageInspector, the app crashes before AfterReceiveReply is called. Some further notes: (1) I can't change the WCF service or client as they are supplied by a 3rd party. I can, however, attach behaviors and/or message inspectors via configuration if this is the right direction to take. (2) I don't want to compress/uncompress just the SOAP body, but the entire message. Any ideas/solutions?

    * SOLVED * It was not possible to write a WCF extension to achieve these goals. Instead I followed this CodeProject article, which advocates a helper class:

        public class CompressibleHttpRequestCreator : IWebRequestCreate
        {
            public CompressibleHttpRequestCreator() { }

            WebRequest IWebRequestCreate.Create(Uri uri)
            {
                HttpWebRequest httpWebRequest = Activator.CreateInstance(typeof(HttpWebRequest),
                    BindingFlags.CreateInstance | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance,
                    null, new object[] { uri, null }, null) as HttpWebRequest;
                if (httpWebRequest == null) { return null; }
                httpWebRequest.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
                return httpWebRequest;
            }
        }

    and also an addition to the application configuration file:

        <configuration>
          <system.net>
            <webRequestModules>
              <remove prefix="http:"/>
              <add prefix="http:" type="Pajocomo.Net.CompressibleHttpRequestCreator, Pajocomo" />
            </webRequestModules>
          </system.net>
        </configuration>

    What seems to be happening is that WCF eventually asks some factory or other deep down in system.net to provide an HttpWebRequest instance, and we provide the helper that will be asked to create the required instance. In the WCF client configuration file, a simple basicHttpBinding is all that is required, without the need for any custom extensions. When the application runs, the client HTTP request contains the header "Accept-Encoding: gzip, deflate", the server returns a gzipped web response, and the client transparently decompresses the HTTP response before handing it over to WCF. When I tried to apply this technique to Web Services I found that it did NOT work. Although the helper class was executed in the same way as when used by the WCF client, the HTTP request did not contain the "Accept-Encoding: ..." header. To make this work for Web Services, I had to edit the Web Proxy class and add this method:

        protected override System.Net.WebRequest GetWebRequest(Uri uri)
        {
            System.Net.HttpWebRequest rq = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
            rq.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
            return rq;
        }

    Note that it did not matter whether the CompressibleHttpRequestCreator and the webRequestModules block from the application config file were present or not. For web services, only overriding GetWebRequest in the Web Service Proxy worked.
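    A minimal self-contained sketch of that web-service-side fix might look like the following (ThirdPartyService is a hypothetical name standing in for the generated ASMX proxy; the override is the only addition):

        using System;
        using System.Net;
        using System.Web.Services.Protocols;

        // Partial class extending the (hypothetical) generated proxy; only the override is added.
        public partial class ThirdPartyService : SoapHttpClientProtocol
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                // base.GetWebRequest returns an HttpWebRequest for http/https endpoints.
                HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);

                // Emits "Accept-Encoding: gzip, deflate" and transparently decompresses the response.
                request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
                return request;
            }
        }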

    Read the article

  • Event 4098, 0x80070533 Logon failure: account currently disabled?

    - by Josh King
    Having started to upgrade our PCs to Windows 7, we have noticed that we are getting group policy warnings in Event Viewer such as: "The user 'Word.qat' preference item in the 'a_Office2007_Users {A084A37B-6D4C-41C0-8AF7-B891B87FC53B}' Group Policy object did not apply because it failed with error code '0x80070533 Logon failure: account currently disabled.' This error was suppressed." 15 of these warnings appear every two hours on every Windows 7 PC; most are to do with core Office applications and two are for plug-ins to our document management system. These warnings aren't affecting the users, but it would be nice to track down their source before we roll out Win7 to the rest of the organisation. Any ideas as to where the logon issue could be coming from (all users are connecting to the domain, proxy, etc. fine)?

    Read the article

  • SSRS ReportViewer 2010 Iframe IE Problem

    - by Phil
    Hello all, my problem relates to trying to include an SSRS (SQL Server) report inside my MVC application. I've settled on the hybrid solution of having a WebForm with the ReportViewer control in it, and then having an iframe on my MVC view pages reference this WebForm page. The tricky part is that the iframe needs to be dynamically populated with the report rather than using src, because parameters are posted to the WebForm. It works perfectly in Firefox and Chrome; however, IE throws a "Sys is undefined" JavaScript error. Using src on the iframe works in IE, but I can't find a way to post parameters (I don't want to use something like /Report.aspx?param1=test due to the possible length). It's an ASP.NET MVC 2 project, .NET 4, Visual Studio 2010 on Windows 7 x64, if that's any help. So here is the code (I could provide the VS2010 project files if anyone wants them).

    In my webform:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Report.aspx.cs" Inherits="SSRSMVC.Views.Home.Report" %>
        <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <body>
          <form id="RSForm" runat="server">
            <asp:ScriptManager ID="ScriptManager" runat="server" EnablePartialRendering="true" ScriptMode="Release">
            </asp:ScriptManager>
            <asp:UpdatePanel ID="ReportViewerUP" runat="server">
              <ContentTemplate>
                <rsweb:ReportViewer ID="ReportViewer" runat="server" Width="100%" Height="380px" ProcessingMode="Local" InteractivityPostBackMode="AlwaysAsynchronous" AsyncRendering="true">
                  <LocalReport ReportPath="Models\\TestReport.rdlc">
                  </LocalReport>
                </rsweb:ReportViewer>
              </ContentTemplate>
            </asp:UpdatePanel>
          </form>
        </body>
        </html>

    And codebehind:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using Microsoft.Reporting.WebForms;

        namespace SSRSMVC.Views.Home
        {
            public partial class Report : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    if (!Page.IsPostBack)
                    {
                        string test = Request.Params["val1"];
                        ReportViewer.LocalReport.DataSources.Add(new ReportDataSource("DataSet1", new SSRSMVC.Models.DataProvider().GetData()));
                    }
                }
            }
        }

    And lastly my view page:

        <script type="text/javascript">
            $(window).load(function () {
                $.post('/Report.aspx', { val1: "Hello World" }, function (data) {
                    var rv_frame = document.getElementById('Frame1');
                    rv_frame = (rv_frame.contentWindow) ? rv_frame.contentWindow : (rv_frame.contentDocument.document) ? rv_frame.contentDocument.document : rv_frame.contentDocument;
                    rv_frame.document.open();
                    rv_frame.document.write(data);
                    rv_frame.document.close();
                });
            });
        </script>
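    As an aside, the codebehind above reads Request.Params["val1"] but never uses it; a minimal sketch (assuming the RDLC declares a report parameter named val1, which is not shown in the original post) of how the posted value could be handed to the local report:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!Page.IsPostBack)
            {
                // Value posted to the iframe's WebForm from the MVC view.
                string test = Request.Params["val1"];

                ReportViewer.LocalReport.DataSources.Add(
                    new ReportDataSource("DataSet1", new SSRSMVC.Models.DataProvider().GetData()));

                // Assumes the RDLC defines a parameter named "val1".
                ReportViewer.LocalReport.SetParameters(
                    new[] { new ReportParameter("val1", test) });
            }
        }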

    Read the article

  • HTTP response time profiling

    - by Sparsh Gupta
    Hello, I have an nginx reverse proxy. The server is serving close to 600-700 requests per second. I have a Munin HTTP load time plugin which is outputting this: http://monitor.wingify.com/munin/visualwebsiteoptimizer.com/lb1.visualwebsiteoptimizer.com-http_loadtime.html The problem is I am seeing some spikes in the graph. Expected response times should always be under 200 ms. I am keeping an eye on syslog and messages, but I am unable to figure out the actual cause. I was wondering if there is any good HTTP response time profiling system which I can install or embed alongside this nginx server to get detailed reports/logs on the breakdown of time taken by different stages and what exactly is causing the spikes. The profiling system would also help me understand bottlenecks and how I can further optimize latency. Most important right now is to investigate the cause of the spikes in the HTTP load time graphs (a similar pattern is reported by external monitors - Pingdom) and fix it to get consistent response times. Thanks

    Read the article

  • Apache failover for JBoss

    - by DaveB
    I am running a JBoss web app (AS 6 Final) hosted on Linux (Debian). I would like to implement a failover solution so that when JBoss is down, a static web page is served in its place. My current solution is to run Apache as a reverse proxy (described here), which allows me to serve .php files from Apache and forward all other requests to JBoss. But I am not sure how to make Apache step in when JBoss is down. Note: both Apache and JBoss will be running on the same box; this is application failover rather than server failover, to cover times when JBoss is redeploying, etc. So I am looking for the simplest solution really. Many thanks

    Read the article

  • Nginx compiled --with-http_spdy_module yet raise errors complains ngx_http_spdy_module

    - by c19
    [emerg] 21101#0: the "spdy" parameter requires ngx_http_spdy_module in /etc/nginx/conf.d/cc.conf

    Isn't that the same module? It also causes a multi-redirection error. I have no idea what is going on. Full configure arguments:

        nginx version: nginx/1.4.2
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-pcre --with-http_ssl_module --with-http_spdy_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-openssl=/usr/local/src/openssl-1.0.1e

    Read the article

  • Windows 7: how to let AutoConfigURL change be effective immediately?

    - by Jeroen Pluimers
    I want to set both the script and the "Use automatic configuration script" checkbox in Windows 7 using a script that immediately affects anything using wininet.dll (Internet Explorer, Chrome, .NET apps, etc.). I've done a before/after registry diff and tried doing this through the registry, but things like the reg files below require the apps to be restarted. Changing the settings by pressing the "LAN Settings" button through this Control Panel shortcut has immediate effect: %windir%\system32\control.exe Inetcpl.cpl,,4 How can I script this immediate effect? These are the registry script files:

        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
        "AutoConfigURL"="http://www.company.com/proxy.pac"

    (registry scripts borrowed from http://danny.quantumspark.net/wordpress/)

        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
        "AutoConfigURL"=-

    --jeroen (who is checking out the notify function at superuser.com)
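    A common approach, sketched here as a minimal untested example rather than a confirmed answer, is to notify wininet.dll after writing the registry value by calling InternetSetOption with INTERNET_OPTION_SETTINGS_CHANGED and INTERNET_OPTION_REFRESH, so running applications reload the proxy settings without a restart:

        using System;
        using System.Runtime.InteropServices;
        using Microsoft.Win32;

        class ProxyScriptUpdater
        {
            // wininet.dll option codes (from wininet.h).
            const int INTERNET_OPTION_REFRESH = 37;
            const int INTERNET_OPTION_SETTINGS_CHANGED = 39;

            [DllImport("wininet.dll", SetLastError = true)]
            static extern bool InternetSetOption(IntPtr hInternet, int dwOption, IntPtr lpBuffer, int dwBufferLength);

            static void Main()
            {
                // Write the auto-config script URL, as in the .reg file above.
                using (var key = Registry.CurrentUser.OpenSubKey(
                    @"Software\Microsoft\Windows\CurrentVersion\Internet Settings", writable: true))
                {
                    key.SetValue("AutoConfigURL", "http://www.company.com/proxy.pac");
                }

                // Tell wininet that the settings changed and ask it to reload them,
                // so processes using wininet pick up the new PAC URL without a restart.
                InternetSetOption(IntPtr.Zero, INTERNET_OPTION_SETTINGS_CHANGED, IntPtr.Zero, 0);
                InternetSetOption(IntPtr.Zero, INTERNET_OPTION_REFRESH, IntPtr.Zero, 0);
            }
        }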

    Read the article

  • Squid, NTLM, Windows 7 and IE8

    - by Harley
    I'm running Squid 2.7-stable4, Samba 3 and the Windows 7 RC with IE8. I have NTLM authentication setup on my squid proxy server and it works fine for every combination of browser and Windows (including IE8 on XP and Firefox on Win7), but it doesn't work (keeps asking for authentication) for IE8 on Windows 7. I can get it to work using the LmCompatibilityLevel registry hack, but I'd really prefer to get it working on the server. Does anyone have any experience with this? Or know where to start looking? The samba logs don't reveal much. EDIT: Here's what the wb-MYDOMAIN log says when I attempt to authenticate: [2009/08/20 15:13:36, 4] nsswitch/winbindd_dual.c:fork_domain_child(1080) child daemon request 13 [2009/08/20 15:13:36, 10] nsswitch/winbindd_dual.c:child_process_request(478) process_request: request fn AUTH_CRAP [2009/08/20 15:13:36, 3] nsswitch/winbindd_pam.c:winbindd_dual_pam_auth_crap(1755) [ 4127]: pam auth crap domain: MYDOMAIN user: MYUSER [2009/08/20 15:13:36, 0] nsswitch/winbindd_pam.c:winbindd_dual_pam_auth_crap(1767) winbindd_pam_auth_crap: invalid password length 24/282 [2009/08/20 15:13:36, 2] nsswitch/winbindd_pam.c:winbindd_dual_pam_auth_crap(1931) NTLM CRAP authentication for user [MYDOMAIN]\[MYUSER] returned NT_STATUS_INVALID_PARAMETER (PAM: 4) [2009/08/20 15:13:36, 10] nsswitch/winbindd_cache.c:cache_store_response(2267) Storing response for pid 4547, len 3240

    Read the article

  • using Silverlight 3's HtmlPage.Window.Navigate method to reuse an already open browser window

    - by Phil
    Hi, I want to use an external browser window to implement preview functionality in a Silverlight application. There is a list of items, and whenever the user clicks one of these items, it's opened in a separate browser window (the content is a PDF document, which is why it is handled outside of the SL app). Now, to achieve this, I simply use HtmlPage.Window.Navigate(new Uri("http://www.bing.com")); which works fine. Now my client doesn't like the fact that every click opens up a new browser window. He would like to see the browser window reused every time an item is clicked. So I went out and tried implementing this:

    Option 1 - Use the overload of the Navigate method, like so: HtmlPage.Window.Navigate(new Uri("http://www.bing.com"), "foo"); I was assuming that the window would be reused when the same target parameter value (foo) was used in subsequent calls. This does not work. I get a new window every time.

    Option 2 - Use the PopupWindow method on the HtmlPage: HtmlPage.PopupWindow(new Uri("http://www.bing.com"), "blah", new HtmlPopupWindowOptions()); This does not work. I get a new window every time.

    Option 3 - Get a handle to the opened window and reuse that in subsequent calls:

        private HtmlWindow window;

        private void navigationButton_Click(object sender, RoutedEventArgs e)
        {
            if (window == null)
                window = HtmlPage.Window.Navigate(new Uri("http://www.bing.com"), "blah");
            else
                window.Navigate(new Uri("http://www.bing.com"), "blah");

            if (window == null)
                MessageBox.Show("it's null");
        }

    This does not work. I tried the same for the PopupWindow() method and the window is null every time, so a new window is opened on every click. I have checked both the EnableHtmlAccess and the IsPopupWindowAllowed properties, and they return true, as they should.

    Option 4 - Use the Eval method to execute some custom JavaScript:

        private const string javascript = @"var popup = window.open('', 'blah') ; if(popup.location != 'http://www.bing.com' ){ popup.location = 'http://www.bing.com'; } popup.focus();";

        private void navigationButton_Click(object sender, RoutedEventArgs e)
        {
            HtmlPage.Window.Eval(javascript);
        }

    This does not work. I get a new window every time.

    Option 5 - Use CreateInstance to run some custom JavaScript on the page:

        private void navigationButton_Click(object sender, RoutedEventArgs e)
        {
            HtmlPage.Window.CreateInstance("thisIsPlainHell");
        }

    and in my aspx I have:

        function thisIsPlainHell() {
            var popup = window.open('http://www.bing.com', 'blah');
            popup.focus();
        }

    Guess what? This does work. The only thing is that the window behaves a little strangely and I'm not sure why: I'm behind a proxy, and in all other scenarios I'm prompted for my password. In this case, however, I am not (and am thus not able to reach the external site - Bing in this case). This is not really a huge issue atm, but I just don't understand what's going on here. Whenever I type another URL in the address bar of the popup window (e.g. www.google.com) and press enter, it opens up another window and prompts me for my proxy password. As a temporary solution option 5 could do, but I don't like the fact that Silverlight is not able to manage this. One of the main reasons my client has opted for Silverlight is to be protected against all the browser-specific hacking that comes with JavaScript. Am I doing something wrong? I'm definitely no JavaScript expert, so I'm hoping it's something obvious I'm missing here. Cheers, Phil
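    As an aside, a minimal sketch (assuming a hypothetical page-level helper named openInNamedWindow, analogous to thisIsPlainHell but taking the URL as an argument) of how HtmlPage.Window.Invoke can call the same kind of page script while passing parameters:

        using System.Windows;
        using System.Windows.Browser;

        public partial class PreviewControl : System.Windows.Controls.UserControl
        {
            private void navigationButton_Click(object sender, RoutedEventArgs e)
            {
                // Invoke the (hypothetical) page-level helper and hand it the URL to open;
                // the script side is expected to call window.open(url, 'blah') and focus it,
                // reusing the named window exactly as in Option 5.
                HtmlPage.Window.Invoke("openInNamedWindow", "http://www.bing.com");
            }
        }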

    Read the article

< Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >