Search Results

Search found 1208 results on 49 pages for 'endpoint'.

Page 43/49 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • Where is my app.config for SSIS?

    Sometimes when working with SSIS you need to add or change settings in the .NET application configuration file, which can be a bit confusing when you are building an SSIS package, not an application. First of all let's review a couple of examples where you may need to do this:

    - You are referencing an assembly in a Script Task that uses Enterprise Library (aka EntLib), so you need to add the relevant configuration sections and settings, perhaps for the logging application block.
    - You are using Enterprise Library in a custom task or component, and again you need to add the relevant configuration sections and settings.
    - You are using a web service with Microsoft Web Services Enhancements (WSE) 3.0 and hosting the proxy in SSIS, in an assembly used by your package, and need to add the configuration sections and settings.
    - You need to change behaviours of the .NET Framework which can be influenced by a configuration file, such as the System.Net.Mail default SMTP settings. Perhaps you wish to configure System.Net and the HttpWebRequest header for parsing unsafe headers (useUnsafeHeaderParsing), which will change the way the HTTP Connection Manager behaves (a sketch of such a file appears at the end of this post).
    - You are consuming a WCF service and wish to specify the endpoint in configuration.

    There are no doubt plenty more examples, but each of these requires us to identify the correct configuration file and make the relevant changes. There are actually several configuration files, each used by a different execution host depending on how you are working with the SSIS package.

    The folders we need to look in vary depending on the version of SQL Server as well as the processor architecture, but most are in what we can call the Binn folder. The SQL Server 2005 Binn folder is at C:\Program Files\Microsoft SQL Server\90\DTS\Binn\, compared to C:\Program Files\Microsoft SQL Server\100\DTS\Binn\ for SQL Server 2008. If you are on a 64-bit machine then you will see C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn\ for the 32-bit executables and C:\Program Files\Microsoft SQL Server\90\DTS\Binn\ for 64-bit, so be sure to check all relevant locations. Of course SQL Server 2008 may have a C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\ on a 64-bit machine too.

    To recap, the version of SQL Server determines whether you look in the 90 or 100 sub-folder under SQL Server in Program Files (C:\Program Files\Microsoft SQL Server\nn\). If you are running a 64-bit operating system then you will have two instances of Program Files, C:\Program Files (x86)\ for 32-bit and C:\Program Files\ for 64-bit. You may wish to check both depending on what you are doing, but this is covered more under each section below.

    There are a total of five specific configuration files that you may need to change, each one detailed below.

    DTExec.exe.config - DTExec.exe is the standalone command line tool used for executing SSIS packages, and therefore it is an execution host with an app.config file, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTExec.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders.

    DtsDebugHost.exe.config - DtsDebugHost.exe is the execution host used by Business Intelligence Development Studio (BIDS) / Visual Studio when executing a package from the designer in debug mode, which is the default behaviour, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DtsDebugHost.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders. This may surprise some people as Visual Studio is only 32-bit, but thankfully the debugger supports both. This can be set in the project properties: see the Run64BitRuntime property (true or false) in the Debugging pane of the Project Properties.

    dtshost.exe.config - dtshost.exe is the execution host used by what I think of as the built-in features of SQL Server, such as SQL Server Agent, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\dtshost.exe.config. This file can be found in both the 32-bit and 64-bit Binn folders.

    devenv.exe.config - Something slightly different is devenv.exe, which is Visual Studio. This configuration file may also need changing if you need a feature at design-time, such as in a Task Editor or Connection Manager editor. For Visual Studio 2005 with SQL Server 2005 it is C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\devenv.exe.config; for Visual Studio 2008 with SQL Server 2008 it is C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.config. Visual Studio is only available as 32-bit, so on a 64-bit machine you will have to look in C:\Program Files (x86)\ only.

    DTExecUI.exe.config - The DTExecUI tool can also have a configuration file, and these can be found under the Tools folders for SQL Server as shown below. A configuration file may not exist, but if you can find the matching executable you know you are in the right place, so you can go ahead and add a new file yourself.

        C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe
        C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe

    In summary we have covered the assembly configuration files for all of the standard methods of building and running an SSIS package, but obviously if you are working programmatically you will need to make the relevant modifications to your program's app.config as well.
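    As a concrete illustration, here is a minimal sketch of what one of these files could contain, using the useUnsafeHeaderParsing scenario from the list above. The section layout is the standard .NET system.net configuration schema; whether you need this particular setting depends entirely on your package:

        <?xml version="1.0"?>
        <configuration>
          <system.net>
            <settings>
              <!-- Relax HTTP header validation, which changes how the
                   HTTP Connection Manager parses server responses. -->
              <httpWebRequest useUnsafeHeaderParsing="true" />
            </settings>
          </system.net>
        </configuration>

    Saved as, say, DTExec.exe.config alongside DTExec.exe, the setting is picked up automatically the next time a package executes under that host.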

    Read the article

  • Comments on Comments

    - by Joe Mayo
    I almost tweeted a reply to Capar Kleijne's question about comments on Twitter, but realized that my opinion exceeded 140 characters. The following is based upon my experience with extremes and approaches that I find useful in code comments.

    There are a couple of extremes that I've seen, and reasons why people go the distance in each approach. The most common extreme is no comments in the code at all. A few bad reasons why this happens are because a developer is in a hurry, sloppy, or is interested in job preservation. The unfortunate result is that the code is difficult to understand and hard to maintain. The drawbacks to no comments in code are a primary reason why teachers drill the need for commenting code into our heads. This viewpoint assumes the lack of comments is bad because the code is bad, but there is another reason for not commenting that is gaining more popularity.

    I've heard and read that code should be self-documenting. Following this thought pattern, if code is well written with meaningful names, there should not be a reason for comments. An addendum to this argument is that comments are often neglected and get out-of-date, but the code is what is kept up-to-date. Presumably, if code contained very good naming, it would be easy to maintain. This is a noble perspective and I like the practice of meaningful naming of identifiers. However, I think it's also an extreme approach that doesn't cover important cases. i.e. if an identifier is named badly (subjective differences in opinion) or not changed appropriately during maintenance, then the badly named identifier is no more useful than a stale comment.

    These were the two no-comment extremes, so let's look at the too-many-comments extreme. On a regular basis, I'll see cases where the code is over-commented; not nearly as often as the no-comment scenarios, but still prevalent. These are examples of where every single line in the code is commented. These comments make the code harder to read because they get in the way of the algorithm. In most cases, the comments parrot what each line of code does. If a developer understands the language, then most statements are immediately intuitive. i.e. what use is it to say that I'm assigning foo to bar when it's clear what the code is doing? I think that over-commenting code is a waste of time that slows down initial development and maintenance. Understandably, the developers' intentions are admirable because they've had it beaten into their heads that they must comment. However, I think it's an extreme and prefer a more moderate approach.

    I don't think the extremes do justice to code because each can make maintenance harder. No comments on bad code is obviously a problem, but the other two extremes are subtle and require qualification to address properly. The problem I see with the code-as-documentation approach is that it doesn't lift the developer out of the algorithm to identify dependencies, intentions, and hacks. Any developer can read code and follow an algorithm, but they still need to know where it fits into the big picture of the application. Because of indirections with language features like interfaces, delegates, and virtual members, code can become complex. Occasionally, it's useful to point out a nuance or reason why a piece of code is there. i.e. if you're building an app that communicates via HTTP, you'll have certain headers to include for the endpoint, and it could be useful to point out why the code for setting those header values is there and how they affect the application.

    An argument against this could be that you should extract that code into a separate method with a meaningful name to describe the scenario. My problem with such an approach would be that your code base becomes even more difficult to navigate and work with, because you have all of this extra code just to make the code more meaningful. My opinion is that a simple and well-stated comment stating the reasons and intention for the code is more natural and convenient to the initial developer and maintainer. I just don't agree with the approach of going out of the way to avoid making a comment. I'm also concerned that some developers would take this approach as an excuse to not comment their bad code.

    Another area where I like comments is documentation comments. Java has them and so do C# and VB. They're convenient because we can build automated tools that extract these comments, and the extracted comments are often much better than no documentation at all. The "go read the code" answer doesn't always fulfill the need for a quick summary of an API (a short example appears at the end of this post).

    To summarize, I think that the extremes of no comments and too many comments are less than desirable approaches. I prefer documentation comments to explain each class and member (API level) and code comments as necessary to supplement well-written code.

    Joe
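    A concrete, hypothetical C# illustration of the documentation-comment point above (the method and its business rule are invented for the example): the triple-slash block can be extracted by automated tooling into API documentation, while the inline comment records intent rather than parroting the code.

        /// <summary>
        /// Calculates an order total, including tax.
        /// </summary>
        /// <param name="subtotal">Order subtotal before tax.</param>
        /// <param name="taxRate">Tax rate as a fraction, e.g. 0.08 for 8%.</param>
        /// <returns>The order total.</returns>
        public decimal CalculateTotal(decimal subtotal, decimal taxRate)
        {
            // Negative values are rejected up front; a business rule,
            // not something the code alone would make obvious.
            if (subtotal < 0m || taxRate < 0m)
                throw new ArgumentOutOfRangeException();

            return subtotal * (1m + taxRate);
        }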

    Read the article

  • Programmatically use a server as the Build Server for multiple Project Collections

    Important: with this post you create a scenario that is unsupported by Microsoft. It will break your Microsoft support for this server, so handle with care.

    I am the administrator of a TFS environment with a lot of Project Collections. In the supported configuration of TFS 2010 you need one Build Controller per Project Collection, and it is not supported to have multiple Build Controllers installed. Jim Lamb created a post on how you can modify your system to change this behaviour, but since I have so many Project Collections, I automated this with the API of TFS.

    When you install a new build server via the UI, you do the following steps:

    - Register the build service (with this you hook the Windows server into the build server environment)
    - Add a new build controller
    - Add a new build agent

    So in pseudo code, the code would look like this (a fuller driver sketch appears at the end of this post):

        foreach (projectCollection in GetAllProjectCollections)
        {
            CreateNewWindowsService();
            RegisterService();
            AddNewController();
            AddNewAgent();
        }

    The following code fragments show you the most important parts of the method implementations. Attached is the full project.

    CreateNewWindowsService

    We create a new Windows service with the SC command via the Diagnostics.Process class:

        var pi = new ProcessStartInfo("sc.exe")
        {
            Arguments = string.Format(
                "create \"{0}\" start= auto binpath= \"C:\\Program Files\\Microsoft Team Foundation Server 2010\\Tools\\TfsBuildServiceHost.exe /NamedInstance:{0}\" DisplayName= \"Visual Studio Team Foundation Build Service Host ({1})\"",
                serviceHostName, tpcName)
        };
        Process.Start(pi);

        pi.Arguments = string.Format("failure {0} reset= 86400 actions= restart/60000", serviceHostName);
        Process.Start(pi);

    RegisterService

    The trick in this method is that we set the NamedInstance static property. This property is internal, so we need to set it through reflection. To get information on these you need nice Microsoft friends and .NET Reflector.

        // Indicate which build service host instance we are using
        typeof(BuildServiceHostUtilities).Assembly
            .GetType("Microsoft.TeamFoundation.Build.Config.BuildServiceHostProcess")
            .InvokeMember(
                "NamedInstance",
                System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.SetProperty | System.Reflection.BindingFlags.Static,
                null, null, new object[] { serviceName });

        // Create the build service host
        serviceHost = buildServer.CreateBuildServiceHost(serviceName, endPoint);
        serviceHost.Save();

        // Register the build service host
        BuildServiceHostUtilities.Register(serviceHost, user, password);

    AddNewController and AddNewAgent

    Once you have the BuildServerHost, the rest is pretty straightforward. There are methods on the BuildServerHost to modify the controllers and the agents:

        controller = serviceHost.CreateBuildController(controllerName);
        agent = controller.ServiceHost.CreateBuildAgent(agentName, buildDirectory, controller);
        controller.AddBuildAgent(agent);

    You have now seen the highlights of the application. If you need it and want to have sample information when you work in this area, download the app TFS2010_RegisterBuildServerToTPCs.
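    For orientation, here is a sketch of how the pseudo code above could be driven from the TFS 2010 client object model. This is an assumption-laden outline, not the attached project: the server URL is hypothetical, and the four helper methods stand for the fragments shown above.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Framework.Client;

        class RegisterBuildServerToTpcs
        {
            static void Main()
            {
                // Connect to the TFS instance that hosts the project collections
                // (hypothetical server URL).
                var server = TfsConfigurationServerFactory.GetConfigurationServer(
                    new Uri("http://tfsserver:8080/tfs"));

                var collectionService = server.GetService<ITeamProjectCollectionService>();

                foreach (var collection in collectionService.GetCollections())
                {
                    // One service host, controller and agent per collection,
                    // mirroring the steps the UI performs.
                    CreateNewWindowsService(collection.Name);
                    RegisterService(collection.Name);
                    AddNewController(collection.Name);
                    AddNewAgent(collection.Name);
                }
            }

            // Bodies correspond to the fragments above / the attached project.
            static void CreateNewWindowsService(string tpcName) { /* sc.exe create ... */ }
            static void RegisterService(string tpcName) { /* reflection + BuildServiceHostUtilities */ }
            static void AddNewController(string tpcName) { /* serviceHost.CreateBuildController */ }
            static void AddNewAgent(string tpcName) { /* controller.ServiceHost.CreateBuildAgent */ }
        }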

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Organizations see the need for computing infrastructures that they can "rent" or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:

    - Private data (do not want to send or store sensitive data off-site)
    - High dollar investment in current infrastructure
    - Applications currently running well, but which may need additional periodic capacity
    - Current applications not designed in a stateless fashion

    In these situations, a "hybrid" approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping a majority of the computing function in an organization local while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A "hybrid" architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.

    Implementation: There are multiple methods for implementing a hybrid architecture, on a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed.

    1. Client-Centric Hybrid Patterns. In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The "client" in this case might be a web-based codeset actually stored on another system (which acts as a client, the user's device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it's the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client, such as in the web scenario.

    2. System-Centric Hybrid Patterns. Another approach is to create a distributed architecture by turning on-site systems into "services" that can be called from Windows Azure using the Service Bus or the Access Control Service (ACS) capabilities; code calls then come from a series of in-process client applications. In this pattern you move the "client" interface into the server application logic. If you do not wish to change the application itself, you can "layer" the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Description Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and it has the advantage of de-coupling your computing architecture. If each system offers a "service" of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract.

    There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.

    Connection resiliency - Applications on-premise normally have low latency and good connection properties, something you're not always guaranteed in a distributed and hybrid application. Whether a centralized client or a distributed one, the code should be able to handle extended retry logic (a sketch follows below).

    Authorization and Access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure Application Fabric feature of ACS to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.

    Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a service architecture, you need to plan for sequential message handling and lifecycle.

    Resources: How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx. General architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
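    By way of illustration, a minimal sketch of the kind of extended retry logic meant under "Connection resiliency". Nothing here is Azure-specific; the attempt count and back-off intervals are assumptions you would tune for your own application:

        using System;
        using System.Threading;

        static class Retry
        {
            // Runs an operation, retrying on failure with exponential back-off.
            public static T Execute<T>(Func<T> operation, int maxAttempts)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        // e.g. a call to a remote, Azure-hosted service
                        return operation();
                    }
                    catch (Exception)
                    {
                        if (attempt >= maxAttempts) throw;

                        // Wait 1s, 2s, 4s, ... before the next attempt.
                        Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
                    }
                }
            }
        }

    A caller would wrap each remote call, for example Retry.Execute(() => client.GetData(), 5), accepting that the call may take considerably longer under degraded connectivity.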

    Read the article

  • JQuery + WCF + HTTP 404 Error

    - by hangar18
    Hi all, I've searched high and low and finally decided to post a query here. I'm writing a very basic HTML page from which I'm trying to call a WCF service using jQuery and parse the result using JSON.

    Service: IMyDemo.cs

        [ServiceContract]
        public interface IMyDemo
        {
            [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]
            Employee DoWork();

            [OperationContract]
            [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]
            Employee GetEmp(int age, string name);
        }

        [DataContract]
        public class Employee
        {
            [DataMember]
            public int EmpId { get; set; }

            [DataMember]
            public string EmpName { get; set; }

            [DataMember]
            public int EmpSalary { get; set; }
        }

    MyDemo.svc.cs

        public Employee DoWork()
        {
            // Add your operation implementation here
            Employee obj = new Employee() { EmpSalary = 12, EmpName = "SomeName" };
            return obj;
        }

        public Employee GetEmp(int age, string name)
        {
            Employee emp = new Employee();
            if (age > 0)
                emp.EmpSalary = 12 + age;
            if (!string.IsNullOrEmpty(name))
                emp.EmpName = "Server" + name;
            return emp;
        }

    Web.config

        <system.serviceModel>
          <services>
            <service behaviorConfiguration="EmployeesBehavior" name="MySample.MyDemo">
              <endpoint address="" binding="webHttpBinding" contract="MySample.IMyDemo" behaviorConfiguration="EmployeesBehavior"/>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="EmployeesBehavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
            <endpointBehaviors>
              <behavior name="EmployeesBehavior">
                <webHttp/>
              </behavior>
            </endpointBehaviors>
          </behaviors>
          <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    MyDemo.htm

        <head>
            <title></title>
            <script type="text/javascript" language="javascript" src="Scripts/jquery-1.4.1.js"></script>
            <script type="text/javascript" language="javascript" src="Scripts/json.js"></script>
            <script type="text/javascript">
                // create a global javascript object for the AJAX defaults.
                debugger;
                var ajaxDefaults = {};
                ajaxDefaults.base = {
                    type: "POST",
                    timeout: 1000,
                    dataFilter: function (data) {
                        // see http://encosia.com/2009/06/29/never-worry-about-asp-net-ajaxs-d-again/
                        data = JSON.parse(data); // use the JSON2 library if you aren't using FF3+, IE8, Safari 3/Google Chrome
                        return data.hasOwnProperty("d") ? data.d : data;
                    },
                    error: function (xhr) {
                        // see
                        if (!xhr) return;
                        if (xhr.responseText) {
                            var response = JSON.parse(xhr.responseText);
                            // console.log works in FF + Firebug only, replace this code
                            if (response) alert(response);
                            else alert("Unknown server error");
                        }
                    }
                };
                ajaxDefaults.json = $.extend(ajaxDefaults.base, {
                    // see http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/
                    contentType: "application/json; charset=utf-8",
                    dataType: "json"
                });
                var ops = {
                    baseUrl: "/MyService/MySample/MyDemo.svc/",
                    doWork: function () {
                        // see http://api.jquery.com/jQuery.extend/
                        var ajaxOptions = $.extend(ajaxDefaults.json, {
                            url: ops.baseUrl + "DoWork",
                            data: "{}",
                            success: function (msg) {
                                console.log("success");
                                console.log(typeof msg);
                                if (typeof msg !== "undefined") {
                                    console.log(msg);
                                }
                            }
                        });
                        $.ajax(ajaxOptions);
                        return false;
                    },
                    getEmp: function () {
                        var ajaxOpts = $.extend(ajaxDefaults.json, {
                            url: ops.baseUrl + "GetEmp",
                            data: JSON.stringify({ age: 12, name: "NameName" }),
                            success: function (msg) {
                                $("span#lbl").html("age: " + msg.Age + "name:" + msg.Name);
                            }
                        });
                        $.ajax(ajaxOpts);
                        return false;
                    }
                }
            </script>
        </head>
        <body>
            <span id="lbl">abc</span>
            <br /><br />
            <input type="button" value="GetEmployee" id="btnGetEmployee" onclick="javascript:ops.getEmp();" />
        </body>

    I'm just not able to get this running. When I debug, I see the error being returned from the call is:

        Server Error in '/jQuerySample' Application.
        HTTP Error 404 - Not Found.

    Looks like I'm missing something basic here. My sample is based on this. I've been trying to fix the code for some time now, so I'd like you to take a look and see if you can figure out what it is that I'm doing wrong here. I'm able to see that the service is created when I browse to it in IE. I've also tried changing the setting as mentioned here.

    Appreciate your help. I'm gonna blog about this as soon as the issue is resolved, for the benefit of other devs.

    Thanks
    -Soni

    Read the article

  • ASP.NET WebAPI Security 4: Examples for various Authentication Scenarios

    - by Your DisplayName here!
    The Thinktecture.IdentityModel.Http repository includes a number of samples for the various authentication scenarios. All the clients follow a basic pattern: 1. acquire a client credential (a single token, multiple tokens, or username/password); 2. call the service. The service simply enumerates the claims it finds on the request and returns them to the client. I won't show that part of the code, but rather focus on steps 1 and 2.

    Basic Authentication

    This is the most basic (pun intended) scenario. My library contains a class that can create the Basic Authentication header value (see the sketch at the end of this post). Simply set username and password and you are good to go.

        var client = new HttpClient { BaseAddress = _baseAddress };

        client.DefaultRequestHeaders.Authorization =
            new BasicAuthenticationHeaderValue("alice", "alice");

        var response = client.GetAsync("identity").Result;
        response.EnsureSuccessStatusCode();

    SAML Authentication

    To integrate a Web API with an existing enterprise identity provider like ADFS, you can use SAML tokens. This is certainly not the most efficient way of calling a "lightweight service" ;) But very useful if that's what it takes to get the job done.

        private static string GetIdentityToken()
        {
            var factory = new WSTrustChannelFactory(
                new WindowsWSTrustBinding(SecurityMode.Transport),
                _idpEndpoint);
            factory.TrustVersion = TrustVersion.WSTrust13;

            var rst = new RequestSecurityToken
            {
                RequestType = RequestTypes.Issue,
                KeyType = KeyTypes.Bearer,
                AppliesTo = new EndpointAddress(Constants.Realm)
            };

            var token = factory.CreateChannel().Issue(rst) as GenericXmlSecurityToken;
            return token.TokenXml.OuterXml;
        }

        private static Identity CallService(string saml)
        {
            var client = new HttpClient { BaseAddress = _baseAddress };

            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("SAML", saml);

            var response = client.GetAsync("identity").Result;
            response.EnsureSuccessStatusCode();

            return response.Content.ReadAsAsync<Identity>().Result;
        }

    SAML to SWT conversion using the Azure Access Control Service

    Another possible option for integrating SAML-based identity providers is to use an intermediary service that allows converting the SAML token to the more compact SWT (Simple Web Token) format. This way you only need to roundtrip the SAML once and can use the SWT afterwards. The code for the conversion uses the ACS OAuth2 endpoint. The OAuth2Client class is part of my library.

        private static string GetServiceTokenOAuth2(string samlToken)
        {
            var client = new OAuth2Client(_acsOAuth2Endpoint);

            return client.RequestAccessTokenAssertion(
                samlToken,
                SecurityTokenTypes.Saml2TokenProfile11,
                Constants.Realm).AccessToken;
        }

    SWT Authentication

    When you have an identity provider that directly supports a (simple) web token, you can acquire the token directly without the conversion step. Thinktecture.IdentityServer e.g. supports the OAuth2 resource owner credential profile to issue SWT tokens.

        private static string GetIdentityToken()
        {
            var client = new OAuth2Client(_oauth2Address);

            var response = client.RequestAccessTokenUserName("bob", "abc!123", Constants.Realm);
            return response.AccessToken;
        }

        private static Identity CallService(string swt)
        {
            var client = new HttpClient { BaseAddress = _baseAddress };

            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", swt);

            var response = client.GetAsync("identity").Result;
            response.EnsureSuccessStatusCode();

            return response.Content.ReadAsAsync<Identity>().Result;
        }

    So you can see that it's pretty straightforward to implement various authentication scenarios using Web API and my authentication library. Stay tuned for more client samples!
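    As an aside, the BasicAuthenticationHeaderValue helper used in the first sample can be approximated with the framework's standard AuthenticationHeaderValue type. This is a sketch of what such a helper might look like, not the library's actual implementation (that lives in the Thinktecture repository):

        using System;
        using System.Net.Http.Headers;
        using System.Text;

        // Basic authentication is just "Basic " + base64(username + ":" + password).
        static AuthenticationHeaderValue CreateBasicHeader(string userName, string password)
        {
            var credential = Encoding.UTF8.GetBytes(userName + ":" + password);
            return new AuthenticationHeaderValue("Basic", Convert.ToBase64String(credential));
        }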

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 5: Service Client (more Flexibility with WSTrustChannelFactory)

    - by Your DisplayName here!
    See the previous posts first. WIF includes an API to manually request tokens from a token service. This gives you more control over the request and more flexibility, since you can use your own token caching scheme instead of being bound to the channel object lifetime (a short usage sketch appears at the end of this post). The API is straightforward: you first request a token from the STS and then use that token to create a channel to the relying party service. I'd recommend using the WS-Trust bindings that ship with WIF to talk to ADFS 2 – they are pre-configured to match the binding configuration of the ADFS 2 endpoints. The following code requests a token for a WCF service from ADFS 2:

        private static SecurityToken GetToken()
        {
            // Windows authentication over transport security
            var factory = new WSTrustChannelFactory(
                new WindowsWSTrustBinding(SecurityMode.Transport),
                stsEndpoint);
            factory.TrustVersion = TrustVersion.WSTrust13;

            var rst = new RequestSecurityToken
            {
                RequestType = RequestTypes.Issue,
                AppliesTo = new EndpointAddress(svcEndpoint),
                KeyType = KeyTypes.Symmetric
            };

            var channel = factory.CreateChannel();
            return channel.Issue(rst);
        }

    Afterwards, the returned token can be used to create a channel to the service. Again WIF has some helper methods here that make this very easy:

        private static void CallService(SecurityToken token)
        {
            // create binding and turn off sessions
            var binding = new WS2007FederationHttpBinding(
                WSFederationHttpSecurityMode.TransportWithMessageCredential);
            binding.Security.Message.EstablishSecurityContext = false;

            // create factory and enable WIF plumbing
            var factory = new ChannelFactory<IService>(binding, new EndpointAddress(svcEndpoint));
            factory.ConfigureChannelFactory<IService>();

            // turn off CardSpace - we already have the token
            factory.Credentials.SupportInteractive = false;

            var channel = factory.CreateChannelWithIssuedToken<IService>(token);

            channel.GetClaims().ForEach(c =>
                Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",
                    c.ClaimType,
                    c.Value,
                    c.Issuer,
                    c.OriginalIssuer));
        }

    Why is this approach more flexible? Well – some don't like the configuration voodoo. That's a valid reason for using the manual approach. You also get more control over the token request itself, since you have full control over the RST message that gets sent to the STS. One common parameter that you may want to set yourself is the appliesTo value. When you use the automatic token support in the WCF federation binding, the appliesTo is always the physical service address. This means in turn that this address will be used as the audience URI value in the SAML token, which in turn means that when you have an application that consists of multiple services, you always have to configure all physical endpoint URLs in ADFS 2 and in the WIF configuration of the service(s). Having control over the appliesTo allows you to use more symbolic realm names, e.g. the base address or a completely logical name. Since the URL is never de-referenced you have some degree of freedom here.

    In the next post we will look at the necessary code to request multiple tokens in a call chain. This is a common scenario when you first have to acquire a token from an identity provider and then have to send that on to a federation gateway or Resource STS. Stay tuned.
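    For completeness, the two helpers above compose like this - a trivial usage sketch, assuming the stsEndpoint and svcEndpoint fields are configured:

        // Request the token once; because you control its lifetime, you can
        // cache it and reuse it for as many channels as you like.
        var token = GetToken();
        CallService(token);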

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS!

    Introduction

    Recently a client I worked for had two major business-critical applications being delivered, with very little time budgeted for performance testing. We immediately hit a bottleneck when the performance testing phase started: the in-house infrastructure team could not support the hardware requirements at short notice. It was suggested that the performance testing be performed on one of the QA environments, which was a fraction of the production environment. This didn't seem right, so the team decided to turn to the cloud.

    The team took advantage of the elasticity offered by Azure. Starting with a single test agent, which was provisioned and ready for use within 30 minutes, the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved, but the highlight was that the cost of running the 'test rig' proved to be less than if it had been hosted on premise by the infrastructure team. Thank you for taking the time out to read this blog post; in this series of posts I'll try and cover, start to end, everything you need to know to use Azure to build your test rig in the cloud.

    But Why Azure? I Have My Own Data Centre...

    If the environment is provisioned in your own datacentre:
    - No matter what level of service agreement you may have with your infrastructure team, there will be downtime when the environment is patched.
    - How fast can you scale the environments up or down (keeping the enterprise processes in mind)?

    Administration, cost, flexibility and scalability are the areas you would want to think about when taking the decision between your own data centre and Azure.

    How is Microsoft's public cloud offering different from Amazon's? Microsoft's offering of the cloud is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which distinguishes Microsoft's offering from other providers such as Amazon (Amazon only offers IaaS).

    PaaS - Platform as a Service: Fills the needs of those who want to build and run custom applications as services. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications.

    IaaS - Infrastructure as a Service: Similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.

    The biggest advantage with PaaS is that you do not have to worry about maintaining the environment; you can focus all your time on solving the business problems with your solution rather than worrying about maintaining the environment. If you decide to use a VM Role on Azure, you are asking for IaaS; more on this later. There is a nice blog post here on the difference between SaaS, PaaS and IaaS.

    Now that we are convinced why we should be turning to the cloud, and specifically Azure, let's discuss the test rig.

    The Load Test Rig - Topology

    Now the moment of truth. Of course a big part of getting value from cloud computing is identifying the most adequate workloads to take to the cloud, so I've decided to try to make a load testing rig where the agents are running on Windows Azure. I'll talk you through the topology:

    - User: The user kicks off the load test run from the developer workstation on premise. This passes the request to the Test Controller.
    - Test Controller: The Test Controller is on premise, connected to the same domain as the developer workstation. As soon as the Test Controller receives the request, it makes use of the Windows Azure Connect service to orchestrate the test responsibilities to all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send workloads to the agents. In parallel, the Controller will collect the performance data from the agents, using the traditional WMI mechanisms.
    - Test Agents: The Test Agents are on the Windows Azure public cloud. As soon as the Test Controller issues instructions, the test agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the test agents, and finally the results are passed back to the controller.
    - Servers: The web server and DB server are hosted on premise in the datacentre. This is usually the case with business-critical applications, where you probably want to manage them yourself.

    Recap and What's Next?

    So, in this introduction to the series of blog posts on load testing in the cloud, I highlighted why creating a test rig in the cloud is a good idea, what advantages Windows Azure offers, and the test rig topology that I will be using. I would also like to mention that I stumbled upon this video on Azure in a nutshell; a great watch if you are new to Windows Azure. In the next post I intend to start setting up the load test environment and discuss pricing with respect to the test agent machine types that will be used in the test rig.

    Hope you enjoyed this post. If you have any recommendations on things that I should consider, or any questions or feedback, feel free to add to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora. See you in Part II.

    Read the article

  • Moving DataSets through BizTalk

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman]

    Yuck. But sometimes you have to, so here are a couple of things to bear in mind:

    Schemas

    Point a codegen tool at a WCF endpoint which exposes a DataSet and it will generate an XSD which describes the DataSet like this:

        <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
          <xs:complexType>
            <xs:annotation>
              <xs:appinfo>
                <ActualType Name="DataSet"
                            Namespace="http://schemas.datacontract.org/2004/07/System.Data"
                            xmlns="http://schemas.microsoft.com/2003/10/Serialization/" />
              </xs:appinfo>
            </xs:annotation>
            <xs:sequence>
              <xs:element ref="xs:schema" />
              <xs:any />
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    In a serialized instance, the element of type xs:schema contains a full schema which describes the structure of the DataSet – tables, columns etc. The second element, of type xs:any, contains the actual content of the DataSet, expressed as DiffGrams:

        <GetDataSetResult>
          <xs:schema id="NewDataSet" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
            <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
              <xs:complexType>
                <xs:choice minOccurs="0" maxOccurs="unbounded">
                  <xs:element name="Table1">
                    <xs:complexType>
                      <xs:sequence>
                        <xs:element name="Id" type="xs:string" minOccurs="0" />
                        <xs:element name="Name" type="xs:string" minOccurs="0" />
                        <xs:element name="Date" type="xs:string" minOccurs="0" />
                      </xs:sequence>
                    </xs:complexType>
                  </xs:element>
                </xs:choice>
              </xs:complexType>
            </xs:element>
          </xs:schema>
          <diffgr:diffgram xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
            <NewDataSet xmlns="">
              <Table1 diffgr:id="Table11" msdata:rowOrder="0" diffgr:hasChanges="inserted">
                <Id>377fdf8d-cfd1-4975-a167-2ddb41265def</Id>
                <Name>157bc287-f09b-435f-a81f-2a3b23aff8c4</Name>
                <Date>a5d78d83-6c9a-46ca-8277-f2be8d4658bf</Date>
              </Table1>
            </NewDataSet>
          </diffgr:diffgram>
        </GetDataSetResult>

    Put the XSD into a BizTalk schema and it will fail to compile, giving you the error: The 'http://www.w3.org/2001/XMLSchema:schema' element is not declared. You should be able to work around that, but I've had no luck in BizTalk Server 2006 R2 – instead you can safely change that xs:schema element to be another xs:any type:

        <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
          <xs:complexType>
            <xs:sequence>
              <xs:any />
              <xs:any />
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    (This snippet omits the annotation, but you can leave it in the schema.) For an XML instance to pass validation through the schema, you'll also need to flag the any attributes so they can contain any namespace and skip validation:

        <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
          <xs:complexType>
            <xs:sequence>
              <xs:any namespace="##any" processContents="skip" />
              <xs:any namespace="##any" processContents="skip" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    You should now have a compiling schema which can be successfully tested against a serialised DataSet.

    Transforms

    If you're mapping a DataSet element between schemas, you'll need to use the Mass Copy Functoid to populate the target node from the contents of both the xs:any type elements on the source node. This should give you a compiled map which you can test against a serialized instance. And if you have a .NET consumer on the other side of the mapped BizTalk output, it will correctly deserialize the response into a DataSet.

    Read the article

  • Application Composer Series: Where and When to use Groovy

    - by Richard Bingham
    This brief post is really intended as more of a reference than an article. The list below highlights two things: firstly, where you can add your own custom logic via groovy code (noted as "Groovy: Yes/No" against each feature), and secondly, when you might use each particular feature. Obviously this applies only where Application Composer exists, namely Fusion CRM and Oracle Sales Cloud, and is based on current (release 8) functionality.

    Field Triggers (Groovy: Yes) - React to run-time data changes. Only fired when the field is changed and upon submit.

    Object Triggers (Groovy: Yes) - To extend the standard processing logic for an object, based on record creation, updates and deletes. There is a split between these firing events, with some related to UI/ADF actions and others originating in the database.

    UI trigger points:
    - After Create - fires when a new object record is created. Commonly used to set default values for fields.
    - Before Modify - fires when the end-user tries to modify a field value. Could be used for generic warnings or extra security logic.
    - Before Invalidate - fires on the parent object when one of its child object records is created, updated, or deleted. For building in relationship logic.
    - Before Remove - fires when an attempt is made to delete an object record. Can be used to create conditions that prevent deletes.

    Database trigger points:
    - Before Insert in Database - fires before a new object is inserted into the database. Can be used to ensure a dependent record exists or check for duplicates.
    - After Insert in Database - fires after a new object is inserted into the database. Could be used to create a complementary record.
    - Before Update in Database - fires before an existing object is modified in the database. Could be used to check dependent record values.
    - After Update in Database - fires after an existing object is modified in the database. Could be used to update a complementary record.
    - Before Delete in Database - fires before an existing object is deleted from the database. Could be used to check dependent record values.
    - After Delete in Database - fires after an existing object is deleted from the database. Could be used to remove dependent records.
    - After Commit in Database - fires after the change pending for the current object (insert, update, delete) is made permanent in the current transaction. Could be used when committed data that has passed all validation is required.
    - After Changes Posted to Database - fires after all changes have been posted to the database, but before they are permanently committed. Could be used to make additional changes that will be saved as part of the current transaction.

    Field Validation (Groovy: Yes) - Displays a user-entered error message based on groovy logic validating the field value. The message is shown only when the validation logic returns false, and the logic is triggered only when tabbing out of the field on the user interface.

    Object Validation (Groovy: Yes) - Commonly used where validation is needed across multiple related fields on the object. Triggered on the submit UI action.

    Object Workflows (Groovy: Yes) - All Object Workflows are fired upon either record creation or update, along with the option of adding a custom groovy firing condition. The available workflow actions are:
    - Field Updates (Groovy: Yes) - change another field when a specified one changes. Intended as an easy way to set different run-time values (e.g. pick values for LOVs), plus the value field permits groovy logic entry.
    - E-Mail Notification (Groovy: No) - sends an email notification to specified users/roles. Templates support using run-time value tokens and rich text.
    - Task Creation (Groovy: No) - for adding standard tasks for use in the worklist functionality.
    - Outbound Message (Groovy: No) - will create and send an XML payload of the related object SDO to a specified endpoint.
    - Business Process Flow (Groovy: No) - intended for approval using the seeded process, however it can also trigger custom BPMN flows.

    Global Functions (Groovy: Yes) - Utility functions that can be called from any groovy code in Application Composer (across applications).

    Object Functions (Groovy: Yes) - Utility functions that are local to the parent object. Usually triggered from within 'Buttons and Actions' definitions in Application Composer, although they can be called from other code for that object (e.g. from a trigger).

    Add Custom Fields (Groovy: Yes) - When adding custom fields there are a few places you can include groovy logic:
    - Default Value - to add logic within setting the default value when new records are entered.
    - Conditionally Updateable - to add logic to set the field to read-only or not.
    - Conditionally Required - to add logic to set the field to required or not.
    - Formula Field - used to provide a new aggregate field that is entirely based on groovy logic and other field values.

    Simplified UI Layouts - Advanced Expressions (Groovy: Yes) - Used for creating dynamic layouts for simplified UI pages where fields and regions show/hide based on run-time context values and logic. Also includes support for the depends-on feature as a trigger.

    Related References: This Blog: Application Composer Series; Extending Sales Guide: Using Groovy Scripts; Groovy Scripting Reference Guide.

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different than on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles?

    A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications.

    So the first thing to define, of course, is the word "better" in the context of application development. Looking at a few definitions online, better means "superior quality". As it relates to this discussion then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal.

    Before going further I need to also outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don't mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices.

    For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing of the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NO-SQL implementation in the Microsoft cloud called Azure Tables.

    Now, back to cloud computing... Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it's not automatic. You can design a tightly-coupled TCP service as discussed above, and as you can imagine, it probably won't scale even if you place the service in the cloud, because it isn't using a connection pattern that will allow it to scale. [Note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn't designed to scale for the purpose of this discussion.]

    The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn't tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough).

    The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don't need to scale, then you don't need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly-coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale.

    So in my humble opinion, I conclude that a loosely coupled system is not just different than a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better...

    ... the cloud forces better design practices. My 2 cents.

    Read the article

  • Wcf Facility Metadata publishing for this service is currently disabled.

    - by cvista
    Hey, I'm trying to connect to my WCF service which is configured using Castle's WCF facility. When I go to the service in a browser I get:

        Metadata publishing for this service is currently disabled.

    which lists a load of instructions that I can't follow because the configuration isn't in the web.config. When I try to connect using VS / Add Service Reference I get:

        The HTML document does not contain Web service discovery information.
        Metadata contains a reference that cannot be resolved: 'http://s.ibzstar.com/userservices.svc'.
        Content Type application/soap+xml; charset=utf-8 was not supported by service http://s.ibzstar.com/userservices.svc. The client and service bindings may be mismatched.
        The remote server returned an error: (415) Cannot process the message because the content type 'application/soap+xml; charset=utf-8' was not the expected type 'text/xml; charset=utf-8'.
        If the service is defined in the current solution, try building the solution and adding the service reference again.

    Anyone know what I need to do to get this working? The end client is an iPhone app written using MonoTouch, if that matters - so no Castle Windsor on the client side.

    cheers

    w://

    Here's the Windsor.config from the service:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <components>
            <component id="eventServices"
                       service="IbzStar.Domain.IEventServices, IbzStar.Domain"
                       type="IbzStar.Domain.EventServices, IbzStar.Domain"
                       lifestyle="transient">
            </component>
            <component id="userServices"
                       service="IbzStar.Domain.IUserServices, IbzStar.Domain"
                       type="IbzStar.Domain.UserServices, IbzStar.Domain"
                       lifestyle="transient">
            </component>

    The Web.config section:

        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
          <services>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="IbzStar.WebServices.Service1Behavior">
                <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
                <serviceMetadata httpGetEnabled="true"/>
                <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
                <serviceDebug includeExceptionDetailInFaults="false"/>
              </behavior>
            </serviceBehaviors>
          </behaviors>

    My App_Start contains this:

        Container = new WindsorContainer(new XmlInterpreter(new ConfigResource()))
            .AddFacility<WcfFacility>()
            .Install(Configuration.FromXmlFile("Windsor.config"));

    As for the client config - I'm using the wizard to add the service.

    Read the article

  • How to solve "The ChannelDispatcher is unable to open its IChannelListener" error?

    - by kyrisu
    Hi, I'm trying to communicate between a WCF service hosted in a Windows Service and my service GUI. The problem is that when I try to execute an OperationContract method I get: "The ChannelDispatcher at 'net.tcp://localhost:7771/MyService' with contract(s) '"IContract"' is unable to open its IChannelListener."

    My app.config looks like this:

        <configuration>
          <system.serviceModel>
            <bindings>
              <netTcpBinding>
                <binding name="netTcpBinding">
                  <security>
                    <transport protectionLevel="EncryptAndSign" />
                  </security>
                </binding>
              </netTcpBinding>
            </bindings>
            <behaviors>
              <serviceBehaviors>
                <behavior name="MyServiceBehavior">
                  <serviceMetadata httpGetEnabled="true" httpGetUrl="http://localhost:7772/MyService" />
                  <serviceDebug includeExceptionDetailInFaults="true" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
            <services>
              <service behaviorConfiguration="MyServiceBehavior" name="MyService.Service">
                <endpoint address="net.tcp://localhost:7771/MyService"
                          binding="netTcpBinding"
                          bindingConfiguration="netTcpBinding"
                          name="netTcp"
                          contract="MyService.IContract" />
              </service>
            </services>
          </system.serviceModel>

    Port 7771 is listening (checked using netstat) and svcutil is able to generate configs for me. Any suggestions would be appreciated.

    Stack trace from the exception:

        Server stack trace:
        at System.ServiceModel.Channels.ServiceChannel.ThrowIfFaultUnderstood(Message reply, MessageFault fault, String action, MessageVersion version, FaultConverter faultConverter)
        at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc)
        at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
        at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
        at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
        at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

    There's one inner exception (not under Exception.InnerException but under Exception.Detail.InnerException - the ToString() method doesn't show that):

        A registration already exists for URI 'net.tcp://localhost:7771/MyService'.

    But my service specifies this URI only in the app.config file, nowhere else. In the entire solution this URI appears once on the server and once on the client.

    Read the article

  • WPF - Only want one expander open at a time in grouped Listbox

    - by Portsmouth
    I have a UserControl with a templated, grouped ListBox with expanders, and I only want one expander open at any time. I have browsed through the site but haven't found anything except binding IsExpanded to IsSelected, which isn't quite what I want. I am trying to put some code in the Expanded event that would loop through the expanders and close all the ones that aren't the expander passed in the Expanded event, but I can't seem to figure out how to get at them. I've tried ListBox.Items.Groups but didn't see how to get at them from there, and I tried ListBox.ItemContainerGenerator.ContainerFromItem (or Index) but nothing came back. Thanks.

        <ListBox Name="ListBox">
          <ListBox.GroupStyle>
            <GroupStyle>
              <GroupStyle.ContainerStyle>
                <Style TargetType="{x:Type GroupItem}">
                  <Setter Property="Template">
                    <Setter.Value>
                      <ControlTemplate TargetType="{x:Type GroupItem}">
                        <Border BorderBrush="CadetBlue" BorderThickness="1">
                          <Expander BorderThickness="0,0,0,1" Expanded="Expander_Expanded" Focusable="False"
                                    IsExpanded="{Binding IsSelected, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type ListBoxItem}}}">
                            <Expander.Header>
                              <Grid>
                                <StackPanel Height="30" Orientation="Horizontal">
                                  <TextBlock Foreground="Navy" FontWeight="Bold" Text="{Binding Path=Name}"
                                             Margin="5,0,0,0" MinWidth="200" Padding="3" VerticalAlignment="Center" />
                                  <TextBlock Foreground="Navy" FontWeight="Bold" Text=" Setups: "
                                             VerticalAlignment="Center" HorizontalAlignment="Right"/>
                                  <TextBlock Foreground="Navy" FontWeight="Bold" Text="{Binding Path=ItemCount}"
                                             VerticalAlignment="Center" HorizontalAlignment="Right" />
                                </StackPanel>
                              </Grid>
                            </Expander.Header>
                            <Expander.Content>
                              <Grid Background="white">
                                <ItemsPresenter />
                              </Grid>
                            </Expander.Content>
                            <Expander.Style>
                              <Style TargetType="{x:Type Expander}">
                                <Style.Triggers>
                                  <Trigger Property="IsMouseOver" Value="true">
                                    <Setter Property="Background">
                                      <Setter.Value>
                                        <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                                          <GradientStop Color="WhiteSmoke" Offset="0.0" />
                                          <GradientStop Color="Orange" Offset="1.0" />
                                        </LinearGradientBrush>
                                      </Setter.Value>
                                    </Setter>
                                  </Trigger>
                                  <Trigger Property="IsMouseOver" Value="false">
                                    <Setter Property="Background">
                                      <Setter.Value>
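
    One way to attack the Expanded handler, sketched below (not tied to the poster's exact tree): rather than going through ItemContainerGenerator, which returns nothing for group containers, walk the visual tree under the ListBox for Expander instances and collapse every one except the expander that raised the event. FindVisualChildren is a helper written for this illustration, and the partial class name is hypothetical:

        using System.Collections.Generic;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        public partial class MyUserControl // hypothetical code-behind for the UserControl
        {
            private void Expander_Expanded(object sender, RoutedEventArgs e)
            {
                var justExpanded = (Expander)sender;
                foreach (Expander other in FindVisualChildren<Expander>(ListBox))
                {
                    if (!ReferenceEquals(other, justExpanded))
                        other.IsExpanded = false; // collapse every group except the one just opened
                }
            }

            // Recursively yields all visual children of a given type.
            private static IEnumerable<T> FindVisualChildren<T>(DependencyObject parent)
                where T : DependencyObject
            {
                for (int i = 0; i < VisualTreeHelper.GetChildrenCount(parent); i++)
                {
                    DependencyObject child = VisualTreeHelper.GetChild(parent, i);
                    var typed = child as T;
                    if (typed != null)
                        yield return typed;
                    foreach (T descendant in FindVisualChildren<T>(child))
                        yield return descendant;
                }
            }
        }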

    Read the article

  • Get Local IP-Address using Boost.Asio

    - by MOnsDaR
    Hey, I'm currently searching for a portable way of getting the local IP addresses. Because I'm using Boost anyway, I thought it would be a good idea to use Boost.Asio for this task. There are several examples on the net which should do the trick. Examples:

    - Official Boost.Asio documentation
    - Some Asian page

    I tried both codes with just slight modifications. The code from the Boost docs was changed to resolve not "www.boost.org" but "localhost" or my hostname instead. For getting the hostname I used boost::asio::ip::host_name() or typed it directly as a string. Additionally I wrote my own code, which was a merge of the above examples and my (little) knowledge gathered from the Boost documentation and other examples. All the sources worked, but they just returned the following IP: 127.0.1.1 (that's not a typo, it's .1.1 at the end). I ran and compiled the code on Ubuntu 9.10 with GCC 4.4.1. A colleague tried the same code on his machine and got 127.0.0.2 (also not a typo...). He compiled and ran on Suse 11.0 with GCC 4.4.1 (I'm not 100% sure). I don't know if it is possible to change the localhost (127.0.0.1), but I know that neither I nor my colleague did. ifconfig says loopback uses 127.0.0.1. ifconfig also finds the public IP I am searching for (141.200.182.30 in my case, subnet is 255.255.0.0). So is this a Linux issue, and the code not as portable as I thought? Do I have to change something else, or is Boost.Asio not a workable solution for my problem at all? I know there are many questions about similar topics on Stack Overflow and other pages, but I cannot find information which is useful in my case. If you have useful links, it would be nice if you could point me to them. Thanks in advance, MOnsDaR

    PS: Here is the modified code I used from the Boost docs:

        #include <iostream>
        #include <boost/asio.hpp>

        using boost::asio::ip::tcp;

        int main()
        {
            boost::asio::io_service io_service;
            tcp::resolver resolver(io_service);
            tcp::resolver::query query(boost::asio::ip::host_name(), "");
            tcp::resolver::iterator iter = resolver.resolve(query);
            tcp::resolver::iterator end; // End marker.
            while (iter != end)
            {
                tcp::endpoint ep = *iter++;
                std::cout << ep << std::endl;
            }
            return 0;
        }

    Read the article

  • WCF custom certificate validation with BasicHttpBinding

    - by Sprklnh2o
    I have a WCF application hosted on IIS 6 that needs to:

    1. Have 2-way SSL authentication.
    2. Validate the client certificate content against some client host information.
    3. Validate that the client certificate is issued by the valid sub-CA.

    I was able to do 1) successfully. I am trying to achieve 2) and 3) by following this - basically creating a class that inherits X509CertificateValidator and overriding the Validate method with my own validation implementation (steps 2 and 3). I followed the MSDN instructions exactly; however, it seems that the Validate method is not being called. I purposely throw a SecurityAccessDeniedException in the overridden Validate method, and no exception is thrown when I try to access the service via my browser. I can still access my website with any client certificate. I also read this thread but it didn't really help. Any help would be greatly appreciated! Here's my configuration:

        <system.serviceModel>
          <services>
            <service behaviorConfiguration="SimpleServiceBehavior" name="SampleNameSpace.SampleClass">
              <endpoint address="" binding="basicHttpBinding" bindingConfiguration="NewBinding0"
                        contract="SampleNameSpace.ISampleClass" />
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="SimpleServiceBehavior">
                <serviceMetadata httpsGetEnabled="true" policyVersion="Default" />
                <serviceCredentials>
                  <clientCertificate>
                    <authentication certificateValidationMode="Custom"
                                    customCertificateValidatorType="SampleNameSpace.MyX509CertificateValidator, SampleAssembly"/>
                  </clientCertificate>
                </serviceCredentials>
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <basicHttpBinding>
              <binding name="NewBinding0">
                <security mode="Transport">
                  <transport clientCredentialType="Certificate" />
                </security>
              </binding>
            </basicHttpBinding>
          </bindings>
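
    For reference, a minimal sketch of the validator shape the MSDN walkthrough describes - the subject check and sub-CA thumbprint below are placeholders, not real values. One caveat worth checking against this symptom: with security mode="Transport" under IIS, the SSL client-certificate negotiation happens in IIS/http.sys before WCF runs, so a custom X509CertificateValidator configured this way may never be invoked; it is honored for message-level (or TransportWithMessageCredential) client certificates.

        using System.IdentityModel.Selectors;
        using System.IdentityModel.Tokens;
        using System.Security.Cryptography.X509Certificates;

        public class MyX509CertificateValidator : X509CertificateValidator
        {
            private const string SubCaThumbprint = "0123456789ABCDEF"; // hypothetical sub-CA thumbprint

            public override void Validate(X509Certificate2 certificate)
            {
                if (certificate == null)
                    throw new SecurityTokenException("No client certificate was presented.");

                // 2) Validate certificate content against known client host information.
                if (!certificate.Subject.Contains("CN=expected-client-host")) // placeholder check
                    throw new SecurityTokenException("Unexpected certificate subject.");

                // 3) Validate that the chain passes through the expected sub-CA.
                var chain = new X509Chain();
                chain.Build(certificate);
                bool issuedByExpectedSubCa = false;
                foreach (X509ChainElement element in chain.ChainElements)
                {
                    if (element.Certificate.Thumbprint == SubCaThumbprint)
                        issuedByExpectedSubCa = true;
                }
                if (!issuedByExpectedSubCa)
                    throw new SecurityTokenException("Certificate was not issued by the expected sub-CA.");
            }
        }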

    Read the article

  • Why does Sharepoint 2010 Web Reference work, but Service Reference does not

    - by Darien Ford
    SharePoint is set up to use NTLM authentication. When I reference http://myserver/Sites/Ops/_vti_bin/Lists.asmx?WSDL as a Web Reference, I can call the methods and get valid responses. When I reference the same URL as a Service Reference, the server throws an exception when calling methods. My account is admin on the SharePoint farm. This is the app.config for the service reference (mostly auto-generated):

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
          </configSections>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="ListsSoap" closeTimeout="00:01:00" openTimeout="00:01:00"
                         receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false"
                         bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                         maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                         messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                         useDefaultWebProxy="true">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <security mode="TransportCredentialOnly">
                    <transport clientCredentialType="Ntlm" />
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="http://myserver/Sites/Ops/_vti_bin/Lists.asmx"
                        binding="basicHttpBinding" bindingConfiguration="ListsSoap"
                        contract="SharepointLists.ListsSoap" name="ListsSoap" />
            </client>
          </system.serviceModel>
        </configuration>

    Sadly, the only information the exception provides is this: "Exception of type 'Microsoft.SharePoint.SoapServer.SoapServerException' was thrown." No other details. The code that I'm using is:

        public ListClass()
        {
            _Client = new SharepointLists.ListsSoapClient();
        }

        public System.Xml.Linq.XElement GetTaskList()
        {
            return _Client.GetList("Tasks");
        }

    Any thoughts? I would like to use the Service Reference rather than the Web Reference.

    UPDATE: I tried Rob's suggestion and got this error:

        HTTP GET Error
        URI: http://myserver/Sites/Ops/_vti_bin/Lists.asmx
        The document at the url http://myserver/Sites/Ops/_vti_bin/Lists.asmx was not recognized as a known document type. The error message from each known type may help you fix the problem:
        - Report from 'http://myserver/Sites/Ops/_vti_bin/Lists.asmx' is 'The document format is not recognized (the content type is 'text/html; charset=utf-8').'.
        - Report from 'DISCO Document' is 'There was an error downloading 'http://myserver/_vti_bin/Lists.asmx?disco'.'.
        - The request failed with HTTP status 404: Not Found.
        - Report from 'WSDL Document' is 'The document format is not recognized (the content type is 'text/html; charset=utf-8').'.
        - Report from 'XML Schema' is 'The document format is not recognized (the content type is 'text/html; charset=utf-8').'.
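
    In case it helps narrow things down, one commonly suggested adjustment for NTLM-protected lists.asmx behind a WCF Service Reference is raising the allowed impersonation level on the client credentials - something the old Web Reference path does not require you to do explicitly. A sketch (whether it applies depends on the farm's authentication setup; the endpoint name and list follow the question's code):

        using System.Net;
        using System.Security.Principal;

        var client = new SharepointLists.ListsSoapClient("ListsSoap");
        client.ClientCredentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials;
        client.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;

        var taskList = client.GetList("Tasks"); // same call as in the question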

    Read the article

  • ADO.NET Data Services Entity Framework request error when property setter is internal

    - by Jim Straatman
    I receive an error message when exposing an ADO.NET Data Service using an Entity Framework data model that contains an entity (called "Case") with an internal setter on a property. If I modify the setter to be public (using the entity designer), the data service works fine. I don't need the entity "Case" exposed in the data service, so I tried to limit which entities are exposed using SetEntitySetAccessRule. This didn't work, and the service endpoint fails with the same error.

        public static void InitializeService(IDataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("User", EntitySetRights.AllRead);
        }

    The error message is reported in a browser when the .svc endpoint is called. It is very generic, and reads "Request Error. The server encountered an error processing the request. See server logs for more details." Unfortunately, there are no entries in the System and Application event logs. I found this stackoverflow question that shows how to configure tracing on the service. After doing so, the following NullReferenceException error was reported in the trace log. Does anyone know how to avoid this exception when including an entity with an internal setter?

        Handling an exception. (http://msdn.microsoft.com/en-US/library/System.ServiceModel.Diagnostics.TraceHandledException.aspx)
        System.NullReferenceException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Object reference not set to an instance of an object.
        at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMemberMetadata(ResourceType resourceType, MetadataWorkspace workspace, IDictionary`2 entitySets, IDictionary`2 knownTypes)
        at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMetadata(IDictionary`2 knownTypes, IDictionary`2 entitySets)
        at System.Data.Services.Providers.BaseServiceProvider.PopulateMetadata()
        at System.Data.Services.DataService`1.CreateProvider(Type dataServiceType, Object dataSourceInstance, DataServiceConfiguration& configuration)
        at System.Data.Services.DataService`1.EnsureProviderAndConfigForRequest()
        at System.Data.Services.DataService`1.ProcessRequestForMessage(Stream messageBody)
        at SyncInvokeProcessRequestForMessage(Object , Object[] , Object[] )
        at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
        at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
        at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
        at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
        at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)
        at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)
        at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)
        at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)

    Read the article

  • oauth_verifier is not passed using DotNetOpenAuth's Webconsumer

    - by BozoJoe
    I receive back a good oauth_verifier value from the server, but it is not being passed on via the ProcessUserAuthorization call to the access_token endpoint. I'm using DotNetOpenAuth 3.3.1 and the WebConsumer implementation. The server I'm working with is using OAuth 1.0a, not 1.0.1. Do I need to force DotNetOpenAuth to use 1.0a?

        2010-01-16 13:19:44,343 [5] DEBUG DotNetOpenAuth.Messaging.Channel - After binding element processing, the received UserAuthorizationResponse (1.0.1) message is:
            oauth_verifier: dEz9lE9AA1gcdr6oCbmD
            oauth_token: vauHNVOCITlbGCuqycWn
        2010-01-16 13:19:44,346 [5] DEBUG DotNetOpenAuth.Messaging.Channel - Preparing to send AuthorizedTokenRequest (1.0) message.
        2010-01-16 13:19:44,346 [5] DEBUG DotNetOpenAuth.Messaging.Bindings - Binding element DotNetOpenAuth.OAuth.ChannelElements.OAuthHttpMethodBindingElement applied to message.
        2010-01-16 13:19:44,346 [5] DEBUG DotNetOpenAuth.Messaging.Bindings - Binding element DotNetOpenAuth.Messaging.Bindings.StandardReplayProtectionBindingElement applied to message.
        2010-01-16 13:19:44,346 [5] DEBUG DotNetOpenAuth.Messaging.Bindings - Binding element DotNetOpenAuth.Messaging.Bindings.StandardExpirationBindingElement applied to message.
        2010-01-16 13:19:44,346 [5] DEBUG DotNetOpenAuth.Messaging.Channel - Applying secrets to message to prepare for signing or signature verification.
        2010-01-16 13:19:44,348 [5] DEBUG DotNetOpenAuth.Messaging.Bindings - Signing AuthorizedTokenRequest message using HMAC-SHA1.
        2010-01-16 13:19:44,349 [5] DEBUG DotNetOpenAuth.Messaging.Bindings - Constructed signature base string: GET&http%3A%2F%2Fx-staging.indivo.org%3A8000%2Foauth%2Faccess_token&oauth_consumer_key%3Doak%26oauth_nonce%3DgPersiZV%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1263676784%26oauth_token%3DvauHNVOCITlbGCuqycWn%26oauth_version%3D1.0
        2010-01-16 13:19:44,349 [5] DEBUG DotNetOpenAuth.Messaging.Bindings - Binding element DotNetOpenAuth.OAuth.ChannelElements.SigningBindingElementChain applied to message.
        2010-01-16 13:19:44,351 [5] INFO DotNetOpenAuth.Messaging.Channel - Prepared outgoing AuthorizedTokenRequest (1.0) message for http://x-staging.indivo.org:8000/oauth/access_token:
            oauth_token: vauHNVOCITlbGCuqycWn
            oauth_consumer_key: XXXXXXmyComsumerKeyXXXXXX
            oauth_nonce: gPersiZV
            oauth_signature_method: HMAC-SHA1
            oauth_signature: xNynvr2oFlqtdoOKOl2ETiiTLGY=
            oauth_version: 1.0
            oauth_timestamp: 1263676784
        2010-01-16 13:19:44,351 [5] DEBUG DotNetOpenAuth.Messaging.Channel - Sending AuthorizedTokenRequest request.
        2010-01-16 13:19:44,351 [5] DEBUG DotNetOpenAuth.Http - HTTP GET http://x-staging.indivo.org:8000/oauth/access_token
        2010-01-16 13:20:34,657 [5] ERROR DotNetOpenAuth.Http - WebException from http://x-staging.indivo.org:8000/oauth/access_token: <h4>Internal Server Error</h4>

    A pastebin link to the log4net log
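
    Note that the log shows the outgoing AuthorizedTokenRequest marked (1.0), which is consistent with the consumer speaking plain 1.0 (where no verifier is sent). A sketch of pinning the consumer to 1.0a via the ServiceProviderDescription - the request_token and authorize paths below are guesses, since only the access_token URL appears in the log, and tokenManager stands in for your IConsumerTokenManager:

        using DotNetOpenAuth.Messaging;
        using DotNetOpenAuth.OAuth;
        using DotNetOpenAuth.OAuth.ChannelElements;

        var description = new ServiceProviderDescription
        {
            RequestTokenEndpoint = new MessageReceivingEndpoint(
                "http://x-staging.indivo.org:8000/oauth/request_token",  // assumed path
                HttpDeliveryMethods.PostRequest),
            UserAuthorizationEndpoint = new MessageReceivingEndpoint(
                "http://x-staging.indivo.org:8000/oauth/authorize",      // assumed path
                HttpDeliveryMethods.GetRequest),
            AccessTokenEndpoint = new MessageReceivingEndpoint(
                "http://x-staging.indivo.org:8000/oauth/access_token",
                HttpDeliveryMethods.GetRequest),
            ProtocolVersion = ProtocolVersion.V10a, // force 1.0a semantics, so oauth_verifier is round-tripped
            TamperProtectionElements = new ITamperProtectionChannelBindingElement[]
            {
                new HmacSha1SigningBindingElement(),
            },
        };

        var consumer = new WebConsumer(description, tokenManager);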

    Read the article

  • Silverlight 4 Setting background color of textbox on focus

    - by Sean Riedel
    I am trying to set the background color of a TextBox to white on focus using a Style. My enabled TextBox has a linear gradient background by default:

        <Setter Property="Background">
            <Setter.Value>
                <LinearGradientBrush StartPoint="0.5,0" EndPoint="0.5,1">
                    <GradientStop Color="#e8e8e8" Offset="0.0" />
                    <GradientStop Color="#f3f3f3" Offset="0.25" />
                    <GradientStop Color="#f4f4f4" Offset="0.75" />
                    <GradientStop Color="#f4f4f4" Offset="1.0" />
                </LinearGradientBrush>
            </Setter.Value>
        </Setter>

    I have a focus visual state defined:

        <VisualStateGroup x:Name="FocusStates">
            <VisualState x:Name="Focused">
                <Storyboard>
                    <DoubleAnimation Storyboard.TargetName="FocusVisualElement"
                                     Storyboard.TargetProperty="Opacity" To="1" Duration="0"/>
                </Storyboard>
            </VisualState>

    Here is the rest of the control template:

        <Border x:Name="Border" BorderThickness="{TemplateBinding BorderThickness}" CornerRadius="1"
                Opacity="1" Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}">
            <Grid>
                <Border x:Name="ReadOnlyVisualElement" Opacity="0" Background="#5EC9C9C9"/>
                <Border x:Name="MouseOverBorder" BorderThickness="1" BorderBrush="Transparent">
                    <ScrollViewer x:Name="ContentElement" Padding="{TemplateBinding Padding}"
                                  BorderThickness="0" IsTabStop="False"/>
                </Border>
            </Grid>
        </Border>
        <Border x:Name="DisabledVisualElement" Background="#A5D7D7D7" BorderBrush="#A5D7D7D7"
                BorderThickness="{TemplateBinding BorderThickness}" Opacity="0" IsHitTestVisible="False"/>
        <Border x:Name="FocusVisualElement" Background="#A5ffffff" BorderBrush="#FF72c1ec"
                BorderThickness="{TemplateBinding BorderThickness}" Margin="1" Opacity="0" IsHitTestVisible="False"/>

    On the last Border tag is where I am trying to set the background to white (#ffffff). Right now I have the opacity at A5, which does turn the background white, but it also starts to obscure the text in the box. Any higher opacity makes the text invisible, so I am pretty sure that setting the background of that border is not the right way to do this. Can I set the background color of the ContentElement somehow through a Storyboard? Thanks.
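
    One alternative worth sketching, reusing the template's existing part names: instead of pushing white through the overlay border's opacity, have the Focused state swap the main Border's Background to a plain white brush with an ObjectAnimationUsingKeyFrames, and leave FocusVisualElement (with its Background removed or made transparent) to draw only the blue focus ring:

        <VisualState x:Name="Focused">
            <Storyboard>
                <!-- focus ring only; the white fill comes from the Border swap below -->
                <DoubleAnimation Storyboard.TargetName="FocusVisualElement"
                                 Storyboard.TargetProperty="Opacity" To="1" Duration="0"/>
                <ObjectAnimationUsingKeyFrames Storyboard.TargetName="Border"
                                               Storyboard.TargetProperty="Background">
                    <DiscreteObjectKeyFrame KeyTime="0">
                        <DiscreteObjectKeyFrame.Value>
                            <SolidColorBrush Color="#FFFFFFFF"/>
                        </DiscreteObjectKeyFrame.Value>
                    </DiscreteObjectKeyFrame>
                </ObjectAnimationUsingKeyFrames>
            </Storyboard>
        </VisualState>

    The visual state manager reverts the swapped brush when the control leaves the Focused state, so the gradient comes back on blur.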

    Read the article

  • WCF Timeout issue - should there even be a socket connection?

    - by stiank81
    I have a .NET application which is split into a client and a server side. The communication between them is handled using WCF. I'm not using the automagic service references; instead I've built the connection manually, as described in the screencast by Miguel Castro. Summarized, this means that I create a console application on the server side that holds ServiceHost objects for the different services:

        var myServiceHost = new System.ServiceModel.ServiceHost(typeof(MyService), new Uri("net.tcp://localhost:8002"));
        myServiceHost.Open();

    And on the client side I have service proxies creating channels using the ChannelFactory:

        IMyService proxy = new ChannelFactory<IMyService>("MyServiceEndpoint").CreateChannel();

    The client and server side share the service contract defined in the interface IMyService. Another advantage is that I get minimal App.config files - without all the autogenerated stuff created through the service references. Example from the client side:

        <?xml version="1.0"?>
        <configuration>
          <system.serviceModel>
            <client>
              <endpoint address="net.tcp://localhost:8002/MyEndpoint" binding="netTcpBinding"
                        contract="IMyService" name="MyServiceEndpoint"/>
            </client>
          </system.serviceModel>
        </configuration>

    So - to my problem. I create the proxy once, and it holds a channel all the way through the application. However, if I leave the application unused for a few minutes, the channel times out and I get the following exception:

        The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:59.9979998'.

    How do I prevent this? I'm assuming I need to specify a higher timeout in my configuration? But I don't want it to ever time out. On the other hand - I don't want a socket connection! Do I need one? I thought I could go connectionless with WCF... What's the permanent solution and best practice for solving this? (A sketch of one option follows below.)

    - Set the timeout to "never"?
    - Create a new channel for each request? I'm assuming there is some overhead in creating the channel...
    - Increase the timeout to e.g. 5 minutes and create a new channel if the connection did time out?
    - Make it connectionless somehow? (Without the overhead of creating channels...)
    - Something else?
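
    Here is a sketch of the "new channel per request" option, assuming the question's IMyService contract and "MyServiceEndpoint" config name. The ChannelFactory is the expensive part to construct, so it is cached; individual channels are comparatively cheap:

        using System;
        using System.ServiceModel;

        static class MyServiceProxy
        {
            // Factory creation is costly; create once and reuse.
            static readonly ChannelFactory<IMyService> Factory =
                new ChannelFactory<IMyService>("MyServiceEndpoint");

            public static TResult Call<TResult>(Func<IMyService, TResult> invoke)
            {
                IMyService proxy = Factory.CreateChannel();
                var channel = (IClientChannel)proxy;
                try
                {
                    TResult result = invoke(proxy);
                    channel.Close();   // graceful close on success
                    return result;
                }
                catch
                {
                    channel.Abort();   // never Close() a faulted channel
                    throw;
                }
            }
        }

        // Usage (SomeOperation is a hypothetical contract method):
        // var answer = MyServiceProxy.Call(s => s.SomeOperation(42));

    With this shape, an idle timeout on any single channel no longer matters, since each request gets a fresh one.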

    Read the article

  • Certificate Validation Issues

    - by user298331
    Hi all, I am facing some certificate-related issues and need some help resolving them.

    Requirements:
    - security mode="TransportWithMessageCredential"
    - binding name="basicHttpEndpointBinding"
    - certificateValidationMode="ChainTrust", revocationMode="Online"

    Certificates:
    - Service certificates:
      - Transport level: XXXX.cer - the certificate name is my system's DNS name, and it chains to a root, i.e. RootTrnCA.cer. This is used to enable HTTPS, but I am not validating transport-level certificates.
      - Message level: services.ca.iim (VXXXX.Cer -- Act.Mac.Ca -- services.ca.iim)
    - Client certificates:
      - Transport level: ZZZZ.cer - the certificate name is my system's DNS name, and it chains to the same root, i.e. RootTrnCA.cer. I am ignoring transport certificate errors in code.
      - Message level: client.ca.iim (VXXXX.Cer -- Act.Mac.Ca -- client.ca.iim)

    Issues:
    1. The response message does not contain the service certificate signature in the SOAP header, so I am not able to validate the server certificate details in client code.
    2. If I use TransportWithMessageCredential with ChainTrust, I get the error: "The revocation function was unable to check revocation because the revocation server was offline."

    So please verify the service and client config below and correct me if I am wrong.

    Service config: Client config: I am attaching the certificate in code:

        objProxy.ChannelFactory.Credentials.ClientCertificate.SetCertificate(
            System.Security.Cryptography.X509Certificates.StoreLocation.LocalMachine,
            System.Security.Cryptography.X509Certificates.StoreName.My,
            X509FindType.FindBySubjectName,
            "client.ca.iim");

        <binding name="XXXXXServiceHost.Http" closeTimeout="00:01:00" openTimeout="00:01:00"
                 receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false"
                 bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                 maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                 messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                 useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="TransportWithMessageCredential">
            <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
            <message clientCredentialType="Certificate" algorithmSuite="Default" />
          </security>
        </binding>
        </basicHttpBinding>
        </bindings>
        <client>
          <endpoint address="https://XXXXXX/XXXServiceHost/MemberSvc.svc/soap11"
                    binding="basicHttpBinding" bindingConfiguration="XXXServiceHost.Http"
                    contract="ServiceReference1.IMemberIBA" name="XXXServiceHost.Http" />
        </client>
        </system.serviceModel>

    Please verify both and help me resolve the above two issues. Thanks, Babu
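
    For issue 2, a commonly suggested adjustment - whether relaxing revocation checking is acceptable depends on your security requirements - is to change revocationMode on the service's client-certificate authentication element so the offline revocation server stops failing the chain check. A sketch (the behavior name is hypothetical):

        <serviceBehaviors>
          <behavior name="ServiceCertBehavior"> <!-- hypothetical behavior name -->
            <serviceCredentials>
              <clientCertificate>
                <authentication certificateValidationMode="ChainTrust" revocationMode="NoCheck" />
              </clientCertificate>
            </serviceCredentials>
          </behavior>
        </serviceBehaviors>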

    Read the article

  • jquery and requirejs and knockout; reference requirejs object from within itself

    - by Thomas
    We use jQuery and requirejs to create a 'viewmodel' like this:

        define('vm.inkoopfactuurAanleveren',
            ['jquery', 'underscore', 'ko', 'datacontext', 'router', 'messenger', 'config', 'store'],
            function ($, _, ko, datacontext, router, messenger, config, store) {

                var isBusy = false,
                    isRefreshing = false,
                    inkoopFactuur = {
                        factuurNummer: ko.observable("AAA")
                    },
                    activate = function (routeData, callback) {
                        messenger.publish.viewModelActivated({ canleaveCallback: canLeave });
                        getNewInkoopFactuurAanleveren(callback);
                        var restricteduploader = new qq.FineUploader({
                            element: $('#restricted-fine-uploader')[0],
                            request: {
                                endpoint: 'api/InkoopFactuurAanleveren',
                                forceMultipart: true
                            },
                            multiple: false,
                            failedUploadTextDisplay: {
                                mode: 'custom',
                                maxChars: 250,
                                responseProperty: 'error',
                                enableTooltip: true
                            },
                            text: { uploadButton: 'Click or Drop' },
                            showMessage: function (message) {
                                $('#restricted-fine-uploader').append('<div class="alert alert-error">' + message + '</div>');
                            },
                            debug: true,
                            callbacks: {
                                onComplete: function (id, fileName, responseJSON) {
                                    var response = responseJSON;
                                }
                            }
                        });
                    },
                    invokeFunctionIfExists = function (callback) {
                        if (_.isFunction(callback)) { callback(); }
                    },
                    loaded = function (factuur) {
                        inkoopFactuur = factuur;
                        var ids = config.viewIds;
                        ko.applyBindings(this, getView(ids.inkoopfactuurAanleveren)); /* <----- THIS = OUT OF SCOPE! */
                    },
                    bind = function () { },
                    saved = function (success) {
                        var s = success;
                    },
                    saveCmd = ko.asyncCommand({
                        execute: function (complete) {
                            $.when(datacontext.saveNewInkoopFactuurAanleveren(inkoopFactuur))
                                .then(saved).always(complete);
                            return;
                        },
                        canExecute: function (isExecuting) {
                            return true;
                        }
                    }),
                    getView = function (viewName) {
                        return $(viewName).get(0);
                    },
                    getNewInkoopFactuurAanleveren = function (callback) {
                        if (!isRefreshing) {
                            isRefreshing = true;
                            $.when(datacontext.getNewInkoopFactuurAanleveren(dataOptions(true)))
                                .then(loaded).always(invokeFunctionIfExists(callback));
                            isRefreshing = false;
                        }
                    },
                    dataOptions = function (force) {
                        return {
                            results: inkoopFactuur,
                            // filter: sessionFilter,
                            // sortFunction: sort.sessionSort,
                            forceRefresh: force
                        };
                    },
                    canLeave = function () {
                        return true;
                    },
                    forceRefreshCmd = ko.asyncCommand({
                        execute: function (complete) {
                            //$.when(datacontext.sessions.getSessionsAndAttendance(dataOptions(true)))
                            //    .always(complete);
                            complete;
                        }
                    }),
                    init = function () {
                        // activate();
                        // Bind jQuery delegated events
                        //eventDelegates.sessionsListItem(gotoDetails);
                        //eventDelegates.sessionsFavorite(saveFavorite);
                        // Subscribe to specific changes of observables
                        //addFilterSubscriptions();
                    };

                init();

                return {
                    activate: activate,
                    canLeave: canLeave,
                    inkoopFactuur: inkoopFactuur,
                    saveCmd: saveCmd,
                    forceRefreshCmd: forceRefreshCmd,
                    bind: bind,
                    invokeFunctionIfExists: invokeFunctionIfExists
                };
            });

    On the ko.applyBindings(this, getView(ids.inkoopfactuurAanleveren)) line in the 'loaded' method, the 'this' keyword doesn't refer to the 'viewmodel' object. The 'self' keyword seems to refer to a combination of methods found across multiple 'viewmodels'. The saveCmd property is bound through Knockout, but gives an error since it cannot be found. How can ko.applyBindings get the right reference to the viewmodel? In other words, what do we need to replace the 'this' keyword with in the applyBindings call? I would imagine you can 'ask' requirejs to give us the earlier instantiated object with identifier 'vm.inkoopfactuurAanleveren', but I cannot figure out how.

    Read the article

  • the HTTP request is unauthorized with client authentication scheme 'Anonymous'

    - by user1246429
    I use the First Data web service API. I call it from a C# client with WCF, but it fails with this error message:

        System.ServiceModel.Security.MessageSecurityException: The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was ''. ---> System.Net.WebException: The remote server returned an error: (401) Unauthorized.
        at System.Net.HttpWebRequest.GetResponse()
        at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
        --- End of inner exception stack trace ---
        Server stack trace:
        at System.ServiceModel.Channels.HttpChannelUtilities.ValidateAuthentication(HttpWebRequest request, HttpWebResponse response, WebException responseException, HttpChannelFactory factory)
        at System.ServiceModel.Channels.HttpChannelUtilities.ValidateRequestReplyResponse(HttpWebRequest request, HttpWebResponse response, HttpChannelFactory factory, WebException responseException, ChannelBinding channelBinding)
        at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
        at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
        at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
        at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
        at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
        at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
        Exception rethrown at [0]:
        at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
        at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
        at com.firstdata.globalgatewaye4.api.ServiceSoap.SendAndCommit(SendAndCommitRequest request)
        at com.firstdata.globalgatewaye4.api.ServiceSoapClient.com.firstdata.globalgatewaye4.api.ServiceSoap.SendAndCommit(SendAndCommitRequest request)
        at com.firstdata.globalgatewaye4.api.ServiceSoapClient.SendAndCommit(Transaction SendAndCommitSource)

    My web.config info is below:

        <behaviors>
          <endpointBehaviors>
            <behavior name="FDGGBehavior">
              <clientCredentials>
                <clientCertificate findValue="WS1909642825._.1" storeLocation="LocalMachine"
                                   x509FindType="FindBySubjectName" storeName="TrustedPeople" />
                <serviceCertificate>
                  <authentication certificateValidationMode="PeerTrust" />
                </serviceCertificate>
              </clientCredentials>
            </behavior>
          </endpointBehaviors>
        </behaviors>
        <binding name="ServiceSoap" closeTimeout="00:01:00" openTimeout="00:01:00"
                 receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false"
                 bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                 maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                 messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                 useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="TransportWithMessageCredential">
            <transport clientCredentialType="Certificate" proxyCredentialType="Ntlm" />
            <message clientCredentialType="UserName" algorithmSuite="Default" />
          </security>
        </binding>
        <endpoint address="https://api.globalgatewaye4.firstdata.com/transaction/v11"
                  binding="basicHttpBinding" bindingConfiguration="ServiceSoap"
                  contract="com.firstdata.globalgatewaye4.api.ServiceSoap"
                  name="ServiceSoap" behaviorConfiguration="FDGGBehavior" />

    How can I resolve this?
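
    One thing worth double-checking, sketched below with placeholder values: the binding declares message-level UserName credentials, so the proxy needs those set before SendAndCommit is invoked.

        using System.ServiceModel;

        var client = new com.firstdata.globalgatewaye4.api.ServiceSoapClient("ServiceSoap");

        // The binding's <message clientCredentialType="UserName"> means the gateway
        // credentials must be supplied on the proxy. These values are placeholders.
        client.ClientCredentials.UserName.UserName = "your-gateway-id";       // assumption
        client.ClientCredentials.UserName.Password = "your-gateway-password"; // assumption

        // then call client.SendAndCommit(...) as in the question's stack trace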

    Read the article

  • Problem in running boost example blocking_udp_echo_client on MacOSX

    - by n179911
    I am trying to run blocking_udp_echo_client on Mac OS X (http://www.boost.org/doc/libs/1_35_0/doc/html/boost_asio/example/echo/blocking_udp_echo_client.cpp). I run it with the arguments 'localhost 9000', but the program crashes, and this is the line in the source which crashes:

        udp::socket s(io_service, udp::endpoint(udp::v4(), 0));

    This is the stack trace (the template arguments have been mangled by the paste):

        #0  0x918c3e42 in __kill
        #1  0x918c3e34 in kill$UNIX2003
        #2  0x9193623a in raise
        #3  0x91942679 in abort
        #4  0x940d96f9 in __gnu_debug::_Error_formatter::_M_error
        #5  0x0000e76e in __gnu_debug::_Safe_iterator::op_base*, __gnu_debug_def::list::op_base*, std::allocator::op_base*::_Safe_iterator at safe_iterator.h:124
        #6  0x00014729 in boost::asio::detail::hash_map::op_base*::bucket_type::bucket_type at hash_map.hpp:277
        #7  0x00019e97 in std::_Construct::op_base*::bucket_type, boost::asio::detail::hash_map::op_base*::bucket_type at stl_construct.h:81
        #8  0x0001a457 in std::__uninitialized_fill_n_aux::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type, unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:194
        #9  0x0001a4e1 in std::uninitialized_fill_n::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type, unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:218
        #10 0x0001a509 in std::__uninitialized_fill_n_a::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type, unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:310
        #11 0x0001aa34 in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type::_M_fill_insert at vector.tcc:365
        #12 0x0001acda in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type::insert at stl_vector.h:658
        #13 0x0001ad81 in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type::resize at stl_vector.h:427
        #14 0x0001ae3a in __gnu_debug_def::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type::resize at vector:169
        #15 0x0001b7be in boost::asio::detail::hash_map::op_base*::rehash at hash_map.hpp:221
        #16 0x0001bbeb in boost::asio::detail::hash_map::op_base*::hash_map at hash_map.hpp:67
        #17 0x0001bc74 in boost::asio::detail::reactor_op_queue::reactor_op_queue at reactor_op_queue.hpp:42
        #18 0x0001bd24 in boost::asio::detail::kqueue_reactor::kqueue_reactor at kqueue_reactor.hpp:86
        #19 0x0001c000 in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
        #20 0x0001c14d in boost::asio::use_service at io_service.ipp:195
        #21 0x0001c26d in boost::asio::detail::reactive_socket_service::reactive_socket_service at reactive_socket_service.hpp:111
        #22 0x0001c344 in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
        #23 0x0001c491 in boost::asio::use_service at io_service.ipp:195
        #24 0x0001c4d5 in boost::asio::datagram_socket_service::datagram_socket_service at datagram_socket_service.hpp:95
        #25 0x0001c59e in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
        #26 0x0001c6eb in boost::asio::use_service at io_service.ipp:195
        #27 0x0001c711 in boost::asio::basic_io_object::basic_io_object at basic_io_object.hpp:72
        #28 0x0001c783 in boost::asio::basic_socket::basic_socket at basic_socket.hpp:108
        #29 0x0001c865 in boost::asio::basic_datagram_socket::basic_datagram_socket at basic_datagram_socket.hpp:107
        #30 0x000027bc in main at main.cpp:32

    This is the gdb output:

        (gdb) continue
        /Developer/SDKs/MacOSX10.5.sdk/usr/include/c++/4.0.0/debug/safe_iterator.h:127: error:
            attempt to copy-construct an iterator from a singular iterator.
        Objects involved in the operation:
        iterator "this" @ 0x0x100420 {
            type = N11__gnu_debug14_Safe_iteratorIN10__gnu_norm14_List_iteratorISt4pairIiPN5boost4asio6detail16reactor_op_queueIiE7op_baseEEEEN15__gnu_debug_def4listISB_SaISB_EEEEE (mutable iterator);
            state = singular;
        }
        iterator "other" @ 0x0xbfffe8a4 {
            type = N11__gnu_debug14_Safe_iteratorIN10__gnu_norm14_List_iteratorISt4pairIiPN5boost4asio6detail16reactor_op_queueIiE7op_baseEEEEN15__gnu_debug_def4listISB_SaISB_EEEEE (mutable iterator);
            state = singular;
        }
        Program received signal: "SIGABRT".
        (gdb) continue
        Program received signal: "?".

    Does someone have any idea why this example does not work on Mac OS X? Thank you.

    Read the article

< Previous Page | 39 40 41 42 43 44 45 46 47 48 49  | Next Page >